entry_id: http://arxiv.org/abs/2306.07158v1
published: 2023-06-12 14:44:22
title: Riemannian Laplace approximations for Bayesian neural networks
authors: Federico Bergamin, Pablo Moreno-Muñoz, Søren Hauberg, Georgios Arvanitidis
primary_category: stat.ML
categories: stat.ML, cs.LG, stat.ME
Bayesian neural networks often approximate the weight-posterior with a Gaussian distribution. However, practical posteriors are often, even locally, highly non-Gaussian, and empirical performance deteriorates. We propose a simple parametric approximate posterior that adapts to the shape of the true posterior through a Riemannian metric that is determined by the log-posterior gradient. We develop a Riemannian Laplace approximation where samples naturally fall into weight-regions with low negative log-posterior. We show that these samples can be drawn by solving a system of ordinary differential equations, which can be done efficiently by leveraging the structure of the Riemannian metric and automatic differentiation. Empirically, we demonstrate that our approach consistently improves over the conventional Laplace approximation across tasks. We further show that, unlike the conventional Laplace approximation, our method is not overly sensitive to the choice of prior, which alleviates a practical pitfall of current approaches. § INTRODUCTION [Figure: Our Riemannian Laplace approximation is a simple parametric distribution, which is shaped according to the local loss landscape through a Riemannian metric.] Bayesian deep learning estimates the weight-posterior of a neural network given data 𝒟, i.e. p(θ | 𝒟). Due to the generally high dimension of the weight-space, the normalization of this posterior is intractable and approximate inference becomes a necessity. The most common parametric choice approximates the posterior with a Gaussian distribution, p(θ | 𝒟) ≈ q(θ | 𝒟) = 𝒩(θ | μ, Σ), which is estimated variationally <cit.>, using Laplace approximations <cit.> or with other techniques <cit.>. Empirical evidence, however, suggests that the log-posterior is not locally concave <cit.>, indicating that the Gaussian approximation is overly crude. Indeed, this approximation is known to be brittle, as the associated covariance is typically ill-conditioned, implying suboptimal behavior <cit.>, and for this reason alternative approaches have been proposed to fix this issue <cit.>. Nonetheless, the Gaussian approximation is widely used due to the many benefits of parametric distributions over, e.g., Monte Carlo sampling <cit.> or deep ensembles <cit.>. In this paper we argue that the underlying issue is not the Gaussian approximation itself, but rather the weight-space over which the approximation is applied. We show that a Gaussian approximation can locally adapt to the loss by equipping the weight-space with a simple Riemannian metric and performing the approximation tangentially to the associated manifold. Practically, this ensures that samples from the Riemannian approximate posterior land in regions of weight-space yielding low training loss, which significantly improves over the usual Gaussian approximation. We obtain our Riemannian approximate posterior using a generalization of the Laplace approximation <cit.> to general Riemannian manifolds.
Sampling from this distribution requires solving a system of ordinary differential equations, which we show can be performed efficiently by leveraging the structure of the used Riemannian metric and automatic differentiation. Empirically, we demonstrate that this significantly improves upon conventional Laplace approximations across tasks. § BACKGROUND Notation & assumptions. We consider independent and identically distributed (i.i.d.) data = {x̱_n, y̱_n}_n=1^N, consisting of inputs x̱∈^D and outputs y̱∈^C. To enable probabilistic modeling, we use a likelihood p(y̱ | f_θ(x̱)) which is either Gaussian (regression) or categorical (classification). This likelihood is parametrized by a deep neural network f_θ:^D →^C, where θ∈^K represent the weights for which we specify a Gaussian prior p(θ). The predictive distribution of a new test point x̱' equals p(y̱ | x̱') = ∫ p(y̱ | x̱', θ) p(θ | ) θ where p(θ | ) is the true weight-posterior given the data . To ensure tractability, this posterior is approximated. This paper focuses on the Laplace approximation, though the bulk of the methodology applies to other approximation techniques as well.=-1 §.§ The Laplace approximation The Laplace approximation (la) is widely considered in probabilistic models for approximating intractable densities <cit.>. The idea is to perform a second-order Taylor expansion of an unnormalized log-probability density, thereby yielding a Gaussian approximation. When considering inference of the true posterior p(θ | ), la constructs an approximate posterior distribution q_la(θ | 𝒟) = (θ | , Σ) that is centered at the maximum a-posteriori (map) estimate = _θ{log p(θ | ) } = _θ{ -∑_n=1^N log p(y̱_n  | x̱_n, θ) - log p(θ) }_(θ). A Taylor expansion around of the regularized loss (θ) then yields (θ) ≈() + θ(θ)|_θ=(θ - ) + 1/2(θ - )θ|_θ=(θ - ), where we know that ∇_θ(θ)|_θ=≈ 0, and θ∈^K× K denotes the Hessian of the loss. This expansion suggests that the approximate posterior covariance should be the inverse Hessian Σ = θ|_θ=^-1. The marginal likelihood of the data is then approximated as p() ≈exp(-())(2π)^D2(Σ)^12. This is commonly used for training hyper-parameters of both the likelihood and the prior <cit.>. We refer to appendix A for further details. Tricks of the trade. Despite the simplicity of the Laplace approximation, its application to modern neural networks is not trivial. The first issue is that the Hessian matrix is too large to be stored in memory, which is commonly handled by approximately reducing the Hessian to being diagonal, low-rank, Kronecker factored, or only considered for a subset of parameters (see <cit.> for a review). Secondly, the Hessian is generally not positive definite <cit.>, which is commonly handled by approximating the Hessian with the generalized Gauss-Newton approximation <cit.>. Furthermore, estimating the predictive distribution using Monte Carlo samples from the Laplace approximated posterior usually performs poorly <cit.><cit.> even for small models. Indeed, the Laplace approximation can place probability mass in low regions of the posterior. A solution, already proposed by <cit.>, is to consider a first-order Taylor expansion around θ_*, and use the sample to use the “linearized” function (x̱) = f_(x̱) + ∇_θ f_θ(x̱)|_θ=θ - as predictive, where ∇_θ f_θ(x̱)|_θ=∈^C× K is the Jacobian. Recently, this approach has been justified by <cit.>, who proved that the generalized Gauss-Newton approximation is the exact Hessian of this new linearized model. 
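To make the construction above concrete before turning to the linearized variant, the following is a minimal sketch of the vanilla Laplace approximation for a toy logistic regressor with a flat parameter vector. All names (neg_log_joint, theta_map), the toy data, and the optimizer settings are illustrative assumptions rather than the paper's setup.

```python
import math
import torch

# Hypothetical toy setup: X (N x D), y in {0, 1}, flat parameter vector theta,
# and prior_prec playing the role of the prior precision alpha.
torch.manual_seed(0)
N, D, prior_prec = 200, 2, 1.0
X = torch.randn(N, D)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float()

def neg_log_joint(theta):
    """L(theta) = - sum_n log p(y_n | x_n, theta) - log p(theta), up to a constant."""
    logits = X @ theta
    nll = torch.nn.functional.binary_cross_entropy_with_logits(logits, y, reduction="sum")
    return nll + 0.5 * prior_prec * theta @ theta

# MAP estimate by plain gradient-based training (stand-in for standard training).
theta = torch.zeros(D, requires_grad=True)
opt = torch.optim.Adam([theta], lr=1e-1)
for _ in range(500):
    opt.zero_grad()
    neg_log_joint(theta).backward()
    opt.step()
theta_map = theta.detach()

# Laplace approximation: covariance = inverse Hessian of L at the MAP.
H = torch.autograd.functional.hessian(neg_log_joint, theta_map)
Sigma = torch.linalg.inv(H)

# Approximate log marginal likelihood,
# log p(D) ~ -L(theta_map) + (D/2) log(2 pi) + (1/2) log det(Sigma),
# the quantity maximized when tuning prior / likelihood hyperparameters.
log_marglik = (-neg_log_joint(theta_map) + 0.5 * D * math.log(2 * math.pi)
               + 0.5 * torch.logdet(Sigma))
print(theta_map, Sigma, log_marglik.item())
```

For a model of this size the full Hessian is cheap to form and invert; for modern networks one would instead fall back on the structured Hessian approximations discussed above.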
Even if this is a linear function with respect to the parameters θ, empirically it achieves better performance than the classic Laplace approximation. Although not theoretically justified, optimizing the prior precision post-hoc has been shown to play a crucial role in the Laplace approximation <cit.>. This is usually done either using cross-validation or by maximizing the log-marginal likelihood. In principle, this regularizes the Hessian, and the associated approximate posterior concentrates around the map estimate. Strengths & weaknesses. The main strength of the Laplace approximation is its simplicity in implementation due to the popularization of automatic differentiation. The Gaussian approximate posterior is, however, quite crude and often does not capture the shape locally of the true posterior <cit.>. Furthermore, the common reduction of the Hessian to not correlate all model parameters limit the expressive power of the approximate posterior. § RIEMANNIAN LAPLACE APPROXIMATIONS We aim to construct a parametric approximate posterior that better reflects the local shape of the true posterior and captures nonlinear correlations between parameters. The basic idea is to retain the Laplace approximation but change the parameter space Θ to locally encode the training loss. To realize this idea, we will first endow the parameter space with a suitable Riemannian metric (Sec. <ref>) and then construct a Laplace approximation according to this metric (Sec. <ref>). §.§ A loss-aware Riemannian geometry [12]r0.4 [unit=1mm,width=]images/example-metric-expmap.pdf (24,4)θ (45,4)𝐯 (31,23)Exp_θ(𝐯) The parameter space Θ of the bnn together with examples of the Riemannian metric and the exponential map. Note that the Riemannian metric adapts to the shape of the loss which causes the geodesic to follow its shape. For a given parameter value θ∈Θ, we can measure the training loss (θ) of the associated neural network. Assuming that the loss changes smoothly with θ, we can interpret the loss surface = g(θ) = [θ, (θ)] ∈^K+1 as a K-dimensional manifold in ^K+1. The goal of Riemannian geometry <cit.> is to do calculations that are restricted to such manifolds. The metric.   We can think of the parameter space Θ as being the intrinsic coordinates of the manifold , and it is beneficial to do all calculations directly in these coordinates. Note that a vector tangential to the manifold can be written as J̱_g(θ)v̱∈^K+1, where J̱_g:Θ→^K+1× K is the Jacobian of g that spans the tangent space 𝒯_g(θ) at the point g(θ)∈ and v̱∈^K is the vector of tangential coordinates for this basis of the tangent space. We can take inner products between two tangent vectors in the same tangent space as ⟨J̱_g(θ) v̱_1, J̱_g(θ) v̱_2 ⟩ = v̱_1J̱_g(θ)J̱_g(θ) v̱_2, which, we note, is now expressed directly in the intrinsic coordinates. From this observation, we define the Riemannian metric M̱(θ) = J̱_g(θ)J̱_g(θ), which gives us a notion of a local inner product in the intrinsic coordinates of the manifold (see ellipsoids in Fig. <ref>). The Jacobian of g is particularly simple J̱_g(θ) = [_K, θ]^⊺, such that the metric takes the form M̱(θ) = _K + θ(θ)θ(θ). The exponential map.   A local inner product allows us to define the length of a curve c:[0,1] →Θ as [c] = ∫_0^1 √(ċ(t)M̱(c(t))ċ(t)) t, where ċ(t)=∂_t c(t) is the velocity. From this, the distance between two points can be defined as the length of the shortest connecting curve, where the latter is known as the geodesic curve. 
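As a small illustration of the metric just defined (not code from the paper), the rank-one pull-back metric and the induced local inner product can be written directly from the loss gradient, reusing the hypothetical flat-parameter loss neg_log_joint from the earlier sketch.

```python
import torch

def metric(theta):
    """Pull-back metric M(theta) = I_K + grad L(theta) grad L(theta)^T."""
    theta = theta.detach().requires_grad_(True)
    g = torch.autograd.grad(neg_log_joint(theta), theta)[0]
    return torch.eye(theta.numel()) + torch.outer(g, g)

def inner(theta, v1, v2):
    """Local inner product <v1, v2>_theta = v1^T M(theta) v2 in intrinsic coordinates."""
    return v1 @ metric(theta) @ v2
```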
Such geodesics can be expressed as solutions to a system of second-order non-linear ordinary differential equations (odes), which is given in appendix B alongside further details on geometry. Of particular interest to us is the exponential map, which solves these odes subject to an initial position and velocity. This traces out a geodesic curve with a given starting point and direction (see Fig. <ref>). Geometrically, we can also think of this as mapping a tangent vector back to the manifold, and we write the map as Exp:×θ→. The tangential coordinates v̱ can be seen as a coordinate system for the neighborhood around θ, and since the exponential map is locally a bijection we can represent any point locally with a unique tangent vector. However, these coordinates correspond to the tangent space that is spanned by J̱_g(θ), which implies that by changing this basis the associated coordinates change as well. By orthonormalizing this basis we get the normal coordinates where the metric vanishes. Let v̱ the tangential coordinates and v̱̅̅ the corresponding normal coordinates, then it holds that v̱M̱(θ)v̱ = v̱̅̅v̱̅̅⇒v̱ = A̱(θ)v̱̅̅ with A̱(θ) = M̱(θ)^-1/2. We will use the normal coordinates when doing Taylor expansions of the log-posterior, akin to standard Laplace approximations. §.§ The proposed approximate posterior In order to Taylor-expand the loss according to the metric, we first express the loss in normal coordinates of the tangent space at , h(v̱̅̅) = (M̱()^-1/2v̱̅̅). Following the standard Laplace approximation, we perform a second-order Taylor expansion of h as ĥ(v̱̅̅) ≈ h(0) + ∂_v̱̅̅h(v̱̅̅)|_v̱̅̅=0v̱̅̅ + 1/2v̱̅̅v̱̅̅h|_v̱̅̅=0v̱̅̅, where ∂_v̱̅̅h(v̱̅̅)|_v̱̅̅=0 = A̱()θ(θ)|_θ=≈ 0 as minimize the loss and v̱̅̅h|_v̱̅̅=0 = A̱()θA̱()|_θ = with θ the standard Euclidean Hessian matrix of the loss. Further details about this step can be found in appendix B. Tangential Laplace.   Similar to the standard Laplace approximation, we get a Gaussian approximate posterior q̅(v̱̅̅) = 𝒩(v̱̅̅ |  0,  Σ) on the tangent space in the normal coordinates with covariance Σ = v̱̅̅h|_v̱̅̅=0^-1. Note that changing the normal coordinates v̱̅̅ to tangential coordinates v̱ is a linear transformation and hence v̱∼𝒩(0, A̱()Σ̱A̱()), which means that this covariance is equal to θ|_θ=^-1 since A̱() is a symmetric matrix, and hence, it cancels out. The approximate posterior q_𝒯(v̱)=𝒩(v̱ | 0,Σ) in tangential coordinates, thus, matches the covariance of the standard Laplace approximation. The predictive posterior.   We can approximate the predictive posterior distribution using Monte Carlo integration as p(y | x̱', ) = ∫ p(y|x̱', , θ) q(θ) θ = ∫ p(y|x̱', , v̱) q_𝒯(v̱) v̱≈1/S∑_s=1^S p(y|x̱', , v̱_s), v̱_s ∼ q_𝒯(v̱). Intuitively, this generates tangent vectors according to the standard Laplace approximation and maps them back to the manifold by solving the geodesic ode. This lets the Riemannian approximate posterior take shape from the loss landscape, which is largely ignored by the standard Laplace approximation. We emphasize that this is a general construction that applies to the same Bayesian inference problems as the standard Laplace approximation and is not exclusive to Bayesian neural networks. The above analysis also applies to the linearized Laplace approximation. In particular, when the (x̱) is considered instead of the f_θ(x̱) the loss function in (<ref>) changes to θ. Consequently, our Riemannian metric is computed under this new loss, and θθ appears in the metric (<ref>). Example.   
To build intuition, we consider a logistic regressor on a linearly separable dataset (Fig. <ref>). The likelihood of a point x̱∈^2 to be in one class is p(C=1|x̱)=σ(x̱θ + b), where σ(·) is the function, θ∈^2 and b∈. After learning the parameters, we fix b_* and show the posterior with respect to θ together with the corresponding standard Laplace approximation (Fig. <ref>). We see that the approximation assigns significant probability mass to regions where the true posterior is near-zero, and the result of a corresponding sample is a poor classifier (Fig. <ref>). Instead, when we consider this sample as the initial velocity and compute the associated geodesic with the exponential map, we generate a sample at the tails of the true posterior which corresponds to a well-behaved model (Fig. <ref>). We also show the predictive distribution for both approaches and even if both solve easily the classification problem, our model better quantifies uncertainty (Fig. <ref>). §.§ Efficient implementation Our approach is a natural extension of the standard Laplace approximation, which locally adapts the approximate posterior to the true posterior. The caveat is that computational cost increases since we need to integrate an ode for every sample. We now discuss partial alleviations. Integrating the ode.   In general, the system of second-order nonlinear odes (see appendix B for the general form) is non-trivial as it depends on the geometry of the loss surface, which is complicated in the over-parametrized regime <cit.>. In addition, the dimensionality of the parameter space is high, which makes the solution of the system even harder. Nevertheless, due to the structure of our Riemannian metric (<ref>), the ode simplifies to c̈(t) = -θ(c(t))(1 + θ(c(t))θ(c(t)))^-1ċ(t)H_θ[ℒ](c(t)) ċ(t), which can be integrated reasonably efficiently with standard solvers. In certain cases, this ode can be further simplified, for example when we consider the linearized loss θ and Gaussian likelihood. Automatic-differentiation.   The ode (<ref>) requires computing both gradient and Hessian, which are high-dimensional objects for modern neural networks. While we need to compute the gradient explicitly, we do not need to compute and store the Hessian matrix, which is infeasible for large networks. Instead, we rely on modern automatic-differentiation frameworks to compute the Hessian-vector product between H_θ [ℒ](c(t)) and ċ(t) directly. This both reduces memory use, increases speed, and simplifies the implementation. Mini-batching.   The cost of computing the metric, and hence the ode, scales linearly with the number of training data, which can be expensive for large datasets. A reasonable approximation is to mini-batch the estimation of the metric when generating samples, i.e. construct a batch ℬ of B random data points and use the associated loss in the ode (<ref>). As usual, we assume that (θ) ≈ (N/B)_ℬ(θ). Note that we only mini-batch the metric and not the covariance of our approximate posterior q_𝒯(v̱). [22]r0.4 riem-la < g r a p h i c s > lin-riem-la < g r a p h i c s > Analysis of mini-batching We analyze the influence of mini-batching in our methods and provide empirical evidence in Fig. <ref>. In principle, the geometry of the loss surface (θ) controls the geodesics via the associated Riemannian metric, so when we consider the full dataset we expect the samples to behave similarly to f_(x̱). In other words, our approximate posterior generates weights near resulting in models with similar or even better loss. 
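The sampling procedure and the implementation points discussed in this section can be sketched compactly under the same toy assumptions as the earlier snippets: a flat parameter vector, the scalar loss neg_log_joint, the MAP estimate theta_map, and the standard Laplace covariance Sigma. A tangent vector is drawn from the Laplace covariance and mapped back to parameter space by integrating the simplified geodesic ODE, with the Hessian accessed only through Hessian-vector products. The solver choice and tolerances below are illustrative, not the paper's exact configuration.

```python
import numpy as np
import torch
from scipy.integrate import solve_ivp

def grad_and_hvp(theta, v):
    """Gradient of the loss and the Hessian-vector product H_theta[L](theta) v,
    computed without materializing the Hessian."""
    theta = theta.detach().requires_grad_(True)
    g = torch.autograd.grad(neg_log_joint(theta), theta, create_graph=True)[0]
    hv = torch.autograd.grad(g @ v, theta)[0]
    return g.detach(), hv.detach()

def geodesic_rhs(t, state, dim):
    """Simplified geodesic ODE: c'' = -grad L(c) (c'^T H c') / (1 + grad L(c)^T grad L(c))."""
    c = torch.as_tensor(state[:dim], dtype=torch.float32)
    c_dot = torch.as_tensor(state[dim:], dtype=torch.float32)
    g, hv = grad_and_hvp(c, c_dot)
    c_ddot = -g * (c_dot @ hv) / (1.0 + g @ g)
    return np.concatenate([c_dot.numpy(), c_ddot.numpy()])

def sample_posterior(theta_map, Sigma, n_samples=10):
    """Draw v ~ N(0, Sigma) on the tangent space and map it back with the exponential map."""
    dim = theta_map.numel()
    samples = []
    for _ in range(n_samples):
        v = np.random.multivariate_normal(np.zeros(dim), np.asarray(Sigma))
        state0 = np.concatenate([theta_map.numpy(), v])
        sol = solve_ivp(geodesic_rhs, (0.0, 1.0), state0, args=(dim,), method="RK45")
        samples.append(torch.as_tensor(sol.y[:dim, -1], dtype=torch.float32))
    return samples
```

Mini-batching the metric would simply amount to evaluating neg_log_joint on a rescaled random subset of the data inside geodesic_rhs, while keeping Sigma fixed.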
When we consider a batch the geometry of the associated loss surface ℒ_ℬ(θ) controls the generated geodesic. So if the batch represents well the structure of the full dataset, then the resulting model will be meaningful with respect to the original problem, and in addition, it may exhibit some variation that is beneficial from the Bayesian perspective for the quantification of the uncertainty. The same concept applies in the linearized version, with the difference that when the full dataset is considered the geometry of θ may over-regularize the geodesics. Due to the linear nature of (θ) the associated Riemannian metric is small only close to so the generated samples are similar to f_(x̱). We relax this behavior and potentially introduce variations in the resulting models when we consider a different batch whenever we generate a sample. Find more details in appendix D. § RELATED WORK Bayesian neural networks.   Exact inference for bnns is generally infeasible when the number of parameters is large. Several methods rely on approximate inference, which differs in their trade-off between computational cost and the goodness of the approximation. These techniques are usually based on the Laplace approximation <cit.>, variational inference <cit.>, dropout <cit.>, stochastic weight averaging <cit.> or Monte Carlo based methods <cit.>, where the latter is often more expensive. Laplace approximations.   In this work, we are primarily focused on Laplace approximations, although the general geometric idea can be used in combination with any other inference approach listed above. Particularly, Laplace's method for bnns was first proposed by <cit.> in his evidence framework, where a closed-form approximation of predictive probabilities was also derived. This one uses a first-order Taylor expansion, also known as linearization around the map estimate. For long, Laplace's method was infeasible for modern architectures with large networks due to the exact computation of the Hessian. The seminal works of <cit.> and <cit.> made it possible to approximate the Hessian of large networks, which made Laplace approximations feasible once more <cit.>. More recently, the Laplace approximation has become a go-to tool for turning trained neural networks into bnns in a post-hoc manner, thanks to easy-to-use software <cit.> and new approaches to scale up computation <cit.>. In this direction, other works have only considered a subset of the network parameters <cit.>, especially the last-layer. This is de facto the only current method competitive with ensembles <cit.>. Posterior refinement.   Much work has gone into building more expressive approximate posteriors. Recently, <cit.> proposed to use normalizing flows to get a non-Gaussian approximate distribution using the Laplace approximation as a base distribution. Although this requires training an additional model, they showed that few bijective transformations are enough to improve the last-layer posterior approximation. <cit.>, instead, propose to refine the Laplace approximation by using Gaussian variational Bayes or a Gaussian process. This still results in a Gaussian distribution, but it has proven beneficial for linearized Laplace approximations. Other approaches rely on a mixture of distributions to improve the goodness of the approximation. <cit.> expand a variational approximation iteratively adding components to a mixture, while <cit.> use a weighted sum of posthoc Laplace approximations generated from different pre-trained networks. 
<cit.>, instead, introduces auxiliary variables to make a local refinement of a mean-field variational approximation. Differential geometry.   Differential geometry is increasingly playing a role in inference. <cit.> make a Riemannian normal distribution locally adapt to data by learning a suitable Riemannian metric from data. In contrast, our metric is derived from the model. This is similar in spirit to work that investigates pull-back metrics in latent variable models <cit.>. In addition to that, the geometry of the latent parameter space of neural networks was recently analyzed by <cit.> focusing on the invariance of flatness measures with respect to re-parametrizations. Finally, we note that <cit.> considers Laplace approximations on the sphere as part of constructing a recursive Kalman-like filter. § EXPERIMENTS We evaluate our Riemannian la (riem-la) using illustrative examples, image datasets where we use a convolutional architecture, and real-world classification problems. We compare our method and its linearized version to standard and linearized la. All predictive distributions are approximated using Monte Carlo (MC) samples. Although last-layer la is widely used lately, we focus on approximating the posterior of all the weights of the network. In all experiments, we maximize the marginal log-likelihood to tune the hyperparameters of the prior and the likelihood as proposed in <cit.>. To evaluate the performance in terms of uncertainty estimation we considered the standard metrics in the literature: negative log-likelihood (NLL), the Brier score (BRIER), the expected calibration error (ECE), and the maximum calibration error (MCE). More experiments are available in appendix D together with the complete training and modeling details. §.§ Regression problem [22]r0.4 < g r a p h i c s > < g r a p h i c s > Posterior samples under a simple (top) and an overparametrized model (bottom). Vanilla la is known to generate bad models, while our samples from riem-la quantify well the uncertainty. We consider the toy-regression problem proposed by <cit.>. The dataset contains 200 data points, and we randomly pick 150 examples as our training set and the remaining 50 as a test set. As shown by <cit.>, using samples from the LA posterior performs poorly in regression even if the Hessian is not particularly ill-conditioned, i.e. when the prior precision is optimized. For this reason, the linearization approach is necessary for regression with standard LA. Instead, we show that even our basic approach fixes this problem when the prior is optimized. We tested our approach by considering two fully connected networks, one with one hidden layer with 15 units and one with two layers with 10 units each, both with activations. Our approach approximates well the true posterior locally, so the resulting function samples follow the data. Of course, if the Hessian is extremely degenerate our approach also suffers, as the initial velocities are huge. When we consider the linearized version of our approach the result is the same as the standard LA-linearization, which we include in the appendix D, where we also report results for in-between uncertainty as proposed by <cit.>. §.§ Classification problems Illustrative example. We consider a 2-dimensional binary classification problem using the banana dataset which is shown in Fig. <ref>. We train a 2-layer fully connected neural net with 16 hidden units per layer and activation. For all methods, we use 100 MC samples for the predictive distribution. 
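For reference, the uncertainty metrics reported in the experiments can be computed directly from the Monte Carlo predictive probabilities. The sketch below is one standard way to do so; the 15-bin ECE discretization is an illustrative choice and not necessarily the one used in the paper.

```python
import numpy as np

def uncertainty_metrics(probs, labels, n_bins=15):
    """probs: (N, C) predictive probabilities; labels: (N,) integer class labels.
    Returns NLL, Brier score, and ECE (MCE takes the max bin gap instead of the sum)."""
    N, C = probs.shape
    nll = -np.mean(np.log(probs[np.arange(N), labels] + 1e-12))
    onehot = np.eye(C)[labels]
    brier = np.mean(np.sum((probs - onehot) ** 2, axis=1))
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            acc = (pred[mask] == labels[mask]).mean()
            ece += mask.mean() * abs(acc - conf[mask].mean())
    return nll, brier, ece
```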
As in regression, direct samples from the vanilla la lead to a really poor model (Fig. <ref>) with high uncertainty both within and away from the data support. Instead, the other three methods (Fig. <ref>-<ref>) show a better-behaved confidence that decreases outside of the data support. This is also supported by the metrics in Table <ref>, where remarkably riem-la performs better in terms of NLL and Brier score on a separate test set. As we discussed in Sec. <ref>, using a subset of the dataset for computing the exponential map can be beneficial for our linearized manifold in addition to speeding up computation. In Fig. <ref> we plot the confidence for our linearized approach using batches while in appendix D we show the confidence of the same approach using the full data for solving the odes. We can see that our linearized riem-la tends to be overconfident outside the data region and also close to the decision boundary. This behaviour can be found in the high NLL that linearized riem-la gets compared to our vanilla approach and linearized la. UCI datasets.   We compare our approach against the standard la on a set of six UCI classification datasets using a fully connected network with a single layer, 50 hidden units and activation. The predictive distribution is estimated using MC with 30 samples from the approximate posterior of each approach. In Table <ref> we compare the methods in terms of their negative log-likelihood (NLL) in the test set. All other metrics are reported in appendix D. We are considering the setting where we optimize the prior-precision post-hoc, which is the optimal setting for la and linearized la. We consider our standard approaches without using batches, which we have seen that specifically for our linearized approach may lead to sub-optimal performance. From the results in Table <ref> we see that our riem-la consistently performs better in terms of negative log-likelihood than vanilla and linearized la. We also observe that in two datasets the performance of our linearized riem-la is not optimal. This implies that the loss surface of the linearized loss potentially over-regularizes the geodesics as we analyzed in Sec. <ref>, and in this case, considering mini-batching could have been beneficial. Image classification.   We consider a small convolutional neural network on MNIST and FashionMNIST. Our network consists of two convolutional layers followed by average pooling layers and three fully connected layers. We consider a model of this size as the high dimensionality of the parameter space is one of the main limitations of the ode solver. For the training of the model, we subsample each dataset and we consider 5000 observations by keeping the proportionality of labels, and we test in the full test set containing 8000 examples. In Table <ref> we compare the different methods with the prior precision optimized as this is the ideal setting for the linearized la. We refer to appendix D for the setting with the prior precision not optimized. From the results we observe that our standard riem-la performs better than all the other methods in terms of NLL and Brier score, meaning that the models are better calibrated, but it also leads to a more accurate classifier than the MAP. In terms of ECE, it seems that considering the linearized approach is beneficial in producing better-calibrated models in both datasets. This holds both for our approach linearized riem-la and the standard la. 
Optimizing the prior precision post-hoc is crucial for the vanilla la and associated results can be seen in appendix D. Instead, both our methods appear to be robust and consistent, as they achieve similar performance no matter if the prior precision is optimized or not. Note that for the mini-batches for our approaches, we consider 20% of the data by randomly selecting 1000 observations while we respect the label frequency based on the full dataset. Clearly, the batch-size is a hyperparameter for our methods and can be estimated systematically using cross-validation. Even if we do not optimize this hyperparameter, we see that our batched version of riem-la and lin-riem-la perform better than the standard la and on-par with our lin-riem-la without batches, implying that a well-tuned batch-size can potentially further improve the performance. Nevertheless, this also shows that our method is robust with respect to the batch-size. § CONCLUSION We propose an extension to the standard Laplace approximation, which leverages the natural geometry of the parameter space. Our method is parametric in the sense that a Gaussian distribution is estimated using the standard Laplace approximation, but it adapts to the true posterior through a nonparametric Riemannian metric. This is a general mechanism that, in principle, can also apply to, e.g., variational approximations. In a similar vein, while the focus of our work is on Bayesian neural networks, nothing prevents us from applying our method to other model classes. Empirically, we find that our Riemannian Laplace approximation is better or on par with alternative Laplace approximations. The standard Laplace approximation crucially relies on both linearization and on a fine-tuned prior to give useful posterior predictions. Interestingly, we find that the Riemannian Laplace approximation requires neither. This could suggest that the standard Laplace approximation has a rather poor posterior fit, which our adaptive approach alleviates. Limitations.   The main downside of our approach is the computational cost involved in integrating the ode, which is a common problem in computational geometry <cit.>. The cost of evaluating the ode scales linearly with the number of observations, and we have considered the `obvious' mini-batching solution. Empirically, we find that this introduces some stochasticity in the sampling, which can actually be helpful in the posterior exploration. The computational cost also grows with the dimensionality of the parameter space, predominantly because the number of necessary solver steps increases as well. Our implementation relies on an off-the-shelf ode solver, and we expect that significant improvements can be obtained using a tailor-made numerical integration method.=-1 This work was funded by the Innovation Fund Denmark (0175-00014B) and the Novo Nordisk Foundation through the Center for Basic Machine Learning Research in Life Science (NNF20OC0062606). It also received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research, innovation programme (757360). SH was supported in part by a research grant (42062) from VILLUM FONDEN. 
Riemannian Laplace approximations for Bayesian neural networks (Appendix) § LAPLACE BACKGROUND We provide a more detailed introduction to the Laplace approximation and discuss the optimization of the prior and likelihood hyperparameters by maximizing the marginal likelihood. In BNNs, the Laplace approximation is used to approximate the posterior distribution, i.e.: p(θ|𝒟) = p(𝒟|θ)p(θ)/p(𝒟) = p(𝒟|θ)p(θ)/∫_Θ p(𝒟|θ)p(θ) dθ = 1/Z p(𝒟|θ)p(θ). This is done by fitting a Gaussian approximation to the unnormalized distribution p(𝒟|θ)p(θ) at its peak, where p(𝒟|θ) is the likelihood and p(θ) is the prior over the weights. In standard training of neural networks the mean-squared error loss is usually used for regression, which corresponds to optimizing a Gaussian log-likelihood up to a scaling factor, while using the cross-entropy loss in classification corresponds to minimizing the negative log-likelihood. The usual weight decay, or L2 regularization, corresponds instead to a Gaussian prior p(θ). More specifically, training with λ||θ||_2^2 corresponds to a Gaussian prior 𝒩(θ; 0, (2λ)^-1𝐈). Therefore a natural peak to choose is the map estimate θ_*, which is the set of weights obtained at the end of training. Indeed, θ_* is usually computed as θ_* = argmin_θ{ -∑_n=1^N log p(y̱_n | x̱_n, θ) - log p(θ) } =: argmin_θ ℒ(θ). Once we have θ_*, the Laplace approximation uses a second-order Taylor expansion of ℒ(θ) around θ_*, which yields: ℒ(θ) ≈ ℒ(θ_*) + ∇_θℒ(θ)|_θ=θ_*^⊺(θ - θ_*) + 1/2(θ - θ_*)^⊺ H_θ[ℒ]|_θ=θ_*(θ - θ_*), where the first-order term ∇_θℒ(θ)|_θ=θ_* ≈ 0 because the gradient at θ_* vanishes. By looking at ℒ(θ), we can notice that the Hessian is composed of two terms: a data-fitting term and a prior term. Assuming p(θ) = 𝒩(θ; 0, γ^2 𝐈), the Hessian can be expressed as H_θ[ℒ]|_θ=θ_* = γ^-2𝐈 + ∑_i=1^N ∇_θ^2 [-log p(y_i| x_i, θ)]|_θ=θ_* =: 𝐇 + α𝐈, where we define α = 1/γ^2, i.e. the prior precision. Using the fact that ℒ(θ) is the negative log-numerator of (<ref>), we can recover p(𝒟|θ)p(θ) by taking the exponential of -ℒ(θ). By doing so we have: p(𝒟|θ)p(θ) ≈ exp(-ℒ(θ)) ≈ exp(-ℒ(θ_*) - 1/2 (θ - θ_*)^⊺ (𝐇 + α𝐈)(θ - θ_*)). For simplicity, we can define Σ = (𝐇 + α𝐈)^-1 and rewrite the equation above to obtain: p(𝒟|θ)p(θ) ≈ exp(-ℒ(θ_*)) exp(- 1/2 (θ - θ_*)^⊺ Σ^-1 (θ - θ_*)). We can then use this approximation to estimate the normalizing constant of our approximate posterior, which corresponds to the marginal likelihood p(𝒟): p(𝒟) = Z ≈ ∫_Θ exp(-ℒ(θ_*)) exp(- 1/2 (θ - θ_*)^⊺ Σ^-1 (θ - θ_*)) dθ = exp(-ℒ(θ_*)) ∫_Θ exp(- 1/2 (θ - θ_*)^⊺ Σ^-1 (θ - θ_*)) dθ, and by using the Gaussian integral we can write it as: p(𝒟) ≈ exp(-ℒ(θ_*))(2π)^d/2 det(Σ)^1/2. Taking the logarithm we get log p(𝒟) ≈ -ℒ(θ_*) + d/2 log(2π) + 1/2 log det(Σ), which is the approximation of the log marginal likelihood that we maximize to optimize the hyperparameters that appear in ℒ(θ). In a regression problem, we are interested in optimizing both the variance of the Gaussian likelihood and the prior precision. In classification, instead, we just have the prior precision as a hyperparameter to tune. § RIEMANNIAN GEOMETRY We rely on Riemannian geometry <cit.> in order to construct our approximate posterior. In a nutshell, a d-dimensional Riemannian manifold can be seen intuitively as a smooth d-dimensional surface that lies within a Euclidean space of dimension D>d, which allows one to compute distances between points that respect the geometry of the surface.
A Riemannian manifold is a smooth manifold together with a Riemannian metric M̱(x̱) that acts on the associated tangent space x̱ at any point x̱∈. A Riemannian metric M̱:→^dim()×dim() is a smoothly changing positive definite metric tensor that defines an inner product on the tangent space x̱ at any point x̱∈. We focus on the Bayesian neural network framework where Θ=^K is the parameter space of the associated deep network, and we consider the manifold =g(θ)=[θ, (θ)]. This is essentially the loss surface which is K-dimensional and lies within a (K+1)-dimensional Euclidean space. In order to satisfy the smoothness condition for we restrict to activation functions as the , while common loss function as the mean squared error and softmax are also smooth. The parameter space Θ is a parametrization of this surface and technically represents the intrinsic coordinates of the manifold, which is known as the global chart in the literature. The Jacobian J̱_g(θ)∈^K+1× K of the map g spans the tangent space on the manifold, and a tangent vector can be written as J̱_g(θ)v̱, where v̱∈^K are the tangential coordinates. We can thus compute the inner product between two tangent vectors in the ambient space using the Euclidean metric therein as J̱_g(θ)v̱J̱_g(θ)u̱ = v̱M̱(θ)u̱. Here the matrix M̱(θ) = J̱_g(θ)J̱_g(θ) = _K + ∇_θ(θ)∇_θ(θ) is a Riemannian metric that is known in the literature as the pull-back metric. Note that the flat space Θ is technically a smooth manifold and together with M̱(θ) is transformed into a Riemannian manifold. This can be also seen as the abstract manifold definition where we only need the intrinsic coordinates and the Riemannian metric to compute geometric quantities, and not the actual embedded manifold in the ambient space. One example of a geometric quantity is the shortest path between two points θ_1, θ_2∈Θ. Let a curve c:[0,1]→Θ with c(0)=θ_1 and c(1)=θ_2, the its length defined under the Riemannian metric as [c]=∫_0^1 √(ċ(t)M̱(c(t))ċ(t))dt. This quantity is computed intrinsically but it also corresponds to the length of the associated curve on . The shortest path then is defined as c^*(t) = _c [c], but as the length is invariant under re-parametrizations of time, we consider the energy functional instead, which we optimize using the Euler-Lagrange equations. This gives the following system of second-order nonlinear ordinary differential equations (odes) c̈(t) = -M̱^-1(c(t))/2[2[ ∂M̱(c(t))/∂ c_1(t),…,∂M̱(c(t))/∂ c_K(t)] - ∂vec[M̱(c(t))]/∂ c(t)] (ċ(t)⊗ċ(t)), where vec[·] stacks the columns of a matrix and ⊗ the Kronocher product <cit.>. A curve that satisfies this system is known as geodesic and is potentially the shortest path. When the system is solved as a Boundary Value Problem (bvp) with initial condition c(0)=θ_1 and c(1)=θ_2, we get the geodesic that connects these two points. Let v̱=ċ(0) be the velocity of this curve at t=0. When the system is solved as an Initial Value Problem (IVP) with conditions c(0)=θ_1 and ċ(0)=v̱, we get the geodesic c_v̱(t) between c_v̱(0)=θ_1 and c_v̱(1)=θ_2. This operation is known as the exponential map and we use it for our approximate posterior. In general, an analytic solution for this system of odes does not exist <cit.>, and hence, we rely on an approximate numerical off-the-shelf ode solver. Note that this is a highly complicated system, especially when the Riemannian metric depends on a finite data set, while it is computationally expensive to evaluate it. 
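The simplification derived next relies on the Sherman–Morrison identity to invert the rank-one-perturbed metric in closed form. A quick numerical sanity check of that identity (illustrative, not from the paper):

```python
import numpy as np

# For M = I + g g^T, Sherman-Morrison gives M^{-1} = I - g g^T / (1 + g^T g).
rng = np.random.default_rng(0)
K = 5
g = rng.standard_normal(K)
M = np.eye(K) + np.outer(g, g)
M_inv_sm = np.eye(K) - np.outer(g, g) / (1.0 + g @ g)
assert np.allclose(M_inv_sm, np.linalg.inv(M))
```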
As we show below, the structure of the Riemannian metric that we consider, allows us to simplify significantly this system. For the Riemannian metric in (<ref>) the general odes system in becomes c̈(t) = -θ(c(t))/1 + θ(c(t))θ(c(t))ċ(t)H_θ[](c(t))ċ(t) We consider the general system, and we compute each term individually. To simplify notation we will use M̱ := M̱(c(t)), ∇ := θ(c(t)), := H_θ[](c(t)), _i the i-th column of the Hessian and ∇_i the i-th element of the gradient ∇. Using the Sherman–Morrison formula we have that M̱(c(t))^-1 = _K - ∇_θ(c(t)) ∇_θ(c(t))/1 + ∇_θ(c(t))∇_θ(c(t)) The first term in the brackets is 2[∂M̱(c(t))/∂ c_1(t),…,∂M̱(c(t))/∂ c_K(t)]= 2[_1∇ + ∇_1, …, _K∇ + ∇_K]_D× D^2 and the second term in the brackets is ∂vec[M̱(c(t))]/∂ c(t) = [∇_1 + _1∇, …, ∇_K + _K∇] and their difference is equal to [2∇_1 + _1∇ - ∇_1, …, 2∇_K + _K∇ - ∇_K]. We compute the matrix-vector product between the matrix (<ref>) and the Kronocker product ċ(t)⊗ċ(t)=[ċ_1 ċ, …, ċ_K ċ]∈^D^2 × 1 which gives ∑_i=1^K 2∇_iċċ_i + _i∇ċċ_i - ∇_iċċ_i =2∇∑_i=1^K ċ_i _iċ + ∇ċ∑_i=1^K_i ċ_i - ċ∑_i=1^K∇_i ċ_i = 2 ∇ċċ where we used that ∑_i=1^K_iċ_i = ċ and ∑_i=1^K∇_i ċ_i = ∇ċ. As a final step, we plug-in this result and the inverse of the metric in the general system which gives c̈ = -1/2(_K - ∇∇/1 + ∇∇)2∇ċċ = -∇/1+∇∇ċċ. Taylor expansion.  As regards the Taylor expansion we consider the space Θ and the Riemannian metric therein M̱(θ) and an arbitrary smooth function f:Θ→. If we ignore the Riemannian metric the second-order approximation of the function f around a point x̱∈Θ is known to be f̂_Eucl(x̱+v̱) ≈ f(x̱) + ∇_θ f(θ)|_θ = x̱v̱ + 1/2v̱_θ[f](θ)|_θ=x̱v̱, where ∇_θ f(θ)|_θ = x̱ is the vector with the partial derivatives evaluated at x̱ and _θ[f](θ)|_θ=x̱ the corresponding Hessian matrix with the partial derivatives ∂^2 f(θ)/∂θ_i∂θ_j|_θ=x̱. When we take into account the Riemannian metric, then the approximation becomes f̂_Riem(x̱+v̱) ≈ f(x̱) + ∇_θ f(θ)|_θ = x̱v̱ + 1/2v̱[_θ[f](θ) - Γ_ij^k ∇_θ f(θ)_k]|_θ=x̱v̱, where Γ_ij^k are the Christoffel symbols and the Einstein summation is used. Note that even if the Hessian is different, the approximation again is a quadratic function. Now we consider the Taylor expansion on the associated tangent space at the point x̱ instead of directly on the parameter space Θ. We define the function h(v̱) = f(x̱v̱) on the tangent space centered at x̱ and we get that ĥ(u̱) ≈ h(0) + ∂_v̱f(x̱v̱)|_v̱=0u̱ + 1/2u̱[_v̱[f](x̱v̱) - Γ_ij^k ∂_v̱ f(x̱v̱)_k]|_v̱=0u̱, where we apply the chain-rule and we use the fact that ∂_v̱x̱v̱|_v̱=0= and ∂^2_v̱x̱v̱|_v̱=0=0. So, we get that ∂_v̱f(x̱v̱)|_v̱=0 = ∇_θ f(θ)|_θ=x̱ and _v̱[f](x̱v̱)|_v̱=0 = ∂^2 f(θ)/∂θ_i∂θ_j|_θ=x̱. The difference here is that the quadratic function is defined on the tangent space, and the exponential map is a non-linear mapping. Therefore, the actual approximation on the parameter space f̂_Tangent(x̱v̱) = ĥ(v̱) is not a quadratic function as before, but it adapts to the structure of the Riemannian metric. Intuitively, the closer a point is to the base point x̱ with respect to the Riemannian distance, the more similar is the associated value f̂_Tangent(x̱v̱) to f(x̱). In our problem of interest, this behavior is desirable implying that if a parameter θ' is connected through low-loss regions with a continuous curve to , then we will assign to θ high approximate posterior density. 
We can easily consider the approximation on the normal coordinates, where we know that the Christoffel symbols vanish, by using the relationship u̱ = A̱u̱̅̅ with A̱=M̱(x̱)^-1/2, and thus the Taylor approximation of the function h̅(u̱̅̅) = f(x̱A̱u̱̅̅) becomes ĥ̅̂(u̱̅̅) ≈ĥ(0) + A̱∇_θ f(θ)|_θ=x̱A̱u̱̅̅+ 1/2u̱̅̅A̱_θ[f](θ)|_θ=x̱A̱u̱̅̅. Further details can be found in related textbooks <cit.> and articles <cit.>. Linearized manifold.  Let us consider a regression problem with likelihood p(y̱|x̱,θ) = (y̱|f_θ(x̱),σ^2), where f_θ is a deep neural network, and prior p(θ)=(θ|0,λ_K). The loss of the (x̱) is then defined as θ = 1/2σ^2∑_n=1^N(y̱ - f_(x̱_n) - ∇_θ f(x̱_n)|_θ=θ - )^2 + λ ||θ||^2. The gradient and the Hessian of this loss function can be easily computed as ∇_θθ = 1/σ^2∑_n=1^N -(y̱ - f_(x̱_n) - ∇_θ f(x̱_n)|_θ=θ - )∇_θ f(x̱_n)|_θ= + 2λθ _θ[ℒ^lin](θ) =1/σ^2∑_n=1^N ∇_θ f(x̱_n)|_θ=∇_θ f(x̱_n)|_θ= + 2λ_K, which can be used to evaluate the odes system in (<ref>). A similar result can be derived for the binary cross entropy loss and the Bernoulli likelihood. § IMPLEMENTATION DETAILS In this section we present the implementation details of our work. The code will be released upon acceptance. Gradient, Hessian, and Jacobian computations.   The initial velocities used for our methods are samples from the Laplace approximation. We rely in the library <cit.> for fitting the Laplace approximation and for optimizing the hyperparameters by using the marginal log-likelihood. We also used the same library to implement all the baselines consider in this work. As we have seen from Sec. <ref>, to integrate the ode we have to compute (<ref>), which we report also here for clarity: c̈(t) = -θ(c(t))(1 + θ(c(t))θ(c(t)))^-1ċ(t)H_θ[ℒ](c(t)) ċ(t). We use <cit.> to compute both the gradient and the Hessian-vector-product. We then rely on <cit.> implementation of the explicit Runge-Kutta method of order 5(4) <cit.> to solve the initial-value problem. We use default tolerances in all our experiments. Linearized manifold.   We define our linearized manifold by considering the “linearized” function (x̱) = f_(x̱) + ∇_θ f_θ(x̱)|_θ=θ - to compute the loss, where ∇_θ f_θ(x̱)|_θ=∈^C× K is the Jacobian. To compute ∇_θ f_θ(x̱)|_θ=θ - we use to compute a jacobian-vector product. This way, we avoid having to compute and store the Jacobian, which for big networks and large dataset is infeasible to store. § EXPERIMENTS DETAILS AND ADDITIONAL RESULTS §.§ Regression example For the regression example we consider two fully connected networks, one with one hidden layer with 15 units and one with two layers with 10 units each, both with activations. We train both model using full-dataset GD, using a weight decay of 1e-2 for the larger model and 1e-3 for the smaller model for 35000 and 700000 epochs respectively. In both cases, we use a learning rate of 1e-3. In Sec. <ref>, we show some samples from the posterior distribution obtained by using vanilla LA and our riem-la approach. In Fig. <ref> we report the predictive distribution for our classic approach while in Fig. <ref> we show both the posterior and the predictive for our linearized manifold and linearized LA. We can see that our linearized approach perform similarly to linearized LA in terms of posterior samples and predictive distribution. A more interesting experiment is to consider a gap in our dataset to measure the in-between uncertainty. A good behaviour would be to give calibrated uncertainty estimates in between separated regions of observations <cit.>. 
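Complementing the implementation notes above, the linearized predictive can be evaluated without ever forming the C×K Jacobian. A minimal sketch, assuming PyTorch 2.x's torch.func; the names model, params_map (MAP parameters as a dict), params_sample (a posterior weight sample with the same structure), and x (an input batch) are illustrative.

```python
import torch
from torch.func import functional_call, jvp

def linearized_predict(model, params_map, params_sample, x):
    """f_lin(x) = f_MAP(x) + J_theta f(x)|_MAP (theta_sample - theta_MAP),
    evaluated as a single Jacobian-vector product (the Jacobian is never stored)."""
    def f(p):
        return functional_call(model, p, (x,))
    tangent = {k: params_sample[k] - params_map[k] for k in params_map}
    out_map, jvp_out = jvp(f, (params_map,), (tangent,))
    return out_map + jvp_out
```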
In our regression example, we consider the points between 1.5 and 3 as test set, and report posterior samples and predictive distribution for both our methods, vanilla and linearized, and LA. From Fig. <ref>, we can notice that our riem-la is able to generate uncertainty estimates that are reliable both in the small and the overparametrized model. It gets more interesting when we consider our linearized manifold approach as it can be seen in Fig. <ref>. While for a small model the predictive distribution we get is comparable to linearized LA, when we consider the overparametrized model, it tends to overfit to a solution that it is better from the perspective of the linearized loss but very different from the MAP. Therefore, this results in uncertainty estimates that are not able to capture the true data set in this case. This behaviour is related to our discussion in Sec. <ref>. By using a subset of the training set to solve the odes system we alleviate this behavior by giving more reliable uncertainty estimates. [subfigure]labelformat=empty §.§ Illustrative 2D classification example We consider the dataset as an illustrative example to study the confidence of our proposed method against Laplace and linearized Laplace. We train a 2-layer fully connected neural net with 16 hidden units per layer and activation using SGD for 2500 epochs. We use a learning rate of 1e-3 and weight-decaay of 1e-2. Although it is a binary classification problem, we use a network with two outputs and use the cross-entropy loss to train the model because the library does not support binary cross-entropy at the moment. For all methods, we use 100 MC samples for the predictive distribution. As we have mentioned in Sec. <ref>, our linearized manifold when we solve the odes system by using the entire training set tends to be overconfident compared to the our classic non-linearized approach. This can be easily seen form Fig. <ref> where we can compare the last two rows and see that solving the odes system using batches is beneficial in terms of uncertainty quantification. We also plot the confidence of all different methods when we do not optimize the prior precision. We can see that our proposed approaches are robust to it, while linearized LA highly depends on it. §.§ An additional illustrative example We consider the dataset with five different classes as an additional illustrative classification example. Given the way these classes clusters, it is interesting to see if some of the approaches are able to be confident only in regions where there is data. We generate a dataset considering 200 examples per classes and we use 350 example as training set and the remaining as test set. We consider a two layer fully-connected network with 20 hidden units per layer and activation. As before, we train it using SGD with a learning rate of 1e-3 and a weight decay of 1e-2 for 5000 epochs. From Fig. <ref>, we can see not only vanilla LA but also linearized LA is failing in being confident also in-data region when we use 50 posterior samples. Our proposed approaches instead give meaningful uncertainty estimates, but we can see that by optimizing the prior precision, our methods get slightly more confident far from the data. §.§ Additional results in the UCI classification tasks In the main paper we present results on five different UCI classification tasks in terms of test negative log-likelihood. Here, we report also results in terms of test accuracy, Brier score, and expected calibration error. 
Results with prior precision optimized are shown in Table <ref>, while for prior precision not optimized we refer to Table <ref>. In both cases, we consider a neural network with a single fully connected layer consisting of 50 hidden units and activation. We train it using Adam optimizer <cit.> for 10000 epochs using a learning rate of 1e-3 and a weight decay of 1e-2. From the two tables, we can see that our riem-la is better than all the other methods both in terms of NLL, brier, and ECE. It is also surprising that our method, both the classic and the linearized approach, are able to improve the accuracy of the MAP estimate in most datasets. §.§ Complete results on MNIST and FMNIST For these two image classification tasks we consider a small convolutional neural network. Our network consists of the following layers: an initial convolutional layer with 4 channels and 5× 5 filter followed by activation and an average pooling layer. The we have another convolutional layer still with 4 channels and 5× 5 kernel also followed by activation and an average pooling layer. Then we have three fully connected layer with 16, 10, and 10 hidden units respectively and activation. We train both models using SGD with a learning rate of 1e-3 and weight decay of 5e-4 for 100 epoch. The learning rate is annealed using the cosine decay method <cit.>. In Table <ref> and Table <ref>, we can see that also when we do not optimize the prior precision our riem-la is mostly performing better than all the alternatives. In particular, for the CNN trained on MNIST and no prior precision optimization, we have that the MAP is also performing well in terms of NLL and Brier score. On FashionMNIST, instead, if we do not optimize the prior we have that both our approaches are better than all the other methods. §.§ Out-of-distribution results The benefit of having meaningful and robust uncertainty estimation is that our model would then be confident in-data region while being uncertain in region without data. Therefore, if this is happening, then we would expect the model to be able to detect out-of-distribution (OOD) examples more successfully. We consider classic OOD images detection tasks, where we train a model on MNIST and tested on FashionMNIST, EMNIST, and KMNIST and one trained on FMNIST and tested on the remaining datasets. It's well known that linearized LA is one of the strongest method for OOD detection <cit.>. We consider the same MAP estimates we used in the previous section and present OOD performance in Table <ref> and Table <ref>. We can see that our proposed method is consistently working better than linearized and classic LA in all the considered setting apart for models trained on FMNIST where we do not optimize the prior precision to compute our initial velocities. In that setting, however, our linearized approach using batches is getting similar performance than linearized LA.
entry_id: http://arxiv.org/abs/2306.01968v2
published: 2023-06-03 00:28:50
title: End-of-Horizon Load Balancing Problems: Algorithms and Insights
authors: Daniel Freund, Chamsi Hssaine, Jiayu Kamessi Zhao
primary_category: math.OC
categories: math.OC
Effective load balancing is at the heart of many applications in operations. Frequently tackled via the balls-into-bins paradigm, seminal results have shown that a limited amount of (costly) flexibility goes a long way toward maintaining (approximately) balanced loads throughout the decision-making horizon. This paper is motivated by the fact that balance across time is too stringent a requirement for some applications; rather, the only desideratum is approximate balance at the end of the horizon. Thus motivated, in this work we design “limited-flexibility” algorithms for three instantiations of the end-of-horizon balance problem: the canonical balls-into-bins problem <cit.>, opaque selling strategies for inventory management, and parcel delivery for e-commerce fulfillment. For the balls-into-bins model, we show that a simple policy which begins exerting flexibility toward the end of the time horizon (i.e., when Θ(√(T log T)) periods remain) suffices to achieve an approximately balanced load (i.e., a maximum load within 𝒪(1) of the average load). Moreover, with just a small amount of adaptivity, a threshold policy achieves the same result while only exerting flexibility in 𝒪(√(T)) periods, thus matching a natural lower bound. We then adapt these algorithms to develop order-wise optimal policies for the opaque selling problem. Finally, we show via a data-driven case study on the 2021 Amazon Last Mile Routing Research Challenge that the adaptive policy designed for the simpler balls-into-bins model can be carefully modified to (i) achieve approximate balance at the end of the horizon and (ii) yield significant cost savings relative to policies which either never exert flexibility, or exert flexibility aggressively enough to always maintain balanced loads. The unifying motivation behind our algorithms for these three vastly different applications is the observation that exerting flexibility at the beginning of the horizon is likely wasted when system balance is only evaluated at the end. § INTRODUCTION A key question in operations management is how to effectively address supply-demand imbalances. When a decision-maker has access to different supply sources, and these imbalances are only due to stochastic fluctuations, this question is often tackled through the lens of load balancing. The canonical model of load balancing is the balls-into-bins paradigm, in which balls (demand) are sequentially placed into bins (supply) according to some (potentially random) allocation scheme. These models are used to understand how a decision-maker can maintain an (approximately) balanced load across bins over time, i.e., design policies that keep the number of balls in each bin approximately equal. This is a natural goal in many applications, including queuing settings where average delay is a metric of interest. In many other applications, however, maintaining a balanced load over all time may be an unnecessarily stringent requirement. Instead, there may only be specific points in time at which balance is required, or even a single such point. This point in time may be a priori unknown, it may depend on the way in which the process unfolds, and it may depend on the decision-maker's previous actions. Examples where this is the case include the following: GPU management in cloud computing.
Consider an incoming stream of data that must be instantaneously allocated to a set of servers. Since GPU time on servers is expensive, these servers only start processing the files once they have all been received <cit.>. In order to minimize the makespan of the processing time, it is beneficial to have the workload be as balanced as possible across the servers once all files have been received; however, at intermediate points before the stream ends, balance across servers is not a metric of interest. Inventory management. Consider a retailer that sells a large number of a few different products, and jointly restocks them all once the stock of any one product is depleted. For inventory costs to be minimized, the retailer wants all items to be close to depletion at the time when restocking occurs (see <ref> for more details); however, imbalances in remaining inventory do not affect the retailer's supply costs at other points in time <cit.>. Parcel delivery. Consider a delivery fulfillment center in which parcels arrive in an online fashion over the course of a day. When a parcel arrives, it is allocated to one of several different trucks based on its destination. To avoid some trucks being overutilized, the goal is to have different trucks with approximately equal loads. However, for the truck's utilization only the final load matters; at earlier times during the day the balance of parcels is immaterial for the later truck utilization. Common to these three examples is that, rather than aiming to keep the system balanced throughout time, a decision-maker (DM) only requires balance at a single point in time, a goal that is potentially much easier to achieve. In this work we aim to investigate to what extent this can reduce the cost of the DM's operations. Loosely speaking, we consider settings of the following form: in each period an arrival occurs, and the DM needs to decide whether or not to exert flexibility. If she does, there is a constant probability that she gets to decide in which bin, out of a subset of randomly sampled bins, to place the arrival. Otherwise, the arrival is placed in a bin chosen uniformly at random. <cit.> showed that exerting flexibility in every period allows the DM to keep the load approximately balanced — i.e., the deviation between the maximum and average loads across bins is upper bounded by a constant independent of the time horizon — at all times with high probability. Against this backdrop, this work considers the following question: Can a DM achieve the less ambitious goal of balance at the end of the time horizon while exerting significantly less flexibility? §.§ Our contributions We study three instantiations of this problem with varying levels of complexity. The first — and most tractable — of these is a “limited-flexibility” variant of the canonical balls-into-bins model studied in the applied probability and theoretical computer science communities. We leverage our analysis of this vanilla framework when we subsequently consider the problem of designing (near)-optimal opaque selling strategies for inventory management. Finally, we adapt these policies to the significantly more complex problem of parcel delivery in e-commerce fulfillment, demonstrating their practical use via a data-driven case study. Vanilla balls-into-bins. For the standard balls-into-bins problem, we design two policies that achieve approximate balance at the end of a time horizon of length T with limited flexibility. 
The first is a non-adaptive policy — which we term the static policy — that starts exerting flexibility when Θ(√(Tlog T)) periods remain in the time horizon. We show that such a policy can approximately achieve balance at the end of the time horizon while exerting flexibility only Θ(√(Tlog T)) times, whereas any policy that achieves approximate balance requires exerting flexibility Ω(√(T)) times in expectation. We further show that no policy that starts exerting flexibility at a deterministic point in time can close this gap to Θ(√(T)). Motivated by this fact, we design a dynamic policy to match this lower bound. This policy exerts flexibility whenever the imbalance of the system exceeds a carefully designed, time-varying threshold. The analysis of this first problem is based on the following main idea: over the course of the entire time horizon, if the decision-maker never exerted flexibility, the imbalance between bins would scale as Θ(√(T)). If each time the decision-maker exerted flexibility that imbalance was reduced by 1, then she would only need to do so 𝒪(√(T)) times in order to achieve approximate balance. Though exerting flexibility always reduces the instantaneous imbalance among the bins, it does not always reduce the imbalance as measured in hindsight. For instance, suppose the decision-maker exerted flexibility early on during the time horizon to put a ball into bin i that would have landed in bin j in the absence of flexibility; if it turned out in hindsight that more balls landed in bin i than in bin j, then exerting flexibility in this early period (keeping all later decisions fixed) would actually increase, not decrease, the imbalance over the entire horizon. However, if she only starts exerting flexibility towards the end of the horizon, when the imbalance is of size Θ(√(T)) and only o(T) periods remain, then it is unlikely that the imbalance between i and j would be overcome by the natural variation of the stochastic process, and consequently exerting flexibility is likely to reduce the imbalance as measured over the entire horizon. On a technical level, our analysis requires us to overcome a number of hurdles in analyzing non-trivial stochastic processes. The main difficulty stems from the fact that the system is already imbalanced when the decision-maker first exerts flexibility; as a result, a good policy must ensure the flexibility exerted in the remaining rounds suffices to close this existing gap. This difficulty is compounded in the analysis of our dynamic policy for which flexing commences at a random time. Thus, designing such an adaptive policy requires us to construct the threshold carefully enough that the now-random number of rounds remaining suffices to control the accumulated imbalance. Opaque Selling.We subsequently turn our attention to the problem of opaque selling in inventory management. In this setting, to ensure that inventory isn't replenished too frequently, a retailer can exert flexibility by offering a discounted opaque option to customers. Under this practice the customer chooses a subset of items, and the retailer decides which item from this subset is sold to the customer. To minimize total inventory and discount costs, the retailer must trade off between the benefits of increasing (expected) time-to-replenishment (i.e., minimizing the imbalance of inventory levels across items), and the cost of offering the discounted option to achieve this outcome. 
The additional level of complexity in this setting is that, in contrast to the known and deterministic time horizon in the balls-into-bins problem, the time horizon corresponds to the (random) first time a product is depleted, which depends on both the random realization of arrivals and the DM's actions. The introduction of this moving target requires the DM to exert flexibility more frequently than the dynamic policy designed for balls-into-bins; to address this we design a semi-dynamic policy which similarly maintains a time-varying threshold on the imbalance of the inventory level, and offers the opaque option starting from the first time the imbalance condition is triggered, all the way to the time of depletion. We show that in a large-inventory scaling the per-period loss of the semi-dynamic policy converges, for a range of parameter regimes, at a linear rate to a loose lower bound in which the DM's inventory is depleted evenly without the DM ever needing to exert flexibility. For parameters where this is not the case, its loss relative to that lower bound is of the same/better order as that of natural benchmark “never-flex” and “always-flex” policies (which, as the names indicate, respectively never exert the flexible option, or do so in every time period). We complement our theoretical results with synthetic experiments that demonstrate the robustness of our insights with respect to (i) different input parameters and (ii) varying threshold choices for our algorithms. Parcel delivery in e-commerce fulfillment. We finally consider the problem of parcel delivery in e-commerce fulfillment. In the setting we consider, a warehouse receives a sequence of packages throughout the day, and must assign each package to one of N trucks in online fashion. The goal is to find an assignment of packages to trucks that minimizes expected routing costs and overtime pay to delivery drivers. As is common in practice, packages are ex-ante associated with default trucks based on their geographic coordinates <cit.>. However, given the fluctuation of volumes in each region across days, it may be desirable to exert flexibility by assigning packages to non-default trucks. This flexibility, however, comes at a cost, as a non-default assignment may cause costly detours in hindsight. Though this problem bears conceptual similarities to the balls-into-bins and inventory management models we consider for our analytical results, the routing component adds a significant level of complexity. Indeed, the offline setting generalizes the Travelling Salesman Problem, given the overtime pay consideration. In the online setting, a good flexing policy must also contend with the fact that “mistakes” due to flexing may be much costlier in hindsight than in the two previously studied settings. Still, we show via a case study on the 2021 Amazon Last Mile Routing Research Challenge Dataset <cit.> that a careful adaptation of the balls-into-bins dynamic policy yields on the order of 5% cost savings relative to the default-only, “no-flex” policy. In summary, our work explores load-balancing applications in which balance is evaluated not across periods, but only at a specific point in time. 
For these scenarios, common in both analogue and digital applications, we design simple heuristics that combine three attractive properties: (i) provable approximate balance at the end of the horizon, (ii) a significantly lower need to exert flexibility when compared to standard approaches that guarantee balance throughout, and (iii) adaptability to vastly different contexts. The unifying motivation for these heuristics is the observation that exerting flexibility at the beginning of a horizon is likely wasted/unnecessary when system balance is only evaluated at the end. §.§ Related work Our work relates to three traditional streams of literature: work on the balls-into-bins model, revenue management literature related to opaque selling (and more generally, the value of demand flexibility in service systems), and vehicle routing as it relates to parcel delivery. We survey the most closely related papers for each of these lines of work below. The balls-into-bins model. As noted above, the balls-into-bins model has a long history in the theoretical computer science literature, with a number of variants proposed and used to model a variety of computing applications. We refer the reader to <cit.> for an exhaustive survey, and only highlight two closely related results: <cit.> consider the basic model in which balls are sequentially (and randomly) thrown into n bins, and derive sharp upper and lower high-probability bounds on the maximum number of balls in any bin after m throws, finding that the gap between the maximally loaded bin and the average load is Θ(√(mlog n/n)). Later, <cit.> showed that, if the ball goes into a random bin with probability q, and the lesser-loaded of two random bins with probability 1-q, this expected gap is a constant independent of m (though dependent on the number of bins n). We leverage these latter results in the analyses of the policies we consider for both the balls-into-bins and the opaque selling problem. On the power of flexibility in opaque selling. The practice of offering opaque, or flexible, products to customers has long been studied in operations management. In particular, it has been found that opaque selling has two potential benefits, from a revenue perspective: (i) it may increase the overall demand for products, and (ii) it may enable better capacity utilization when there is a mismatch between capacity and demand <cit.>. In this regard, there has been growing attention regarding how one can leverage opaque selling to price discriminate among customers who are differentiated in their willingness to pay for products <cit.>. These papers all focus on the retailer's pricing decisions, rather than the inventory management problem which we consider here. More recently the literature has formalized the inventory cost savings that can be realized due to the flexibility of customers who pick opaque options. <cit.> consider a simple model in which a retailer sells two similar products over a finite selling period, and quantify the potential inventory pooling effect of opaque selling for this stylized model. <cit.> similarly consider a two-product model with replenishments, and analytically show that selling relatively few opaque products to balance inventory can have substantial cost advantages. <cit.>, to which our work is most closely related, generalize this latter model to N products and makes, to the best of our knowledge, the first connection to the seminal balls-into-bins model. 
Our work relates to this latter paper in that we show that one need not exercise the opaque option with every flexible customer to realize the full benefits of opaque selling: strategically timed end-of-season opaque promotions suffice. General flexible processes. The idea of inventory cost savings from opaque selling is closely related to other recent studies that consider how demand-side flexibility can improve supply costs/utilization. <cit.> and <cit.> respectively consider (time-)flexible demand in scheduled service systems and ride-hailing, and demonstrate how flexibility improves utilization for these. Relatedly, <cit.> show the value of demand flexibility in a resource allocation setting in which both time-flexible and time-inflexible customers seek a service with periodic replenishments. <cit.> consider a flexible variant of the classical network revenue management problem, in which a service provider gets to choose which combination of resources is used to serve each customer. Contrary to our setting, however, the act of exerting flexibility does not come at an extra cost. Finally, closely related to our case study, <cit.> consider a model in which an online retailer can fulfill customer demand in two ways: either from a nearby, “local” distribution center, or from a distribution center that is further away, and poses the risk of customer abandonment due to longer delivery times. Whereas they assume a constant cost of assignment to each distribution center, we are interested in the micro-level truck assignment and routing policies induced by the flexible policy, which adds significant complexity to the problem. We note that characterizing the power of flexibility has a long history in both the theoretical computer science and the operations literature: the seminal works of <cit.> and <cit.> (load balancing), <cit.> and <cit.> (resource pooling), as well as <cit.> (manufacturing) are perhaps the most notable examples of these respective streams of work. All of these demonstrate that small amounts of flexibility suffice to realize most of the benefits of full flexibility. For an overview of more recent results on flexibility in the operations literature, <cit.> provides an excellent survey. Vehicle routing. The dataset we use for our case study originates from the 2021 Amazon Last Mile Routing Research Challenge <cit.>, whose aim it was to encourage data-driven and learning-based solution approaches that mimic existing high-quality routes operated by experienced drivers, after the assignment of packages to trucks. In contrast, we are interested in the problem that precedes this (i.e., the assignment itself), which renders solutions proposed for the competition (e.g., <cit.>), as well as case studies on single-vehicle routing heuristics that rely on the dataset <cit.> tangent to our problem. The problem we consider for this setting is closely related to the Vehicle Routing Problem (VRP) <cit.>, and more specifically the online multiple-vehicle routing problem (see <cit.> for an excellent survey). Though we also consider a variant of online VRP, a key difference of our model is that we also account for the cost of overtime wages (in addition to travel time), thus placing a premium on balanced loads. This is in a similar spirit to prior work that aims for an even partition of workload <cit.>. These works, however, focus on optimal partitioning of the service territory into sub-regions and require that each vehicle only be responsible for demand occurring in its own sub-region. 
In contrast, we are interested in the value of (infrequently) violating this partitioning. More importantly, we highlight that the goal of our case study is not to propose new algorithms for the online vehicle routing problem, or its applications in parcel delivery. Instead, we aim to leverage our load balancing insights to show that reasonable heuristics that exert flexibility sparingly can yield significant cost savings.

§ BASIC SETUP

In this section we present the classical balls-into-bins model <cit.>, which also forms the backbone of the applications we study. The vanilla balls-into-bins model evolves over a discrete, finite time horizon of length T in which balls are sequentially allocated into N ≥ 2 bins. Each ball is a flex ball with probability q ∈ (0,1]. If a ball is a flex ball, the decision-maker (DM) may exert flexibility; if she exercises this flexibility, she observes a set of size r — the flex set F_B(t) — drawn uniformly at random from {1,2,…,N}, and chooses the bin in F_B(t) to which the ball is allocated. If a ball is not a flex ball, we write F_B(t) = ϕ. Such a ball, or one for which the flexibility is not exerted, is placed into a bin drawn uniformly at random, denoted by P_B(t). We let f_B(t) be the indicator variable that denotes whether the ball is flexible, with f_B(t)=1 if it is, and f_B(t)=0 otherwise. Note that f_B(t) ∼ Bern(q). The type χ_B(t)=(f_B(t), P_B(t), F_B(t)) of a ball at time t ∈{1,2,…,T} is characterized by (i) whether or not it is a flex ball, (ii) the bin it would go into without flexibility, and (iii) the ball's flex set. We further define the history of balls H_t to be the types of all balls before time t, i.e., H_t = (χ_B(1),…,χ_B(t-1)), and let ℋ_t be the set of all possible histories at time t. The DM's policy π is characterized by a tuple consisting of both the decision to exercise the flex throw, denoted by ω^π(t) (with ω^π(t) = 1 if exercised, and 0 otherwise), and the bin into which to throw the ball, denoted by 𝒜^π(t). With x_i^π(t) denoting the number of balls in bin i at the beginning of period t, the second decision is assumed to be, unless otherwise specified, 𝒜^π(t) = min_j∈ F_B(t) x_j^π(t) if ω^π(t) = 1, and 𝒜^π(t) = P_B(t) if ω^π(t) = 0. Here, we define the min with a lexicographic tie-breaking rule that returns the smallest index i among all bins with the smallest number of balls. The DM's goal is to ensure that the load across bins (i.e., the number of balls in each bin) is approximately balanced at the end of the time horizon. To characterize the degree of imbalance of the state of the system at time t, we define the gap of the system under π as the difference between the maximum load across all bins and the average load. Note that, given t ∈ℕ^+, the average load is given by t/N, since exactly one ball is thrown in each period. Then, for policy π, we denote: G^π(t) = max_i ∈ [N] x_i^π(t) - t/N ∀ t ∈ℕ^+. The DM's goal is to design a policy π such that G^π(T) ∈𝒪(1), where the Big-O notation is with respect to the time horizon T. If this is satisfied under π, we say that the system is approximately balanced. We further let M^π denote the number of times that the decision-maker gets to choose the bin that a flex ball goes into. Formally, M^π = ∑_t=1^T f_B(t)ω^π(t). Benchmarks. In order to contextualize the performance of our policies, we present two simple policies that have previously been analyzed in the literature. The no-flex policy, denoted by superscript nf, sets ω^nf(t) = 0 ∀ t. By construction, the no-flex policy yields M^nf = 0, but a gap that grows with the horizon, 𝔼[G^nf(T)] ∈Θ(√(T)) <cit.>. On the other hand, the always-flex policy, denoted by superscript a, sets ω^a(t) = 1 ∀ t. This leads to 𝔼[M^a]=Tq with a balanced load at T (i.e., 𝔼[G^a(T)] ∈𝒪(1), by <cit.>).
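To make these dynamics concrete, the following minimal Python sketch simulates the process and reports the gap G^π(T) and the number of exerted flexes M^π for the two benchmark policies. It is our own illustration rather than code from the paper; the parameter values and the helper names are chosen purely for demonstration.

```python
import random

def simulate(T, N, q, r, policy):
    """Throw T balls into N bins; `policy(t, loads, flex_set)` returns the
    chosen bin when the DM exerts flexibility, or None to decline."""
    loads = [0] * N
    flexes = 0
    for t in range(1, T + 1):
        default_bin = random.randrange(N)           # P_B(t)
        if random.random() < q:                     # flex ball, f_B(t) = 1
            flex_set = random.sample(range(N), r)   # F_B(t)
            choice = policy(t, loads, flex_set)
            if choice is not None:                  # flexibility exerted
                loads[choice] += 1
                flexes += 1
                continue
        loads[default_bin] += 1                     # uniform random placement
    gap = max(loads) - T / N                        # G^pi(T)
    return gap, flexes

no_flex     = lambda t, loads, fs: None
always_flex = lambda t, loads, fs: min(fs, key=lambda j: loads[j])

T, N, q, r = 10_000, 5, 0.1, 2
print(simulate(T, N, q, r, no_flex))       # gap grows like sqrt(T), zero flexes
print(simulate(T, N, q, r, always_flex))   # O(1) gap, roughly q*T flexes
```

Rerunning the sketch with larger T makes the Θ(√(T)) versus 𝒪(1) contrast between the two benchmarks apparent.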
We assume without loss of generality that our policies operate with r=2, i.e., we have |F_B(t)|=2 ∀ t. Our algorithms can be directly adapted to arbitrary r as follows: when |F_B(t)|>2, choose a subset of size 2 of F_B(t) uniformly at random, and flex only within this subset. Hence, we abuse notation by writing our policies for arbitrary flex sets, but analyze them for r=2.

§ ANALYTICAL RESULTS

In this section, we propose and analyze two policies that achieve two desiderata for the balls-into-bins problem: (i) keeping the load approximately balanced at the end of the time horizon, and (ii) doing this with as few flexes as possible (e.g., o(T) flexes). We then show how these algorithms can be leveraged within the context of inventory management. Before presenting our two policies, we first turn to the question of how many flexes are required to achieve an approximately balanced system. <ref> establishes a lower bound. Consider any policy π such that 𝔼[G^π(T)] ∈𝒪(1). Then, 𝔼[M^π] ∈Ω(√(T)). This lower bound is quite intuitive: every time the DM exerts flexibility, the gap at time T decreases by at most 1. Since the expected gap when balls are randomly thrown into the bins is well-known to scale as Θ(√(T)), closing this gap to 𝒪(1) would require the decision-maker to exert flexibility at least Θ(√(T)) times. We defer a full proof to Appendix <ref>. We next design two policies that strive to achieve this lower bound. Both leverage the fact that, in order to ensure the system is balanced at the end of the time horizon, the DM need not aggressively manage the system state a constant fraction q of the time; it suffices to do so toward the end of the horizon. In particular, without any flexing, the load in each bin at the end of the time horizon is, with high probability, within Θ(√(T)) of the expected load T/N. Thus, it should suffice to begin exercising the flex option with 𝒪(√(T)) periods remaining. We first describe a non-adaptive policy that exerts flexibility Θ(√(Tlog T)) times, before transforming it into an adaptive one that matches the lower bound by exerting flexibility, in expectation, Θ(√(T)) times. The first policy we consider, referred to as the static policy (denoted by superscript s), is non-adaptive, and exerts flexibility if and only if Θ(√(Tlog T)) periods remain. Specifically, it fixes a time T_s = T-c_s√(Tlog T), where c_s = 2√(6)N(N-1)/q. It begins actively load balancing, placing flex balls in the minimally loaded bin within the flex set for all t ≥ T_s, but not before (see <ref> in Appendix <ref>). With G^s(t) denoting the gap in period t for the static policy, we now establish that the intuition underlying the design of this naive policy is correct: exerting flexibility Θ(√(Tlog T)) times suffices to achieve a balanced load by the end of the horizon (proof in Appendix <ref>). For the static policy defined in <ref>, 𝔼[G^s(T)] ∈𝒪(1). We highlight that this result holds for arbitrary r ∈{2,…,N}.
Thus, a slight subtlety in establishing this result is that, for r < N, balls are not always allocated to the minimally loaded bin; exerting a flex may then in theory increase the gap of the system. However, we show that starting to flex Θ(√(Tlog T)) periods before the end of the horizon precludes these "errors" from occurring too frequently in expectation, and we are nonetheless able to achieve a constant gap. Our static policy does not exactly meet the lower bound from <ref> due to the additional 𝒪(√(log T)) factor. This is not an artifact of our analysis, nor is it due to our definition of T_s. Instead, it is a general fact about non-adaptive policies: no non-adaptive policy can achieve an 𝒪(1) gap at T while exerting flexibility 𝒪(√(T)) times (see <ref> in Appendix <ref>). For a policy π, at most one of the two holds: * M^π≤ a√(T) almost surely, for some a > 0, or * 𝔼[G^π(T)] ∈ o(√(T)). The proof of <ref> relates to the observation from <ref> that the gap when balls are randomly thrown into the bins scales as Θ(√(T)). Specifically, for any a > 0, with constant probability we find a gap of at least (a+1)√(T) when no flexes are used. Thus, even with full knowledge of where the random balls land, a policy π that flexes no more than a√(T) times would not have enough flexes to balance the bins. We defer the proof of the result to Appendix <ref>.

§.§ The Dynamic Policy

The static policy proposed above ignores the fact that not all sample paths are created equal — while the loads under certain sample paths require a larger number of flexes to achieve balance, on others a smaller number suffices. Thus, although <ref> implies that no static policy can achieve an approximately balanced load with 𝒪(√(T)) flexes, it may be that an adaptive policy can. We thus consider an adaptive policy, referred to as the dynamic policy, which flexes only when the gap of the system exceeds (a constant factor of) the expected remaining number of flexes. Periods in which this condition is satisfied can be viewed as a "point of no return"; since exerting a flex in any given period reduces the gap of the system by at most one, if the current gap exceeds the remaining number of flex balls by a significant amount, then there is no hope of obtaining a balanced load at the end of the horizon. We provide a formal description of the policy in <ref>, with superscript d referring to quantities induced by the policy. The following two theorems establish that such a "point of no return"-type policy meets both desiderata: a constant gap at time T with 𝒪(√(T)) balls flexibly allocated in expectation. For the dynamic policy defined in <ref>, 𝔼[G^d(T)] ∈𝒪(1). For any constant a_d > 0, let T^⋆ := inf{t: G^d(t) ≥ a_d(T-t)q/N}. Then, 𝔼[T-T^⋆] ∈𝒪(√(T)). Thus, the dynamic policy achieves 𝔼[M^d] ∈𝒪(√(T)).
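The algorithm environments referenced above are not reproduced in this text, so the sketch below gives our own illustrative rendering of the static and dynamic rules for r = 2, reusing the simulate harness from the earlier sketch. The constants c_s and a_d are treated as tunable hyperparameters here (the values 10 and 0.7 mirror the choices used later in the numerical experiments); nothing in the block should be read as the paper's exact pseudocode.

```python
import math

def make_static(T, c_s=10.0):
    """Flex every flex ball from period T_s = T - c_s*sqrt(T log T) onward."""
    T_s = T - c_s * math.sqrt(T * math.log(T))
    def static(t, loads, flex_set):
        if t < T_s:
            return None                            # decline early flexes
        return min(flex_set, key=lambda j: loads[j])
    return static

def make_dynamic(T, N, q, a_d=0.7):
    """Flex only once the current gap exceeds a_d * (T - t) * q / N."""
    def dynamic(t, loads, flex_set):
        gap = max(loads) - (t - 1) / N             # G^d(t) before ball t lands
        if gap < a_d * (T - t) * q / N:
            return None                            # "point of no return" not reached
        return min(flex_set, key=lambda j: loads[j])
    return dynamic

T, N, q, r = 10_000, 5, 0.1, 2
print(simulate(T, N, q, r, make_static(T)))        # O(1) gap, ~sqrt(T log T) flexes
print(simulate(T, N, q, r, make_dynamic(T, N, q))) # O(1) gap, ~sqrt(T) flexes
```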
In contrast to the static policy, which begins flexing aggressively within Θ(√(T log T)) periods of the end of the time horizon, the key challenge in analyzing the dynamic policy is that it exerts flexibility only when absolutely necessary, i.e., at the point of no return described above. Indeed, it is not immediately clear that such parsimony suffices to recoup the imbalance accumulated before the first time the threshold was satisfied. We overcome this obstacle by introducing an auxiliary policy, the semi-dynamic policy, which begins flexing at the same time the dynamic policy first starts flexing (let T^⋆ denote this random time), and continues flexing until the end of the horizon. Thus, the semi-dynamic policy can be thought of as a middle ground between the static and dynamic policies. Note that, despite the semi-dynamic policy's similarity to the static policy, one cannot directly leverage the previously derived bounds on the induced gap, since the time at which it begins exerting flexibility is random and a priori unknown. We however leverage the construction of the adaptive threshold to show that the gap accumulated by time T^⋆ can be reduced to a constant in expectation by exerting flexibility between T^⋆ and T; in effect, the policy becomes progressively more aggressive as the end of the horizon approaches. It then remains to show that T^⋆ does not occur too early in expectation. We defer the formal proofs of the theorems to Appendix <ref> and <ref>, respectively, and provide proof outlines below.

We first provide some high-level intuition for <ref>, deferring its proof to Appendix <ref>. Consider the first time the flexing condition is triggered, denoted by T^⋆. Since the dynamic policy makes the same decisions as the no-flex policy before T^⋆, we have G^d(T^⋆) ∈𝒪(√(T^⋆)) with high probability <cit.>. Solving the flexing condition for T^⋆, we obtain an approximate high-probability upper bound of T^⋆ = T-𝒪(√(T)). Since the number of flexes exerted by the policy is upper bounded by T-T^⋆, this yields a high-probability upper bound on the expected number of flexes. Translating this intuition into a formal proof, however, presents additional challenges, including the fact that large-deviation bounds on the binomial distribution are too loose to yield the desired result, and thus require tighter bounds on T^⋆.

Proof sketch of <ref>. The proof upper bounds the gap at time T by conditioning on a particular pair of bins i and j to respectively be the most- and least-filled bins at that time.[Taking a union bound over all i and j incurs an increase of the constant gap that is independent of T.] It then defines T^⋆ as the start of the last consecutive sequence of periods in which the policy always exerts flexibility, and considers two events: either bins i and j never have the same load in the periods T^⋆,…,T, or they do. Consider first the event that bins i and j have the same load in some period t ∈{T^⋆,…,T}, and let τ denote the last period in which this occurs. We observe that T-τ is an upper bound on the gap at time T. We then establish that T-τ is unlikely to be large, which follows from the following three facts: (i) bins i and j have the same load in period τ, (ii) the policy always exerts flexibility between τ and T, and (iii) i has a larger load than j between τ+1 and T (since i is defined to be the most-loaded bin at T). As a result, the policy biases balls away from i and toward j, thus pushing the difference in loads between the two bins toward 0. The formal analysis of this event requires us to define a fictitious state-independent policy that allows us to bound probabilities without the pitfalls of the conditional probabilities induced by the case analysis.
Consider now the event that i and j never have the same load in any period between T^⋆ and T. We similarly condition on the value of T-T^⋆, noting that (i) the definition of T^⋆ gives an upper bound on the gap at time T^⋆ (namely, G^d(T^⋆)≈ a_d(T-T^⋆)q/N) and (ii) the gap increases by at most T-T^⋆ between T^⋆ and T. As a result, it suffices to probabilistically bound T-T^⋆; similar to the above, we obtain this bound by observing that the policy biases decisions away from i and toward j. Given this, for i to maintain a larger load than j over periods T^⋆,…,T it cannot be the case that T-T^⋆ is too large. Here as well, the devil is in the details, as the formal proof requires us to circumvent dealing with the conditional probabilities induced by the case analysis.

§.§ Application: Inventory Management and Opaque Selling

We now leverage the results obtained for the balls-into-bins model to derive insights into the design of policies for the opaque selling problem. We first present the opaque selling model, similar to that of <cit.> who previously observed the analogy between the two models. Customers. A retailer sells N ≥ 2 horizontally differentiated (i.e., identical from a quality perspective), equally-priced product types to customers over an infinite horizon. In each period, a customer arrives seeking to buy her preferred item at the regular price, which is normalized to 1. In addition to the regular-priced option, the retailer may offer an opaque promotion wherein the customer may pick a subset of items of size r ≤ N — the flex set — and receive an additive discount δ∈ (0,1). The retailer then sells the item in the flex set for which the most inventory remains. Motivated by the fact that products are horizontally differentiated, we assume that a customer draws both her preferred item and the flex set uniformly at random.[One can extend our results to account for differentiated products, wherein customers choose their preferred products according to a non-uniform distribution; in this case, the initial inventory level of different products would be re-scaled based on their popularity, and decisions about flexibility are made based on re-scaled inventory levels.] Inventory dynamics. The retailer begins with S units of inventory for each item at t = 0. In each period, the retailer incurs a holding cost h ∈ℝ_> 0 per unit of inventory on-hand. Whenever the inventory level of a product drops to 0, the stock of all N products is immediately replenished to the initial inventory level S at a joint replenishment cost K ∈ℝ_> 0 <cit.>, and we refer to the amount of time between two consecutive inventory replenishments as a replenishment cycle. Retailer policy and objective. The goal of the retailer is to find a policy π — which determines whether or not to offer the opaque option to a customer in each period — that minimizes her long-run average inventory costs, composed of holding costs, replenishment costs, and discount costs. Let R^π be the random variable representing the length of a replenishment cycle under π, and D^π = ∑_t=1^R^π f_B(t)ω^π(t) be the random variable that represents the number of times the opaque option is exercised during one cycle. Similar to Equations (1) and (2) in <cit.>, by the Renewal Reward Theorem <cit.> the replenishment, holding, and discount components are, respectively, 𝒦^π = K/𝔼[R^π], ℋ^π = ((2NS+1)𝔼[R^π]-𝔼[(R^π)^2])/(2𝔼[R^π]) · h, and 𝒟^π = 𝔼[D^π]/𝔼[R^π] ·δ. We thus have the following expression for the long-run average inventory costs, as derived in <cit.>: 𝒞^π = K/𝔼[R^π] + h/2(2NS+1 - 𝔼[(R^π)^2]/𝔼[R^π]) + 𝔼[D^π]/𝔼[R^π] δ.
Given (<ref>), it follows that our objective is decreasing in 𝔼[R^π], the expected length of a replenishment cycle. Analogy to balls-into-bins and results. The analogy is as follows: bins correspond to products, and balls to customers. Specifically, one can view a customer purchasing a given product (and depleting the product's inventory by one) as a ball being allocated to a bin (and increasing the bin's load by one). With this analogy, a "good" policy trades off between exercising the opaque option often enough to keep inventory levels approximately balanced, thus ensuring long replenishment cycles, but not so often that its discount costs are too high. Despite the straightforward analogy between the two models, there exists a subtle — yet important — difference that presents an additional challenge in designing and analyzing policies for the opaque selling model. In particular, contrary to the balls-into-bins model, in the opaque selling problem not only is the end of the time horizon (i.e., the length of the replenishment cycle) unknown, but it also depends on the time at which the policy begins flexing. As a result, policies that start flexing a fixed number of periods before the horizon ends are not implementable. This renders adaptive policies, which exert flexibility based on inventory state, particularly attractive. We thus consider a variant of the dynamic policy previously proposed. To do so, we define G_I^π(t) = S - min_i ∈ [N] z_i^π(t) - t/N, ∀ t ∈ [R^π], where z_i^π(t) ∈ [1,S] denotes the remaining inventory of item i in period t under policy π, R^π is the replenishment cycle in which t finds itself, and t is re-initialized every time the retailer's inventory is replenished. Under the opaque selling policy, denoted again by superscript d, the DM starts exercising the opaque option the first time the gap (weakly) exceeds a_d(T-t)q/N, where a_d > 0 is a constant that depends only on N and T:= N(S-1)+1 is a loose upper bound on the length of any replenishment cycle. From then on, the policy sells the product with the most remaining inventory in the customer's flex set. We benchmark the cost 𝒞^d incurred by this policy against the following costs:
* 𝒞^nf, the cost incurred by the "never-flex" policy, which never exercises the opaque option and always has customers choose their preferred item;
* 𝒞^a, the cost incurred by the "always-flex" policy, which exercises the opaque option whenever the customer is open to flexing (i.e., with probability q)[This is the policy analyzed in <cit.>.];
* 𝒞^s, the cost incurred by a static policy, which begins exercising the opaque option in period T-c_s√(Tlog T) for some c_s > 0;
* 𝒞^⋆, a lower bound on the least possible long-run average cost achieved by any policy. In particular, 𝒞^⋆ = K/(N(S-1)+1) + h/2(NS+N).
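Before turning to the main result, the sketch below shows how these long-run average costs can be estimated by simulation via the renewal-reward expression above. It is an illustration under our own assumptions: the threshold constant, the inventory and cost parameters, and the never-flex comparator are placeholder values, and the trigger is our reading of the opaque selling policy just described.

```python
import random

def cycle(N, S, q, r, policy):
    """Simulate one replenishment cycle; return (cycle length, # opaque sales)."""
    inv = [S] * N
    t, opaque = 0, 0
    while min(inv) > 0:
        t += 1
        item = random.randrange(N)                  # customer's preferred product
        if random.random() < q:                     # customer open to the opaque option
            flex_set = random.sample(range(N), r)
            if policy(t, inv, flex_set):
                item = max(flex_set, key=lambda j: inv[j])
                opaque += 1
        inv[item] -= 1
    return t, opaque

def long_run_cost(N, S, q, r, K, h, delta, policy_factory, n_cycles=200):
    """Estimate C^pi from simulated cycles via the renewal-reward formula."""
    Rs, Ds = [], []
    for _ in range(n_cycles):
        R, D = cycle(N, S, q, r, policy_factory())  # fresh policy state each cycle
        Rs.append(R); Ds.append(D)
    ER = sum(Rs) / n_cycles
    ER2 = sum(R * R for R in Rs) / n_cycles
    ED = sum(Ds) / n_cycles
    return K / ER + h / 2 * (2 * N * S + 1 - ER2 / ER) + delta * ED / ER

def opaque_dynamic(N, S, q, a_d=0.7):
    """Start offering the opaque option once the inventory gap exceeds
    a_d * (T_bar - t) * q / N, and keep offering it until replenishment."""
    T_bar = N * (S - 1) + 1
    def fresh():
        triggered = [False]
        def policy(t, inv, flex_set):
            if not triggered[0]:
                gap = S - min(inv) - t / N
                triggered[0] = gap >= a_d * (T_bar - t) * q / N
            return triggered[0]
        return policy
    return fresh

N, S, q, r = 5, 200, 0.1, 2
K, h, delta = N * S / 2, 1 / (N * S), 0.5
print(long_run_cost(N, S, q, r, K, h, delta, opaque_dynamic(N, S, q)))
print(long_run_cost(N, S, q, r, K, h, delta, lambda: (lambda t, inv, fs: False)))  # never-flex
```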
Our main result for this section considers a "large-inventory" limit in which S grows large, with K ∈Θ(S) and h ∈Θ(1/S) (i.e., the per-unit replenishment and average holding cost are both Θ(1)).[This regime assumes the retailer follows a variant of the classical Economic Order Quantity (EOQ) equation, i.e., NS = √(2K/h) <cit.>.] Suppose K ∈Θ(S) and h ∈Θ(1/S). Then, the following holds: * When δ∈𝒪(1/√(S)), 𝒞^d - 𝒞^⋆∈𝒪(1/S), i.e., the dynamic policy is optimal up to a constant additive loss in each replenishment cycle. * When δ∈Ω(1/√(S)) and δ∈𝒪(1), of the four policies, the dynamic policy has the order-wise best performance relative to 𝒞^⋆. <ref> guarantees that, as long as the cost of each opaque promotion is not overwhelmingly high (i.e., as long as δ∈𝒪(1)), the dynamic policy (order-wise) performs best among the four policies. If, moreover, the promotional cost is relatively small as compared with the inventory cost (i.e., δ∈𝒪(1/√(S))), the dynamic policy incurs at most a constant loss relative to the universal lower bound in each replenishment cycle, even as S scales large. The proof of <Ref> relies on tight analyses of the expected cycle lengths of the dynamic and benchmark policies, which require bounds on the tail of the distribution of the gap of the system, rather than the expected gap, in contrast to the vanilla balls-into-bins model we analyzed above. We defer its lengthy proof to Appendix <ref>.

§.§.§ Numerical results.

We next numerically investigate the performance of our policies in the opaque selling model. In the remainder of this section, we refer to the difference between the maximum and expected replenishment cycle length (i.e., T - 𝔼[R^π]) as system balancedness. Inputs. Unless otherwise stated, our numerical results assume N=5, q=0.1 and r = 2. We instantiate the static and dynamic policies with c_s=10 and a_d = 0.7, respectively (see Appendix <ref> for experiments on robustness of our results to these hyperparameters). We simulate the long-run average performance of each tested policy over 10 instances of 10 replenishment cycles. Benchmark comparisons. Beyond the four policies (no-flex, always-flex, static and dynamic) studied above, we consider an additional benchmark policy, the flex-√(T) policy π^f, which exerts flexibility in every period with probability (T-T_s)/T (recall T_s is the period in which the static policy begins flexing), thereby approximately matching the expected number of flexes of the static policy. <Ref> illustrates the two sources of costs in the opaque selling model: short replenishment cycles and excessive discounts. <ref> shows that the always-flex, static and dynamic policies achieve an 𝒪(1) gap at T, in contrast to the no-flex and flex-√(T) policies. Moreover, <ref> shows that the static, dynamic and flex-√(T) policies exert flexibility comparably often, and much less frequently than the always-flex policy. This then highlights that it is not only the number of times flexibility is exerted that drives system balancedness, but also the timing of these flexes. We next (<Ref>) compare the total costs of these policies under different regimes of δ, K and h (see Appendix <ref> for theoretical bounds). In particular, we set K = NS/2 and h = 1/NS (values for which the EOQ formula, NS = √(2K/h), is satisfied) and benchmark the performance of our policies against 𝒞^⋆, the theoretical lower bound on any policy's total cost, under different regimes of δ. In many of the regimes considered, all of the policies are asymptotically optimal with respect to S, i.e., the per-period loss relative to the lower bound converges to 0 as S grows large.
When δ = 0 (i.e., exerting flexibility incurs no cost, <ref>), the always-flex and no-flex policies are the best- and worst-performing policies, respectively. However, the performance of the dynamic and static policies is quite close to that of the always-flex policy, with the always-flex and dynamic policies seemingly converging at a linear rate. On the other extreme, when δ∈Θ(√(S)) (i.e., discounts are very expensive, <ref>), the no-flex policy is the best-performing policy and the others cease to be asymptotically optimal. The most interesting cases arise when δ is constant (<ref>) or δ is slowly decreasing (i.e., δ∈Θ(1/√(S)), <ref>): in the former case, the dynamic and static policies perform best (converging at rate 1/√(S) to the lower bound); the no-flex and the flex-√(T) policies exhibit slightly worse performance but converge at the same rate to the lower bound, while the always-flex policy converges to a constant per-period loss. In the latter case, we again find that the dynamic policy converges at a linear rate and the static policy performs almost as well, while all of the benchmark policies converge much more slowly to the lower bound (seemingly at rate 1/√(S)). Notably, <ref> to <ref> show that for any δ∈𝒪(1), (i) the dynamic policy performs better than, or as well as, the other policies, and (ii) the static policy performs almost as well as the dynamic policy. <ref> explores a regime outside the scope of <ref>, allowing the opaque discount δ to scale with the initial inventory level S. While this regime is unlikely to hold in practice, it illustrates that as δ increases, the no-flex policy begins to emerge as the best policy, while the always-flex policy has the worst performance. In summary, our three benchmark policies (always-flex, never-flex, and flex-√(T)) each demonstrate significant weaknesses in at least two out of the four settings. This then suggests that parsimoniously (yet strategically) exerting flexibility is a more robust strategy than benchmarks that either aggressively manage the load of the system, do not manage it at all, or do so in a non-methodologically grounded way.

§ DATA-DRIVEN CASE STUDY: PARCEL DELIVERY

We next conduct a case study using data from the 2021 Amazon Last Mile Routing Research Challenge Dataset <cit.> to illustrate the applicability of our insights to parcel delivery in e-commerce fulfillment. This setting exhibits a challenge similar to the one at the crux of the basic balls-into-bins and opaque selling models. Namely, for a set of packages that arrive online over the course of a day, a retailer must efficiently assign packages to trucks. Practically speaking, this requires packages to be assigned in a way that maintains low travel times across all trucks, in addition to minimizing overtime compensation costs to delivery drivers (incurred when a route's overall completion time exceeds a given threshold). An added complexity of this setting is that the completion times depend on both the number of packages assigned to a truck and their geographic location. Nonetheless, the online parcel delivery problem exhibits key features of our load balancing problem: the retailer seeks to balance final route completion times across trucks (i.e., after all packages have been assigned) as a way of avoiding overtime costs, but has to trade off this goal against the risk that inter-truck load balancing causes detours that are costly in hindsight. Despite this conceptual similarity, the online routing aspect of the parcel delivery problem renders it significantly more complex than vanilla balls-into-bins.
In particular, in the latter setting an individual flex reduces/increases the imbalance between two bins at the end of the horizon by at most one ball. In the former setting, however, characterizing the impact of moving a package from one truck to another is more involved, given its dependence on an a priori unknown final route (which also depends on the geographic location of future packages assigned to each truck). In this section we will see that "good" assignment policies need to account for these complexities. However, by appropriately incorporating these into our flexing condition, we find that our policies for balls-into-bins directly extend, with strong empirical performance, to this more complicated setting.

§.§ Model

We consider a warehouse delivering packages in a region partitioned into N different zones (e.g., a set of contiguous zip-codes), each served by one uncapacitated truck. A sequence of T packages arrives and must be assigned (irrevocably) to one of N trucks in an online fashion. (We address the uncapacitated assumption in Appendix <ref>.) Each package t is associated by default with a truck P(t) corresponding to one of these zones.[The notion of a default mapping to an a priori determined zone is well-founded in many real-world e-commerce fulfillment operations, where drivers prefer to operate within fixed zones with which they are familiar. The offline computation of these default mappings also eases the computational burden of online routing <cit.>.] We provide further details as to how these zones are constructed in <Ref>. The goal is to minimize the total delivery costs of an assignment of packages to trucks, composed of: transportation (e.g., fuel) costs of c^r per hour of travel; and overtime compensation of delivery drivers, i.e., a cost of c^o for every hour a driver works over an overtime threshold of 8 hours. Formally, let u_i(t) and y_i^r(t) respectively denote the unloading (service) and travel times, in hours, required by truck i before the assignment of the t-th package, with the load of the truck denoted by ℓ_i(t) = u_i(t)+y_i^r(t); the load of a truck thus depends on both the service time and the transportation time of the packages assigned to it. The objective in parcel delivery is then to find an algorithm that minimizes ∑_i (c^r· y_i^r(T+1) + c^o· (ℓ_i(T+1)-8)^+), where (·)^+ = max{·,0}. Analogy to balls-into-bins model. Given (<ref>), good policies should aim to (i) minimize travel times, and (ii) keep overall route completion times below the overtime threshold. Note that, since the zone assignment was determined using packages' geographic coordinates, the default mapping should perform well in terms of the first objective. On the other hand, if demand for a particular zone comes in heavy on a given day, the associated route will incur a high unloading time and risk exceeding the overtime threshold. Then, assigning packages from this cluster to different trucks may be beneficial. Doing so does not come for free, however, since adding a package to a truck may cause a detour, and hence additional transportation costs, as we will see in <ref>. <ref> compares the two models in order to draw the analogy that will drive our algorithm design.
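As a minimal illustration of the delivery cost objective above, the sketch below evaluates a day's assignment given per-truck travel and unloading hours; the per-hour rates and the example hours are made-up values rather than estimates from the dataset.

```python
def delivery_cost(travel_hours, unload_hours, c_r=30.0, c_o=60.0, threshold=8.0):
    """Total cost of a day's assignment: per-hour travel cost plus overtime pay
    for every hour a route's completion time (travel + unloading) exceeds the
    threshold. Entry i of each list refers to truck i."""
    cost = 0.0
    for y, u in zip(travel_hours, unload_hours):
        load = y + u                                # route completion time ell_i
        cost += c_r * y + c_o * max(load - threshold, 0.0)
    return cost

# e.g., three trucks with (travel, unloading) hours
print(delivery_cost([4.0, 3.5, 5.0], [3.0, 4.8, 3.5]))
```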
Given this analogy, we restrict our attention to the class of “flexibility-exerting” policies that sequentially determine (i) when to assign an incoming package to a non-default truck, and (ii) which truck to assign it to. For simplicity we assume that q = 1, i.e., the decision to exert flexibility results in a flex being exerted; unless otherwise specified, the flex set t of an incoming package is the set of all zones such that the distance between package t and the geographic center of the zone is within 1 kilometer of the distance from t to the center of the zone associated with t. §.§ Dataset description The 2021 Amazon Last Mile Routing Research Challenge Dataset <cit.> describes a set of historical routes used by Amazon drivers between July and August 2018, in the five metropolitan areas of Seattle, Los Angeles, Austin, Chicago, and Boston. We use the dataset that was previously created for training purposes in <cit.>. This dataset is composed of 6,112 historical routes, and includes the following information about each route: its originating delivery station, the location of each stop on the route, and the time a delivery driver took to drop off each package (i.e., the unloading time, which <cit.> refer to as service time). Since all packages at the same stop have the same coordinates, for simplicity, we aggregate all packages associated with a stop into a single package whose unloading time is the sum of all component packages' unloading times. Thus, we use the terms package and stop interchangeably. Synthetic dataset construction. We consider a single delivery station in Los Angeles, DLA8, over the course of 29 days, deferring results for two other stations in the dataset to Appendix <ref>. In order to infer a default assignment of packages to trucks, we cluster all historical packages into N = 24 routes based on their geographic coordinates (see Appendix <ref> for details on our clustering method). <ref> provides an illustration of the output clustering, with each color representing a different zone. The number of trucks we use in our synthetic dataset is higher than the daily number of routes historically operated out of DLA8 (N=15.4, as seen in <ref>). This is due to the fact that our simulation samples from all package data over the 29-day period, which typically has a wider geographical span than that of any single day in the historical data and thus leads to a longer average travel time given the same number of packages per truck. (This may also be due in part to the pre-processing of the original dataset, in which geographic locations were perturbed to preserve driver and customer anonymity.) As a result, we increased the number of trucks to N = 24 to ensure that the average route completion time under the no-flex policy matches the historical route completion time. We further compare the historical and synthetic datasets in Appendix <ref>. Finally, to simulate the stream of incoming packages over a given day, we bootstrapped from the sample distribution of all delivered packages over the recorded time period (see <ref>), taking T = 2,000 to be the average number of daily packages. All reported results are averaged over 50 replications. We refer the reader to <ref> for a summary of key inputs to our case study. §.§ Results On the importance of travel time considerations. A first attempt that simplifies the complexities of the parcel delivery setting is to construct a routing-oblivious policy, which makes its flexing decision based solely on unloading times. 
Concretely, we apply the policy designed for the balls-into-bins problem (<ref>) to this setting, letting _i(t) = _i(t) ∀ i ∈ [N], t ∈ [T], a flex radius of 5 kilometers and an appropriately tuned a_d. When flexed, the package is assigned to the truck with the smallest current unloading time. For our estimates of c^r and c^o (see <ref>), the routing-oblivious policy performs quite poorly, yielding a 126% increase in total costs on average (approximately $1,697 for the unloading-only policy, as opposed to $752.4 for the no-flex policy). <ref> provides intuition for this poor performance: despite its effectiveness in balancing unloading times, the algorithm does so at a cost of significantly longer travel times (5.47 hours for the unloading-only policy versus 3.99 hours for the no-flex policy, on average). As a result, the unloading-only policy incurs much higher overtime: 80% of its routes are completed in over 8 hours with an average overtime of 0.95 hours, versus approximately 25% of no-flex routes completed in over 8 hours and an average overtime of 0.16 hours. <ref> summarizes additional metrics of interest for the two policies. A routing-aware policy. The above results highlight the importance of considering both unloading and travel times in the parcel delivery setting. The natural extension of the routing-oblivious policy, then, would be to apply <ref> with _i(t) = _i(t) + _i(t) ∀ i ∈ [N], t ∈ [T]. We approximate _i(t) in each period as follows: for a set of packages S, denote by (S) the minimum travel time associated with a truck delivering these packages. Then, after every 100 arrivals, for each truck i we compute the optimal route (S_i) for the packages currently assigned to i. In between updates, we approximate the incremental travel time associated with assigning package t to truck i as twice the distance from t to the closest package currently assigned to i.[This is an upper bound on the incremental travel distance by the triangle inequality.] We use (S_i,t) to denote this incremental travel time, given the current set of packages S_i. Letting ỹ_i^r(t) denote this approximate travel time, we apply <ref> with _i(t) = ỹ_i^r(t) + _i(t). Experiments show that this application of <ref>, which we refer to as the policy, is also ineffective in reducing costs. The average cost of the no-flex policy is $752.4, while that of the policy is $850.4, a 13% increase. As shown in <ref> and <ref>, while the policy achieves lower travel times and overtime than the unloading-only policy, it performs worse on both fronts relative to the no-flex policy. Indeed, the average overtime of the policy is 0.21 hours versus an average overtime of 0.16 hours for the no-flex policy; similarly, the travel time for the policy averages 4.33 hours of travel time as compared to 3.99 hours for the no-flex policy. At a high level, the policy's poor performance is due to the “looseness” of the flexing condition: in general, the largest load across all trucks may be much higher than the load of t, for any given t. Thus, the policy will flex frequently, causing costly detours when default trucks could have handled their respective packages in hindsight. This then highlights that, in the parcel delivery setting, policies must exert flexibility much more parsimoniously than in the balls-into-bins and inventory settings. <ref> further illustrates why the current policy incurs high costs: the decision it makes in period t is independent of the package t at hand, an obviously bad decision in hindsight. 
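The incremental travel-time approximation used between the periodic route re-solves can be sketched as follows. The planar coordinates, the travel speed, and the helper names are our own assumptions for illustration; the actual case study works with the dataset's geographic coordinates and an exact route computation every 100 arrivals.

```python
import math

def travel_hours(distance_km, speed_kmh=30.0):
    """Convert a travel distance into hours at an assumed average speed."""
    return distance_km / speed_kmh

def incremental_travel(package, assigned, speed_kmh=30.0):
    """Upper bound on the extra travel time from adding `package` to a truck:
    twice the distance to the closest stop already assigned to it (an upper
    bound by the triangle inequality). Coordinates are planar (x, y) in km."""
    if not assigned:
        return float("inf")
    px, py = package
    d_min = min(math.hypot(px - x, py - y) for (x, y) in assigned)
    return travel_hours(2.0 * d_min, speed_kmh)

# between re-solves, a truck's load is approximated as the travel time of its
# last computed route plus accumulated incremental travel and unloading times
stops = [(0.0, 0.0), (1.2, 0.4), (2.5, 1.1)]
print(incremental_travel((1.0, 1.0), stops))
```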
The next policy we present addresses this shortcoming by incorporating the impact of package t assignment on truck loads. In the parcel delivery case, flexing a package that does not strictly need to be flexed can lead to huge increase in additional travel time because the incremental travel time from package t can be high for trucks other than t. Penalizing unnecessary flexing via a patient dynamic policy. Our final policy — which we call the policy — modifies the flexing condition to allow for flexing only when we estimate that the gap between the loads of the default truck P(t) and some truck in the flex set F(t) cannot be closed in the time remaining. To make this idea concrete, suppose first that our policy had access to the entire sequence of arrivals at the beginning of time. Then, in each period t our policy would consider all possible packages that may be assigned to P(t) (either because P(t') = P(t), or because P(t) ∈ F(t'), for t' > t). Let t denote this set of packages. Now, for truck j ∈ F(t), we restrict our attention to the packages t' ∈t such that package t' is within the flexing radius of truck j. Let jt denote this set of packages. Then, in the best case for truck P(t), all packages in this set are placed in j, and the packages that a (P(t), j) flex cannot control do not further contribute to the imbalance between the two. If, even in this best-case scenario, the load of P(t) exceeds that of j, we consider P(t) to be “over-loaded” and a flex is required. We then flex into j ∈ F(t) with the minimum load. The main challenge here is that our policy does not have offline access to all arriving packages. Hence, we must construct estimates of these future loads in every period. To construct these estimates, for every pair of trucks i, j∈ [N], we use _i,j and _i,j to respectively denote estimates of the average travel and unloading times added onto a no-flex route if a package whose default truck is i is flexed into truck j. To compute _i,j, we simulate 50 replications of the no-flex policy over the entire time horizon. Under this policy, we consider the packages associated with truck i that can be flexed into j, and compute the difference between the optimal TSP route with and without each of these packages. Averaging across all such packages, we obtain _i,j. The computation for _i,j is analogous. Since _i,j and _i,j are averaged across all periods, we require an estimate of the average size of S_j(t); we estimate this to be on the order of (T-t)/, where is a tunable parameter. Finally, (t) is used to denote the unloading time of package t. The policy is described in <ref>. Ignoring the routing aspects and the heterogeneity in unloading times, this intuition is consistent with that in <ref> for balls-into-bins. There, a ball is flexed into a minimally loaded bin when the imbalance between the maximally loaded bin and the average load exceeds times the average number of flexes remaining between the maximally and minimally loaded bins. Intuitively, <ref> redefines the gap as the difference between the loads of the default and the minimally loaded bins. The proofs of <ref> extend to this modified benchmark. We find that the policy yields significant cost savings, with an overall cost reduction of 5.47% relative to the no-flex policy ($711.3 versus $752.4, on average). In <ref> we observe that though the policy incurs slightly higher travel times on average, its route completion times are more tightly concentrated around 8 hours. 
This then results in average overtime decreasing from 0.16 hours to 0.10 hours, from which our policy derives its gains. We test the sensitivity of our results in <ref>, comparing the performance of the two policies as travel costs, overtime costs, and overtime thresholds respectively vary. When travel costs are low, and overtime costs high, our policy achieves more than 7% cost savings relative to the no-flex policy. The difference between the two, however, drops significantly as the overtime threshold increases: this is due to the fact that almost all routes are completed within 9 hours for both policies. We conclude the section by noting that the policy yields cost savings relative to the no-flex policy, despite its decision rule being divorced from the actual travel and overtime costs. Another style of policy that one could consider is one that seeks to minimize the incremental cost of a flex, in each period. We explore this in Appendix <ref>, and observe that doing so introduces additional levels of complexity (in particular, estimating the incremental cost of an action online) that prevent it from achieving the magnitude of cost savings achieved by our more simple, “informationally light” policy. § CONCLUSION In this work we studied a variation of typical load balancing problems, in which the load needs to be balanced only at a specific, potentially random, point in time. We focused on three instantiations of this problem: the canonical balls-into-bins problem (used in computing applications), optimal opaque selling strategies (used in inventory management), and parcel delivery (used in e-commerce fulfillment). For these diverse applications we designed practical heuristics that sparingly exert flexibility and provably achieve approximate balance at the end of the horizon while achieving substantial cost savings. This work opens several avenues for future exploration. Though our findings point to our heuristics' broad applicability in load balancing applications where imbalance predominantly stems from stochastic fluctuations, it would be interesting to explore similar ideas in settings with first-order supply-demand imbalances. Secondly, blending our balancing strategies — which heavily focus on rebalancing near the end of the horizon — with existing policies in these domains could present new opportunities. This integration could lead to hybrid models that encapsulate the strengths of both traditional heuristics in these domains and our insights. For instance, for the parcel delivery problem it would be interesting to design provable algorithms that adapt state-of-the-art online VRP solutions to include balancing considerations. Lastly, tighter analyses could yield stronger guarantees in non-asymptotic regimes. These would be particularly beneficial in problems with short or fluctuating time horizons and may require new models to better capture these regimes. § THE BALLS-INTO-BINS MODEL §.§ Analysis of the static policy §.§.§ Algorithm Before providing a formal description of the algorithm, we recall some notation. For t∈ [T], we let _i(t) denote the number of balls in bin i ∈ [N] under the static policy, and use t to denote the bin in which the ball lands at time t. Finally, we let (T) denote the gap of the system at time T under the static policy. A formal description of the static policy is then provided in <ref> (with ties assumed to be broken lexicographically within the .) §.§.§ Proof of <ref> We first state Claim <ref>, with its proof deferred to the end of the section. 
For the never-flex policy , for any a>0, there exists a constant a' > 0 such that ℙ((T) ≥ a √(T)) ≥ a' for large enough T. Plugging a = 2 into Claim <ref>, there exists a' > 0 such that ℙ((T) ≥ 2 √(T)) ≥ a' for large enough T. Now, consider a particular history T∈T with (T) ≥ 2 √(T), and pick i = _i'x_i'(T). By definition, (T) ≥ 2 √(T) implies ∑_t = 1^T 1_t = i≥ T/N+2 √(T). Suppose now that π has full information on the realization of the T random trials, and can arbitrarily set t = j, where j ≠ i, whenever t = i. That is, each time π sets (t)=1 it replaces one of the at least T/N+2√(T) balls that go into bin i with a flex ball that goes into some other bin and decreases (T) by 1. It then follows that, if ≤√(T): 𝔼[(T)] ≥𝔼[(T)|(T) ≥ 2 √(T)] ·ℙ((T) ≥ 2 √(T)) ≥𝔼[(T)-√(T)|(T) ≥ 2 √(T)] · a ≥√(T)· a for large enough T, which contradicts the fact that 𝔼[(T)] ∈𝒪(1). Thus, 𝔼[] ≥𝔼[|(T) ≥ 2 √(T)] ·ℙ((T) ≥ 2 √(T)) ≥√(T)· a for large enough T. We have: ℙ((T) ≥ a √(T)) = ℙ(max_i' x_i'(T) - T/N≥ a √(T)) ≥ℙ(x_1(T)- T/N≥ a √(T)) = ℙ(x_1(T)- T/N/√(T)σ≥a/σ), where σ = 1/N(1-1/N). By the Berry-Esseen Theorem (<cit.>, Chapter XVI.5, Theorem 2): ℙ(x_1(T)- T/N/√(T)σ≥a/σ) ≥ 1-Φ(a/σ)-b/√(T)≥ a', for some constants a', b and large enough T. §.§.§ Lower bound for deterministic policies For a policy π, at most one of the two holds: * M^π≤ a√(T) almost surely, for some a > 0, or * 𝔼[^π(T)] ∈ o(√(T)). We prove the claim by contradiction. Suppose there exists a policy π such that M^π≤ a √(T) for some constant a > 0 and 𝔼[^π(T)] ∈ o(√(T)). When no flexing is involved, by Claim <ref>, ℙ((T) ≥ (a+1) √(T)) ≥ a', for some constant a' > 0 and large enough T. Now, consider a particular history T∈T such that (T) ≥ (a+1) √(T), and consider i ∈_i'x_i'(T). By definition, (T) ≥ (a+1) √(T) implies ∑_t = 1^T 1_t = i≥ T/N+ (a+1)√(T). Suppose now that π knows the exact realization of the T random balls, i.e., knows T, and , for all t such that t = i and (t)=1, can place the ball that would have gone into bin i into any bin j that decreased (T) by 1. Then, if M^π≤ a √(T), π can relocate at most a √(T) balls from bin i to the other bins and thus 𝔼[^π(T)] ≥𝔼[^π(T)|(T) ≥ (a+1) √(T)] ·ℙ((T) ≥ (a+1) √(T)) ≥𝔼[(T)-a √(T)|(T) ≥ (a+1) √(T)] · a' ≥√(T)· a' for large enough T, which contradicts the fact that 𝔼[(T)] ∈ o(√(T)). §.§.§ Proof of <ref> Before proving the theorem, we present a result from the literature to characterize the gap at time t under the always-flex policy. 𝔼[(t)] ∈Θ(1) ∀ t ∈ [T].[The result stated in <cit.> considers more variables than what is presented in <ref>, but their bound reduces to Θ(1) given our assumption that N, q and r are all exogenously given and fixed.] We next define a class of fictional allocation rules that we couple to any flexing policy π. We define the allocation rule 𝒜_ij(t) as 𝒜_ij(t):= j if t(t) = 1 and t = {i,j}, j if t(t) = 1, t = {k,j} for some k ≠ i and min_j' ∈{i,k} x^π_j'(t) = i, 𝒜^π(t) otherwise. The allocation rule 𝒜_ij mimics 𝒜^π, except in two scenarios: when choosing between i and j, 𝒜_ij always places a ball into j; and when choosing between j and k≠ i, 𝒜_ij places a ball into j if the policy π would place a ball into i rather than k in period t (with respect to its own load). At a high level, 𝒜_ij favors bin j over bin i. We have the following lemma. Suppose t̅ balls are thrown into N bins according to 𝒜_ij, with each ball being a flex ball with probability q. Moreover, let Y_k(t̅), k ∈ [N], denote the number of flex balls, among the t̅ throws, that are placed into bin k. 
Then, there exists some constant α > 0 such that * 𝔼[Y_j(t̅)-Y_i(t̅)] ≥q/N2t̅, * ℙ(Y_j(t̅)-Y_i(t̅) ≤q/2 N2t̅) ≤ e^-αt̅. The next lemma presents Binomial tail bounds that we use throughout our analysis. Consider two binomial random variables X_i(t) ∼ B(t,p_i), X_j(t) ∼ B(t,p_j), where X_i(t) and X_j(t) need not be independent. Then, for any ϵ > 0: * ℙ(|(X_j(t) - X_i(t)) - 𝔼[X_j(t) - X_i(t)]| ≥ϵ t) ≤ 4e^-ϵ^2t/2, * ℙ(|(X_j(t) - X_i(t)) - 𝔼[X_j(t) - X_i(t)]| ≥ϵ√(t log(t))) ≤ 4 t^-ϵ^2/2. We leverage <ref> and <ref> to prove <ref> next, and defer their proofs to Appendix <ref>. Let = T - ·√(T log(T)). By construction, the static policy starts flexing at +1. Let E denote the event that the gap is non-zero at T. Then, 𝔼[(T)] = 𝔼[(T)| E] ℙ(E) + 𝔼[(T)| E^c] ℙ(E^c) = 𝔼[(T)| E] ℙ(E), where the second equality follows from the fact that (T) = 0 given E^c, by definition. Let E^1 denote the event that the maximally and minimally loaded bins at time T never have the same loads between and T, and E^2 the event that they do. Letting τ denote the last period in which the loads of the maximally and minimally loaded bins at time T were equal, we have: 𝔼[(T)] = (T) | E^1E^1 + (T) | E^2E^2 ≤ TE^1 + T-τ| E^2E^2, where the inequality follows from the fact that under E^2, in the worst case, all balls between τ and T land in the maximally loaded bin, resulting in (T) ≤ T-τ. For i, j ∈ [N], we use E_ij^1 to denote the event that (a) i and j are respectively the maximally and minimally loaded bins at time T, and (b) the loads of i and j are never the same between and T. We moreover use E_ij^2 to denote the event that (a) i and j are respectively the maximally and minimally loaded bins at time T and do not have the same loads, but (b) their loads were the same at some point between and T-1. Formally: E_ij^1 := { T∈T| i = max_k∈[N]_k(T), j = min_k∈[N]_k(T), _i(t) ≠_j(t), ∀ t ∈{, … , T}} E_ij^2 := { T∈T| i = max_k∈[N]_k(T), j = min_k∈[N]_k(T), _i(t) = _j(t) for some t ∈{, … , T-1},_i(T) ≠_j(T)} Since E ⊆∪_i ≠ j E_ij^1 ∪ E_ij^2, we further bound (<ref>) as follows: (T) ≤ T ∑_i≠ jE_ij^1 + ∑_t = 1^T-∑_i≠ j t E_ij^2, T-τ = t ≤ T∑_i≠ jE_ij^1 + ∑_t = 1^T-∑_i≠ j t E_ij^2 | T-τ = t. We decompose the remainder of the proof into two steps. In Step 1, we first show that there exist constants β_1 > 0, α_1> 0 such that for large enough T ℙ(E_ij^1) ≤β_1 T^-2 + e^-α_1(T-) ∀ i,j. In Step 2 we argue that there exist constants α_2 > 0, α_3 > 0 such that E_ij^2 | T-τ = t≤ 4 e^-α_2 t + e^-α_3 t ∀ i,j. Plugging (<ref>) and (<ref>) back into (<ref>), we then obtain that, for large enough T, (T)≤ ∑_i≠ j T (β_1 T^-2 + e^-α_1(T-)) + ∑_i≠ j∑_t = 1^T- t(4 e^-α_2 t + e^-α_3 t) ∈𝒪(1), which completes the proof of the theorem. Step 1: Bound ℙ(E_ij^1), for all i, j ∈ [N]. We state the following lemma, and defer its proof to Appendix <ref>. Consider any policy π that (a) sets (t) = 0 for t ≤ t_1 (b) sets (t) = 1 for t_1 < t ≤ t_2, where t_2 - t_1 ≥√(t_2log t_2). Define F_ij^1 = {t_2∈t_2| i = max_k∈[N]_k(t_2), j = min_k∈[N]_k(t_2), _i(t) ≠_j(t), ∀ t ∈{t_1, … , t_2}}. Then, ℙ(F_ij^1) ≤βt_1^-2 + e^-α_1 (t_2 - t_1) ∀ i,j for some constants β > 0, α_1 > 0. Applying <ref> to event E_ij^1, with t_1 = and t_2 = T, for all i, j∈ [N] we have that ℙ(E_ij^1) ≤β^-2 + e^-α_1 (T-). Using the fact that = T - √(Tlog(T)) we obtain Step 1. Step 2: Bound E_ij^2 | T-τ = t for all i,j ∈ [N], t ∈{1,…,T-}. Again, we establish a general lemma for the probability bound and defer its proof to Appendix <ref>. Consider any policy π that sets (t) = 1 for t_1 < t ≤ t_2. 
Define F_ij^2 := { t_2∈t_2| i = max_k∈[N]_k(t_2), j = min_k∈[N]_k(t_2), _i(t) = _j(t) for some t ∈{t_1, … , t_2-1},_i(t_2) ≠_j(t_2)}. Then, for τ := max{t | _i(t) = _j(t), t ∈{t_1, … , t_2-1}}, we have ℙ(F_ij^2|t_2-τ = t) ≤ 4 e^-α_2 t + e^-α_3 t, ∀ i,j for some constants α_2 > 0, α_3 > 0. Applying <ref> to E_ij^2, with t_1 = and t_2 = T, we have that ℙ(E_ij^2| T-τ = t) ≤ 4 e^-α_2 t + e^-α_3 t ∀ i,j for some α_2, α_3 > 0, which completes the proof. §.§.§ Proofs of Auxiliary Results Let M = ∑_t = 1^t̅1_t = {i,j}. We have: 𝔼[Y_j(t̅)-Y_i(t̅)| M] = 𝔼[Y_j(t̅)| M] -𝔼[Y_i(t̅)| M] = M + ∑_t=1^t̅ℙ({t = {j,k} for some k ≠ i}∩{𝒜_ij(t) = j}| M) -∑_t=1^t̅ℙ({t = {i,k} for some k ≠ j}∩{𝒜_ij(t) = i}| M), where the second equality follows from the fact that, by construction, whenever t = {i,j} (this happens M times, by definition), the ball was allocated to bin j. By symmetry, i and j are equally likely to be included in F(t). Combining this with the 𝒜_ij construction, which places a ball into bin j whenever the static policy would have placed it into i rather than k, we have: 𝔼[Y_j(t̅)-Y_i(t̅)| M] ≥ M 𝔼[Y_j(t̅)-Y_i(t̅)] = 𝔼[𝔼[Y_j(t̅)-Y_i(t̅)|M]] ≥𝔼[M] = q/N2t̅ and we obtain <ref> (i). We now prove <ref> (ii). Let W_kt be the random indicator variable representing whether a ball in period t is a flex ball that goes into bin k, for k ∈ [N], t∈ [t̅]. Then, by definition, Y_k(t̅) = ∑_t = 1^t̅W_kt ∀ k∈[N]. Define moreover Z_τ = ∑_t = 1^τ (W_jt - W_it) - q/N2τ for τ = 1, 2, ..., t̅, with Z_0 = 0. Then, ℙ(Y_j(t̅)-Y_i(t̅) ≤q/2 N2t̅) = ℙ(∑_t = 1^t̅ (W_jt - W_it) ≤q/2 N2t̅) =ℙ(Z_t̅ - Z_0 ≤ -q/2 N2t̅). We argue that the sequence Z_0, Z_1, Z_2 ... is a sub-martingale: 𝔼[Z_τ+1|Z_0, ..., Z_τ] =𝔼[∑_t = 0^τ+1 W_jt - W_it - q/N2(τ+1) | Z_0, ..., Z_τ] = 𝔼[W_j,τ+1 - W_i,τ+1 -q/N2 + Z_τ | Z_0, ..., Z_τ] = 𝔼[W_j,τ+1 - W_i,τ+1 | Z_0, ..., Z_τ] -q/N2 + Z_τ. Note that a flex ball lands in bin j, i.e., W_jt-W_it = 1, if one of two events occurs: (1) t = {i,j}, in which case the ball is always thrown into bin j, or (2) t = {j,k} for some k ≠ i,j, and the ball is thrown into bin j. The first event occurs with probability q/N2, and the second with probability ℙ({t = {j,k} for some k ≠ i}∩{𝒜_ij(t) = j}). Thus, W_jt-W_it = 1 = q/N2 + ℙ({t = {j,k} for some k ≠ i}∩{𝒜_ij(t) = j}). Via similar reasoning, it follows that W_jt-W_it = -1 = ℙ({t = {i,k} for some k ≠ j}∩{𝒜_ij(t) = i}). Plugging this into (<ref>): 𝔼[Z_τ+1|Z_0, ..., Z_τ] =q/N2 + ℙ({τ + 1 = {j,k} for some k ≠ i}∩{𝒜_ij(τ + 1) = j} | Z_0, ..., Z_τ) - ℙ({τ + 1 = {i,k} for some k ≠ j}∩{𝒜_ij(τ + 1) = i} | Z_0, ..., Z_τ) - q/N2 + Z_τ ≥ Z_τ, where the final inequality follows from the same arguments as those used to derive <ref> (i) above. Having established (Z_τ)_t∈[T] is a submartingale, we apply Azuma's inequality <cit.> to (<ref>), and obtain: ℙ(Y_j(t̅)-Y_i(t̅) ≤q/2 N2t̅) ≤ e^-2 ·(q/2 N2t̅)^2/t̅≤ e^-αt̅ for some constant α > 0. Since (<ref>) holds for any Z_0, ..., Z_τ, we have ℙ({t = {j,k} for some k ≠ i}∩{𝒜_ij(t) = j} | Z_0, ..., Z_τ) ≥ℙ({t = {i,k} for some k ≠ j}∩{𝒜_ij(t) = i} | Z_0, ..., Z_τ). We first prove (i). We have: ℙ(|(X_j(t) - X_i(t)) - 𝔼[X_j(t) - X_i(t)]| ≥ϵ t) ≤ℙ(|(X_j(t) - X_j(t)| + |X_i(t) - X_i(t)| ≥ϵ t) ≤ℙ(|X_j(t) - 𝔼[X_j(t)]| ≥ϵ/2 t) + ℙ(|X_i(t) - 𝔼[X_i(t)]| ≥ϵ/2 t). By Hoeffding's inequality <cit.>, for all k ∈ [N], ℙ(|X_k(t) - 𝔼[X_k(t)]| ≥ϵ/2 t)≤ 2exp(-ϵ^2t/2). Plugging this back into (<ref>), we obtain: ℙ(|(X_j(t) - X_i(t)) - 𝔼[X_j(t) - X_i(t)]| ≥ϵ t) ≤ 4e^-ϵ^2t/2. 
For (ii), we similarly have that ℙ(|(X_j(t) - X_i(t)) - 𝔼[X_j(t) - X_i(t)]| ≥ϵ√(t log(t))) ≤ℙ(|X_j(t) - 𝔼[X_j(t)]| ≥ϵ/2√(t log(t))) + ℙ(|X_i(t) - 𝔼[X_i(t)]| ≥ϵ/2√(t log(t))). As before, by Hoeffding's inequality, for all k ∈ [N], ℙ(|X_k(t) - 𝔼[X_k(t)]| ≥ϵ/2√(t log(t))) ≤ 2exp(-ϵ^2 tlog t/2t) = 2t^-ϵ^2/2. We thus obtain: ℙ(|(X_j(t) - X_i(t)) - 𝔼[X_j(t) - X_i(t)]| ≥ϵ√(t log(t)))≤ 4 t^-ϵ^2/2. We prove the claim via the coupling between 𝒜^π and 𝒜_ij. We first introduce some additional notation. For bin k ∈ [N], let x_k'(t) denote the load in bin k and time t under the fictional allocation policy 𝒜_ij (see <ref>). We let Y_k denote the number of flex balls that land in bin k between t_1+1 and t_2 under 𝒜_ij, and define Y := ∑_k = 1^N Y_k. Note that Y∼ B(t_2-t_1,q). We use T = t_2-Y to denote the total number of random (non-flex) throws throughout the entire time horizon, and use Z_k to denote the number of balls that landed in bin k during the T random trials. Thus, for k∈ [N], x'_k(t_2) = Y_k + Z_k. Conditioned on the event _i(t) > _j(t) ∀ t ∈{t_1,...,t_2}, 𝒜_ij and 𝒜^π make identical decisions, and as a result x'_k(t) = _k(t) ∀ t ∈{t_1,...,t_2}. Thus, we have: F_ij^1 := {t_2∈t_2|_i(t_2) = max_k_k(t_2), _j(t_2) = min_k_k(t_2), _i(t) > _j(t) ∀ t ∈{t_1,...,t_2}, x_i'(t) > x_j'(t) ∀ t ∈{t_1,...,t_2}} ⊆{t_2∈t_2| x_i'(t) > x_j'(t) ∀ t ∈{t_1,...,t_2}} ⊆{Y_i + Z_i > Y_j + Z_j}. We use (<ref>) to bound F_ij^1:[We will leverage this approach of decomposing the respective loads of bins i and j into flex and non-flex throws in many of the remaining proofs.] F_ij^1 ≤Y_i + Z_i > Y_j + Z_j ≤Z_i - Z_j > q/2N2 (t_2-t_1)+ Y_j - Y_i < q/2N2 (t_2-t_1). Note that the last inequality holds because we need at least one of Z_i - Z_j > q/2N2 (t_2-t_1) and Y_j - Y_i < q/2N2 (t_2-t_1) for Y_i + Z_i > Y_j + Z_j to hold. Recall, t_2-t_1 ≥√(Tlog T) by assumption. Thus, for T∈{t_1,…,t_2} we have: Z_i - Z_j > q/2N2 (t_2-t_1) | T = t ≤Z_i - Z_j > q/2N2√(t_2log t_2) | T = t ≤Z_i - Z_j > q/2N2√(TlogT) | T = t = Z_i - Z_j > √(6)√(TlogT) | T = t, where (<ref>) follows from plugging in = 2√(6)N(N-1)/q. Recall, Z_i, Z_j respectively denote the number of random balls that landed in bins i and j during T periods. Thus, Z_i and Z_j are both binomially distributed, with Z_i = Z_j = T/N. Applying <ref> to (<ref>): Z_i - Z_j > q/2N2 (t_2-t_1) | T = t≤β t^-2≤β t_1^-2 for some constant β > 0. Moreover, applying <ref> to Y_j and Y_i, we have: ℙ(Y_j - Y_i ≤q/2 N2(t_2-t_1) )≤ e^-α_1(t_2-t_1) for some constant α_1 > 0.[Here we abuse notation when using Y_j and Y_i above in omitting their dependency on (t_2-t_1).] Plugging (<ref>) and (<ref>) into (<ref>), we obtain the result. ℙ[F_ij^1] ≤β t_1^-2 + e^-α_1(t_2-t_1), ∀ i,j. As for the proof of <ref>, we analyze 𝒜_ij. Conditional on τ, we fix bin k and we re-define Y_k to denote the number of flex balls that go into bin k between τ+1 and t_2 as a result of 𝒜_ij. As before, we let Y = ∑_k = 1^N Y_k. Note that Y∼ B(t_2-τ,q). Moreover, let Z_k denote the number of balls that landed in bin k from the random (non-flex) throws between τ+1 and t_2. Thus, for k∈ [N], the total number of balls that land in bin k from τ+1 to t_2 is x'_k(t_2) := Y_k + Z_k. Moreover, conditional on F_ij^2 we have _i(t) ≠_j(t), ∀ t ∈{τ + 1,t_2}. Combined with the fact that i and j are respectively the most- and least-loaded bins at t_2, we have that _i(t) > _j(t) ∀ t ∈{τ + 1,...,t_2}. 
By construction of the fictional allocation rule 𝒜_ij (see <ref>), conditional on F_ij^2, 𝒜_ij and 𝒜^π make identical decisions in {τ+1, .., t_2}. By the same argument as that used in the proof of <ref> (see (<ref>)), we obtain the following bound: F_ij^2| t_2-τ = t ≤Y_i + Z_i > Y_j + Z_j| t_2-τ = t ≤Z_i - Z_j > q/2 N2 (t_2-τ)| t_2-τ = t + Y_j - Y_i < q/2N2 (t_2-τ)| t_2-τ = t. Recall, Z_i, Z_j respectively denote the number of random balls that landed in bins i and j from τ+1 to t_2. Thus, Z_i and Z_j are both binomially distributed, with Z_i = Z_j = q t_2-τ/N. We then have: Z_i - Z_j > q/2N2 (t_2-τ) | t_2-τ = t ≤ 4e^-α_2 t for some constant α_2, where (<ref>) is an application of <ref> to the binomial random variables Z_i, Z_j. By <ref>, we also have: Y_j - Y_i < q/2N2 (t_2-τ)| t_2-τ = t≤ e^-α_3 t for some constant α_3. Plugging these two bounds into (<ref>), we obtain the lemma. §.§ Analysis of the dynamic policy §.§.§ Algorithm We provide a formal description of the semi-dynamic policy in <ref>. For t∈ [T], let _i(t) be the number of balls in bin i ∈ [N] under the semi-dynamic policy, and t be the bin in which the ball lands at time t. §.§.§ Proof of <ref> Let = inf{t':(t) ≥(T-t)q/N ∀ t ∈{t', … , T}} be the start of the final set of consecutive periods where flexing is exerted. As before, let E = {T∈T|(T) ≠ 0}. As in the proof of <ref>, it suffices to show 𝔼[(T)| E] ℙ(E) ∈𝒪(1). To do so, we decompose event E into ∪_i ≠ j (E_ij^1 ∪ E_ij^2), where E_ij^1 := { T∈T| i = max_k∈[N]_k(T), j = min_k∈[N]_k(T), _i(t) ≠_j(t), ∀ t ∈{, … , T}} E_ij^2 := { T∈T| i = max_k∈[N]_k(T), j = min_k∈[N]_k(T), _i(t) = _j(t) for some t ∈{, … , T-1},_i(T) ≠_j(T)} Under event E_ij^2, we denote by τ the last time that _i(t) = _j(t). Then, for any , E_ij^2 satisfies the condition of F_ij^2 in <ref> with t_1 = and t_2 = T. Thus, by <ref>, ℙ(E_ij^2| T-τ = t) ≤ 4 e^-α_2 t + e^-α_3 t ∀ i,j, for some constants α_2, α_3. Then, for any : (T) | E_ij^2E_ij^2 ≤∑_t = 1^T-(T) | E_ij^2 ∩ T-τ = tE_ij^2 ∩ T-τ = t ≤∑_t = 1^T- tE_ij^2 ∩ T-τ = t≤∑_t = 1^T- tE_ij^2 | T-τ = t ≤∑_t = 1^T- t (4 e^-α_2 t + e^-α_3 t) ≤ b_3 ∀ i,j for some constant b_3. Since 𝔼[(T)| E] ℙ(E) ≤∑_i ≠ j𝔼[(T)| E_ij^1] ℙ(E_ij^1) + 𝔼[(T)| E_ij^2] ℙ(E_ij^2), it suffices to show that 𝔼[(T)| E_ij^1] ℙ(E_ij^1) ≤ b_4 for some constant b_4 > 0. We have: 𝔼[(T) | E_ij^1] ℙ(E_ij^1) = ∑_t = 1^T𝔼[(T) | E_ij^1,T-T^⋆ = t] ℙ(E_ij^1,T-T^⋆ = t) ≤∑_t = 1^T𝔼[(T^⋆)+(T-T^⋆) | E_ij^1,T-T^⋆ = t]ℙ(E_ij^1,T-T^⋆ = t) ≤∑_t = 1^T𝔼[(T^⋆)+(T-T^⋆) | E_ij^1,T-T^⋆ = t]ℙ(E_ij^1 | T-T^⋆ = t) We first bound (T^⋆). By definition of T^⋆, (T^⋆-1) < (T-T^⋆+1)q/N. Since (T^⋆) ≤(T^⋆-1) + 1, this leads to: (T^⋆)< 1+(T-T^⋆+1)q/N = (T-T^⋆)q/N + q/N+1. The following lemma will help us bound ℙ(E_ij^1 | T-T^⋆ = t). We defer its proof to the end of the section. Consider any policy π that sets (t) = 1 for t_1 < t ≤ t_2. Define F_ij^1 = {t_2∈t_2| i = max_k∈[N]_k(t_2), j = min_k∈[N]_k(t_2), _i(t) ≠_j(t), ∀ t ∈{t_1, … , t_2}}. Suppose _i(t_1) - _j(t_1) ≤(t_2-t_1)q + a for some constants ≤1/5 N2, a > 0. Then, there exist constants α_0, α_1 and t_0 such that F_ij^1 | _i(t_1) - _j(t_1) ≤(t_2-t_1)q + a≤ 4 e^-α_0(t_2 - t_1)+e^-α_1(t_2 - t_1) ∀ t_2 - t_1 ≥ t_0. To see that satisfies the conditions stated in the lemma, note that: _i(T^⋆) - _j(T^⋆) ≤max_i'_i'(T^⋆) - min_i'_i'(T^⋆) ≤ N (T^⋆). Plugging (<ref>) into (<ref>), we have: _i(T^⋆) - _j(T^⋆) ≤ N ((T-T^⋆)q/N + q/N+1) =(T-T^⋆)q + q+N. Recall, E_ij^1 := {T∈T| i = max_k∈[N]_k(T), j = min_k∈[N]_k(T), _i(t) ≠_j(t), ∀ t ∈{, … , T}}. 
Applying <ref> to E_ij^1, with t_1 = , t_2 = T, = = 1/5 N2 and a = q+N, there exist constants α_0, α_1, t_0 such that: E_ij^1 |≤ 4 e^-α_0(T-T^⋆)+e^-α_1(T-T^⋆) ∀ T-T^⋆≥ t_0. We plug these upper bounds back into (<ref>) and obtain: 𝔼[(T) | E_ij^1] ℙ(E_ij^1) ≤∑_t = 1^T[ q/N(1+t)+1+t ]ℙ(E_ij^1 | T-T^⋆ = t) ≤∑_t = 1^t_0[( q/N+1)(1+t) ] + ∑_t = t_0+1^T[( q/N+1)(1+t) ](4 e^-α_0t+e^-α_1t) ≤ b_4. for some constant b_4 > 0. In this proof we similarly make use of the fictional allocation rule 𝒜_ij (see <ref>). We define the allocation rule 𝒜_ij(t) as 𝒜_ij(t):= j if t(t) = 1 and t = {i,j}, j if t(t) = 1, t = {k,j} for some k ≠ i and min_j' ∈{i,k}_j'(t) = i, 𝒜^d(t) otherwise. Fix bin k, and let Y_k denote the number of flex balls that go into bin k between t_1 + 1 and t_2 as a result of 𝒜_ij. We moreover let Y := ∑_k = 1^N Y_k. Let Z_k denote the number of balls that land in bin k during the random (non-flexible) throws between t_1 + 1 and t_2. Thus, for k∈ [N], the total number of balls that land in bin k is x'_k(t_2) := _k(t_1) + Y_k + Z_k. Moreover, under F_ij^1 we have _i(t) > _j(t) ∀ t ≥ t_1 + 1. That is, under event F_ij^1, 𝒜_ij and 𝒜^π make identical decisions in {t_1, .., t_2}. By the same argument as in (<ref>), this implies F_ij^1 | _i(t_1) - _j(t_1) ≤(t_2-t_1)q + a ≤ _i(t_1) + Z_i + Y_i > _j(t_1) + Z_j + Y_j | _i(t_1) - _j(t_1) ≤(t_2-t_1)q + a ≤ ℙ(_i(t_1) - _j(t_1)> (Z_j- Z_i) + (Y_j-Y_i) | _i(t_1) - _j(t_1) ≤(t_2-t_1)q + a) ≤ ℙ((t_2-t_1)q + a> (Z_j- Z_i) + (Y_j-Y_i)) ≤ ℙ(Z_i - Z_j ≥(t_2-t_1)q + a )+ ℙ(Y_j-Y_i ≤ 2((t_2-t_1)q + a)), where (<ref>) comes from the conditionality that _i(t_1) - _j(t_1)≤(t_2-t_1)q + a, and the last inequality comes from the fact that at least one of {Z_i - Z_j ≥(t_2-t_1)q + a} and {Y_j-Y_i ≤ 2((t_2-t_1)q + a)} must hold for (t_2-t_1)q + a> (Z_j- Z_i) + (Y_j-Y_i) to hold. Recall, Z_i, Z_j respectively denote the number of random balls that landed in bins i and j between t_1+1 and t_2. Thus, Z_i and Z_j are both binomially distributed, with Z_i = Z_j = (1-q) t_2-t_1/N. Thus, ℙ(Z_i - Z_j ≥(t_2-t_1)q + a ) = ℙ((Z_i - Z_j) - 𝔼[Z_i-Z_j] ≥(t_2-t_1)q + a ) ≤ℙ((Z_i - Z_j) - 𝔼[Z_i-Z_j] ≥(t_2-t_1)q ) ≤ 4 e^-α_0(t_2-t_1), for some constant α_0 > 0. Here, the last inequality follows from <ref> (i). We now analyze the probability bound on Y_j - Y_i. Since ≤1/5 N2, there exists a constant t_0 > 0 such that, for t_2 - t_1 ≥ t_0, 2((t_2-t_1)q + a) ≤q/2N2(t_2-t_1). Then, for all t_2-t_1 ≥ t_0, we have: ℙ(Y_j-Y_i ≤ 2((t_2-t_1)q + a)) ≤ℙ(Y_j - Y_i ≤q/2 N2(t_2-t_1)) ≤ e^-α_1(t_2-t_1). for some constant α_1 > 0, where the second inequality follows from <ref> (ii). Plugging these two bounds back into (<ref>), we obtain the result. §.§.§ Proof of <ref> To provide an upper bound on the expected number of flexes, we condition in the following manner: 𝔼[T-] ≤∑_k = 1^√(T)𝔼[T-| T-∈ [(k-1) √(T),k √(T))] ·ℙ(T-∈ [(k-1) √(T),k √(T))) ≤∑_k = 1^√(T) k √(T)·ℙ(T-∈ [(k-1) √(T),k √(T))) We assume without loss of generality that √(T) above is an integer as it changes 𝔼[T-] by an additive factor of at most √(T). We will show that ∑_k = 1^√(T) k ·ℙ(T-∈ [(k-1) √(T),k √(T))) ≤ b_5 for some constant b_5 > 0, so that (<ref>) ∈𝒪(√(T)). We have: ∑_k = 1^√(T) k ·ℙ(T-∈ [(k-1) √(T),k √(T))) = ∑_k = 1^√(T)ℙ(T-≥ (k-1) √(T))_(I). Below we will show that (I) ≤ N(η_k + b_6/√(T)), ∀ k for some values η_k that depend on k but not on T. 
Plugging this into the above, we obtain: ∑_k = 1^√(T) k ·ℙ(T-∈ [(k-1) √(T),k √(T)))≤∑_k = 1^√(T) N (η_k + b_6/√(T)) ≤ N b_6 + N ∑_k = 1^√(T)η_k_(II)≤ b_5, where it remains to be shown that (II) can also be bounded by a constant. We first bound (I). By definition, () ≥a(T-)q/N. Thus, for k ∈{1,…,√(T)}: T-≥ (k-1) √(T)() ≥a(k-1)√(T) q/N. Hence, we have: ℙ(T-≥ (k-1) √(T)) ≤ℙ(() ≥a(k-1) q/N√(T)) ≤ℙ(() ≥a(k-1) q/N√()), where the second inequality follows from ≤ T. Recall that () = max_i_i()-/N. Applying a union bound to (<ref>), we obtain: ℙ(T-≥ (k-1) √(T)) ≤ N ·ℙ(_1() - /N ≥a(k-1) q/N√()) Denote the standard normal distribution by 𝒩(0,1) and the corresponding CDF by Φ. Applying the Berry-Esseen Theorem (<cit.>, Chapter XVI.5, Theorem 2) we have: ℙ(_1() - /N ≥a(k-1) q/N√()|) = ℙ(_1() - /N/√(·1/N(1-1/N))≥a(k-1) q/√(N-1)|) ≤ 1-Φ(a(k-1) q/√(N-1))+b/√() = η_k + b/√(), where b > 0 is a constant (dependent on N) and η_k = 1-Φ(a(k-1) q/√(N-1)) depends only on k. Noting that ≥() ≥a(T-)q/N, we have that ≥a q/N+aqT. Plugging this back into (<ref>), and defining b_6 = b √(a q + N/a q), we have: ℙ(_1() - /N ≥a(k-1) q/N√()|) ≤η_k + b_6/√(T) ℙ(() ≥a(k-1) q/N√()) ≤ N(η_k + b_6/√(T)). We next show inequality (II). Since η_k = 1-Φ(a(k-1) q/√(N-1)), (II) ≤∑_k = 1^√(T)(1-Φ(a(k-1) q/√(N-1))) ≤⌈√(N-1)/aq⌉∑_k = 1^∞(1-Φ(k-1)) ≤⌈√(N-1)/aq⌉∫_x = -1^∞(1-Φ(x) )dx. Since ∫_x = -1^∞(1-Φ(x) )dx ≤ 1 + ∫_x = 0^∞(1-Φ(x) )dx = 1+ √(1/2 π), (II) is upper bounded by a constant. § APPLICATION: OPAQUE SELLING Notation. Given policy π, let 𝒞^π denote its long-run average inventory costs. Formally, 𝒞^π = ℋ^π + ℛ^π + 𝒟^π, where each of the three terms respectively correspond to of long-run average holding, replenishment costs ℛ^π, and discount costs 𝒟^π. Let R^π be the random variable representing the length of a replenishment cycle under π, and D^π=∑_t=1^R^πt(t) be the random variable that represents the number of times the opaque option is exercised during one cycle. Similar to Equations (1) and (2) in <cit.>, by the Renewal Reward Theorem <cit.> we have 𝒦^π=K/^π, ℋ^π = (2NS+1)^π-(^π)^2/2^π h, and 𝒟^π= D^π/^π δ. We can thus write: 𝒞^π = K/^π + h/2(2+1 - (^π)^2/^π) + D^π/^π δ. Given (<ref>), our objective is decreasing in the expected length of the replenishment cycles, i.e., in ^π. The following proposition formalizes that 𝒞^⋆ is indeed a lower bound on the cost of any policy. We defer its proof to Appendix <ref>. For any policy π we have 𝒞^π≥𝒞^⋆. In the remainder of the section, we use the superscripts nf, a, s and d to respectively refer to the “never-flex”, “always-flex”, static and policies. §.§ Benchmark comparisons We begin by stating our main technical results for this section, regarding the static and policies designed for the vanilla balls-into-bins model. <ref> shows that the expected length of a replenishment cycle under the static opaque selling policy is within an additive constant of the maximum possible cycle length N(S-1)+1. 𝔼[] = NS - , where ∈𝒪(1). Plugging <ref> into (<ref>), we obtain the following upper bound on the long-run average cost of . We include the proof in Appendix <ref>. 𝒞^s ≤K/NS- + h/2(NS+1 + ) + δ q ·, where ∈𝒪(1), ∈Θ(√(log S/S)). <ref> similarly establishes that the dynamic policy achieves long replenishment cycles in expectation. 𝔼[] = NS - , where ∈𝒪(1). Using this characterization of the expected replenishment cycle length, we obtain the following upper bound on the long-run average cost of the dynamic opaque selling policy. 𝒞^d ≤K/NS- + h/2(NS+1+) + δ q , where ∈𝒪(1), ∈𝒪(1/√(S)). 
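Both of the bounds above are obtained by feeding the cycle-length and discount characterizations into the renewal-reward decomposition of the long-run average cost stated at the beginning of this section. The small helper below evaluates that decomposition; the numerical inputs are illustrative placeholders rather than values taken from the analysis.

```python
def long_run_cost(K, h, delta, N, S, ER, ER2, ED):
    """Long-run average cost from the renewal-reward decomposition:
    ordering K/E[R], holding h*((2NS+1)E[R]-E[R^2])/(2E[R]), discounts delta*E[D]/E[R]."""
    ordering = K / ER
    holding = h * ((2 * N * S + 1) * ER - ER2) / (2 * ER)
    discount = delta * ED / ER
    return ordering + holding + discount

# Placeholder per-cycle moments for two policies (N = 2 types, S = 100 units each).
N, S, K, h, delta = 2, 100, 50.0, 0.01, 1.0
print(long_run_cost(K, h, delta, N, S, ER=185.0, ER2=185.0**2 + 40.0, ED=0.0))  # never-flex-like
print(long_run_cost(K, h, delta, N, S, ER=198.0, ER2=198.0**2 + 10.0, ED=4.0))  # dynamic-like
```

The two results above supply exactly the expected cycle lengths and discount counts (up to constants) that enter this computation for the static and dynamic policies.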
We defer the proofs of the above two results to Appendix <ref>. Given our bounds on the costs of the static and the dynamic policy, we are now ready to compare them to the different benchmarks of interest. As our focus is on the regime in which S is large, we abuse big-O notation to characterize the scaling of the differences between policies and benchmarks as a function of K,h,δ. We begin by comparing the static and dynamic policies to each other. Our results in <ref> and <ref> tell us that the per-period cost of the dynamic policy asymptotically outperforms the static one by an additive difference of δ q Ω(√(log S/S)). <ref> below formalizes this comparison. 𝒞^s - 𝒞^d ≥ -K ·𝒪(1/S^2) - h·𝒪(1) + δ q ·Ω(√(log S/S)). The following results, shown in <cit.>, establish bounds on the expected cycle lengths of the never-flex and always-flex policies. 𝔼[] = NS - , where ∈Ω(√(S)). 𝔼[] = NS - , where ∈𝒪(1). Using these results, <ref> summarizes the costs incurred by every policy considered. We use OPT as a shorthand for the fictitious policy which achieves 𝒞^⋆. We relegate formal proofs of these results (which simply follow from plugging in the expected cycle lengths and discount costs previously derived), to <ref>. We use this to provide bounds on the performance of the static and dynamic policies relative to the three benchmarks, as a function of K, h, and δ. The proofs of the results below can be found in <ref>. For the static policy, we have: - 𝒞^s ≥ K ·Ω(1/S^3/2) +h·Ω(√(S)) -δ q ·𝒪(√(log S/S)) - 𝒞^s ≥ -K ·𝒪(1/S^2) - h ·𝒪(1) + δ q (1 - 𝒪(√(log S/S))) - 𝒞^⋆≤ K ·𝒪(1/S^2) + h·𝒪(1) + δ q ·𝒪(√(log S/S)) For the dynamic policy, we have: - ≥ K ·Ω(1/S^3/2) +h·Ω(√(S)) -δ q ·𝒪(1/√(S)) - ≥ -K ·𝒪(1/S^2) - h ·𝒪(1) + δ q (1- - 𝒪(1/√(S))) - 𝒞^⋆≤ K ·𝒪(1/S^2) + h·𝒪(1) + δ q ·𝒪(1/√(S)) §.§ Analysis of the static opaque selling policy We leverage the following insight, derived by <cit.>, for our results. Consider any policy π designed for the opaque selling problem, and let x^π_i(t) be the load of the analogous balls-into-bins model under π. If π is also used to govern the allocation rule for the analogous balls-into-bins model, then: ℙ(^π≤ t) = ℙ(max_i x^π_i(t) ≥) ∀ t ∈ []. §.§.§ Proof of <ref> Since (t) := max_j _j(t) - t/N, <ref> is equivalent to ℙ(R^π≤ t) = ℙ((t) ≥ S - t/N) ∀ t ∈ℤ^+. Let y(t) be the the vector where the i-th component denotes the load of the i-th most loaded bin minus the average load when one follows the always-flex policy. Then, <cit.> define a potential function on the load imbalance: Γ(t) = ∑_i = 1^N exp(c_1 ϵ y_i(t)) + ∑_i = 1^N exp(-c_1 ϵ y_i(t)), where c_1 and ϵ are constants that depend on N but not on S. Since (t) = y_1(t), we have Γ(t) ≥ e^c_1 ϵ(t). We also make use of the following proposition. There exists a constant c_2 > 0 such that 𝔼[Γ(t)] ≤c_2/ϵ^7N ∀ t ≥ 0. These results allow us to prove the following proposition, which can be viewed as a special case of Equation (18) in <cit.>. There exist constants ,β > 0 such that, for any η > 0, ℙ((t) ≥η) ≤β e^-·η ∀ t ≥ 0. For any η≥ 0, ℙ((t) ≥η) = ℙ(e^c_1 ϵ(t)≥ e^c_1 ϵη) ≤ℙ(Γ(t) ≥ e^c_1 ϵη). By <ref>, there exists a constant c_2 > 0 such that 𝔼[Γ(t)] ≤c_2/ϵ^7N. Using this fact, we obtain: ℙ((t) ≥η) ≤ℙ(Γ(t) ≥𝔼[Γ(t)] e^c_1 ϵη/c_2/ϵ^7N) ≤c_2/ϵ^7 N e^-c_1 ϵη, where the third inequality follows from Markov's inequality. Taking = c_1 ϵ and β = c_2/ϵ^7 N completes the proof. Finally, the following proposition relates the inventory model to the balls-into-bins model. For any policy π, 𝔼[R^π] = N(S-1)+2 - ∑_t = S^N(S-1)+1ℙ((t) ≥ S- t/N). 
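Before turning to the proof, the following Monte Carlo check makes the identity concrete for the never-flex policy, where a cycle ends the first time some product type has been demanded S times and Gap(t) ≥ S − t/N is equivalent to max_i x_i(t) ≥ S. The parameter values and the convention for when a cycle ends are simplifying assumptions of this sketch.

```python
import numpy as np

def check_cycle_identity(N=2, S=30, reps=5000, seed=0):
    """Monte Carlo illustration of
       E[R] = N(S-1)+2 - sum_{t=S}^{N(S-1)+1} P(Gap(t) >= S - t/N)
    for the never-flex policy (each demand picks a product type uniformly)."""
    rng = np.random.default_rng(seed)
    T = N * (S - 1) + 1
    total_R = 0.0
    exceed = np.zeros(T + 1)          # exceed[t] counts sample paths with max count >= S at t
    for _ in range(reps):
        counts = np.zeros(N, dtype=int)
        R = None
        for t in range(1, T + 1):
            counts[rng.integers(N)] += 1
            if counts.max() >= S:
                if R is None:
                    R = t             # first stock-out: the cycle ends here
                exceed[t] += 1
        total_R += R
    lhs = total_R / reps
    rhs = N * (S - 1) + 2 - exceed[S:].sum() / reps
    return lhs, rhs

print(check_cycle_identity())         # the two numbers agree, as the identity predicts
```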
Observe that the maximum length of a replenishment cycle is N(S-1)+1, since in one cycle we sell at most S-1 products of each type plus 1 additional product of a certain type. Then, 𝔼[R^π] = ∑_t = 0^N(S-1)+1ℙ(R^π≥ t) = N(S-1)+2-∑_t = S^N(S-1)+1ℙ(R^π < t) ≥ N(S-1)+2 - ∑_t = S^N(S-1)+1ℙ((t) ≥ S- t/N), 𝔼[R^π] = ∑_t = 0^N(S-1)+1ℙ[R^π≥ t] = ∑_t = 0^N(S-1)+1 (1-ℙ[R^π < t]) = N(S-1)+2-∑_t = S^N(S-1)+1ℙ[R^π < t] ≥ N(S-1)+2 - ∑_t = S^N(S-1)+1ℙ[(t) ≥ S- t/N], where the last inequality follows from (<ref>). Notice also that the second equality uses the fact that ℙ(R^π < t) = 0 ∀ t < S, since one cannot run out of inventory of any type of product before S products are sold. We are now ready to prove <ref>. Applying <ref> to the static policy, we obtain 𝔼[] = N(S-1)+2 - ∑_t = S^N(S-1)+1ℙ((t) ≥ S- t/N). Thus, to prove <ref>, it suffices to show that ∑_t = S^N(S-1)+1ℙ((t) ≥ S- t/N) ∈𝒪(1). Recall = 2√(6)N(N-1)(N+1)/q+4N, = 2√(6)N(N-1)/q, and = T - ·√(T log(T)). Moreover, as illustrated in <ref>, we decompose into and , where = -= 2√(6)N(N-1)N/q+4N. Let := T - ·√(T log(T)), so that T - = √(Tlog(T)) and - = √(Tlog(T)). By construction, opaque selling starts at . We leverage the definition of to prove (<ref>) by considering t ≤ and t > separately. For t ≤, ℙ((t) ≥ S-t/N) = ℙ(max_j _j(t) ≥ S) ≤ℙ(max_j _j() ≥ S). We make the following claim and defer its proof to Appendix <ref>. ℙ(max_j _j() ≥ S) ≤a_1/T^2 for some constant a_1 > 0. Then, we have: ∑_t = S^N(S-1)+1ℙ((t) ≥ S- t/N) = ∑_t = S^ℙ((t) ≥ S- t/N) + ∑_t = +1^N(S-1)+1ℙ((t) ≥ S- t/N) ≤ T·a_1/T^2 + ∑_t = +1^N(S-1)+1ℙ((t) ≥ S- t/N) = a_1/T + ∑_t = +1^N(S-1)+1ℙ((t) ≥ S- t/N). Thus, it suffices to show that ∑_t = +1^N(S-1)+1ℙ((t) ≥ S- t/N) ≤ a_2, for some constant a_2 > 0. We now state the following lemma, whose proof is deferred to Appendix <ref>. Consider any policy π that sets (t) = 1 when t > t_1, where t_1 ≤ N(S-1)+1. Define, for period t_2 such that t_1 ≤ t_2 ≤ N(S-1)+1, and for every i and j F_ij^1 := {t_2∈t_2| i ∈max_k∈[N]_k(t_2), j ∈min_k∈[N]_k(t_2), _i(t) ≠_j(t), ∀ t ∈{t_1,...,t_2}}, F_ij^2 := {t_2∈t_2| i ∈max_k∈[N]_k(t_2), j ∈min_k∈[N]_k(t_2), _i(t) = _j(t) for some t ∈{t_1,...,t_2-1},_i(t_2) ≠_j(t_2)}. Then, there exists some constant a_3 > 0 such that ∑_t = t_2+1^N(S-1)+1ℙ((t) ≥ S- t/N) ≤ a_3 + ∑_t = t_2+1^N(S-1)+1∑_i,jℙ(F_ij^1). Now, as in the balls-into-bins model, for i,j ∈ [N], we define E_ij^1 := {∈| i ∈max_k∈[N]_k(), j ∈min_k∈[N]_k(), _i(t) ≠_j(t), ∀ t ∈{,...,}} E_ij^2 := {∈| i ∈max_k∈[N]_k(), j ∈min_k∈[N]_k(), _i(t) = _j(t) for some t ∈{,...,-1},_i() ≠_j()} It follows that E_ij^1 and E_ij^2 satisfy the conditions for F_ij^1 and F_ij^2 in <ref> with t_1 = and t_2 =. Thus, applying <ref>, we obtain ∑_t = +1^N(S-1)+1ℙ((t) ≥ S- t/N) ≤ a_3 + ∑_t = +1^N(S-1)+1∑_i,jℙ(E_ij^1) for some constant a_3 > 0. To bound ℙ(E_ij^1), since - = √(Tlog(T))≥√(log()), we observe that E_ij^1 satisfies the condition for F_ij^1 in <ref> with t_1 = and t_2 = . Thus, applying <ref>, we obtain E_ij^1≤β^-2 + e^-α_1 ( - ). Since = T - √(Tlog(T)), there exists a constant β_1 such that ℙ(E_ij^1) ≤β_1 T^-2 + e^-α_1 ( - ) ∀ i,j, for large enough T. Plugging this result back into (<ref>), we obtain ∑_t = +1^N(S-1)+1ℙ((t) ≥ S- t/N) ≤ a_3 + ∑_t = +1^N(S-1)+1∑_i,j(β_1 T^-2 + e^-α_1 ( - )) ≤ a_3 +T ∑_i,j(β_1 T^-2 + e^-α_1 ( - )) ≤ a_2 for some constant a_2 and large enough T, where the second inequality comes from the fact that N(S-1)+1 - ≤ N(S-1)+1 = T. 
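The conclusion of the theorem can also be checked empirically. The sketch below simulates the static policy in the balls-into-bins analogue of the opaque-selling model: the opaque option is offered only in the last c·sqrt(T log T) periods, a customer is flexible with probability q, and a flexible sale is routed to the less-depleted of two randomly chosen product types. The parameter values and the routing convention are illustrative assumptions, not the exact constants used in the proof.

```python
import numpy as np

def static_cycle_length(N=2, S=200, q=0.3, c=2.0, reps=500, seed=1):
    """Average replenishment-cycle length under the static policy (sketch)."""
    rng = np.random.default_rng(seed)
    T = N * (S - 1) + 1
    t_flex = T - int(c * np.sqrt(T * np.log(T)))   # opaque option offered after this period
    lengths = []
    for _ in range(reps):
        counts = np.zeros(N, dtype=int)
        for t in range(1, T + 1):
            if t > t_flex and rng.random() < q:    # flexible customer in the flexing window
                i, j = rng.choice(N, size=2, replace=False)
                k = i if counts[i] <= counts[j] else j
            else:
                k = rng.integers(N)
            counts[k] += 1
            if counts.max() >= S:                  # some product type is exhausted
                break
        lengths.append(t)
    return np.mean(lengths)

N, S = 2, 200
print(N * S - static_cycle_length(N, S))   # stays bounded as S grows, as the theorem states
```

Flexing only in a final window of length on the order of sqrt(T log T) is what keeps the number of discounted sales small while still absorbing the fluctuations accumulated earlier in the horizon.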
§.§.§ Proof of <ref> From <ref> (iii), <ref> (iii) and <ref> (iii), we have = + +≤K/NS- + h/2(NS+1 + ) + δ q ·, where ∈𝒪(1), ∈Θ(√(log S/S)). §.§.§ Proofs of Auxiliary Results Suppose F_ij^2 occurs, and let τ = max{t ∈{t_1,...,t_2-1}|_i(t) = _j(t)}. With slight abuse of notation, we denote by F_ij^2 ∩τ the event that F_ij^2 occurs and τ is the last time that _i(t) = _j(t). For t > t_2, we have: ℙ((t) ≥ S- t/N) = ℙ((t) ≥ S- t/N|(t_2) = 0) ·ℙ((t_2) = 0) + ℙ((t) ≥ S- t/N|(t_2) > 0) ·ℙ((t_2) > 0) ≤ℙ((t) ≥ S- t/N|(t_2) = 0) + ∑_i,j(ℙ((t) ≥ S- t/N|F_ij^1) ℙ(F_ij^1)+ℙ((t) ≥ S- t/N|F_ij^2) ℙ(F_ij^2) ) ≤ℙ((t) ≥ S- t/N|(t_2) = 0)_(I) + ∑_i,j(ℙ(F_ij^1) + ∑_τ = t_1^t_2ℙ((t) ≥ S- t/N|F_ij^2 ∩τ) ℙ(F_ij^2 ∩τ)_(II)) Consider first (I). Note that, conditional on (t_2) = 0, policy π takes the same actions as the always-flex policy would if it was initialized with all-empty bins at time t_2. (This observation will be re-used in the analyses that follow.) Thus, we have: (I) = ℙ((t) ≥ S- t/N|(t_2) = 0) = ℙ((t - t_2) ≥ S - t/N), Similarly, we can check that F_ij^2, t_1 and t_2 satisfies the conditions in <ref>, which guarantees (II) ≤ℙ(F_ij^2 | t_2-τ = t) ≤ 4 e^-α_2 t + e^-α_3 t. Plugging these results back to (<ref>) and summing over all t > t_2, we obtain ∑_t = t_2+1^N(S-1)+1ℙ((t) ≥ S- t/N) ≤∑_t = t_2+1^N(S-1)+1ℙ((t - t_2) ≥ S - t/N) + ∑_t = t_2+1^N(S-1)+1∑_i,jℙ(F_ij^1) + ∑_t = t_2+1^N(S-1)+1∑_i,j∑_τ = t_1^t_2ℙ((t) ≥ S- t/N|F_ij^2 ∩τ) ×(4 e^-α_2 (t_2-τ) + e^-α_3 (t_2-τ)). We now bound each of these three terms. By <ref>, there exists some constant a_4 > 0 such that the first term is bounded above by ∑_t = t_2+1^N(S-1)+1ℙ((t - t_2) ≥ S - t/N) ≤∑_t = t_2+1^N(S-1)+1β e^-· (S-t/N)≤ a_4. Moreover, for the third term, we can change the order of summation and obtain, for fixed i and j: ∑_t = t_2+1^N(S-1)+1∑_τ = t_1^t_2ℙ((t) ≥ S- t/N|F_ij^2 ∩τ) (4 e^-α_2 (t_2-τ) + e^-α_3 (t_2-τ)) = ∑_τ = t_1^t_2(4 e^-α_2 (t_2-τ) + e^-α_3 (t_2-τ)) ∑_t = t_2+1^N(S-1)+1ℙ((t) ≥ S- t/N|F_ij^2 ∩τ) To bound (<ref>), we formally state Claim <ref> below and prove it in Appendix <ref>. Consider any policy π that sets (t) = 1 when t > t_2. Given a>0, let F' be any subset of the history before t_2 such that F' ⊆ F_a = {t_2∈t_2|(t_2) ≤ a}. Then, ℙ((t) ≥ S- t/N| F') ≤ℙ((t) ≥ S- t/N - a|(t_2) = 0) ∀ t ≥ t_2. By definition of F_ij^2 ∩τ, (t_2) ≤ t_2 - τ, i.e., it satisfies the condition in Claim <ref> with F' = F_ij^2 ∩τ, t_2 and a =t_2-τ. Thus, we have ℙ((t) ≥ S- t/N| F_ij^2 ∩τ) ≤ℙ((t) ≥ S- t/N - (t_2-τ)|(t_2) = 0), ∀ t ≥ t_2. Plugging back (<ref>) and Claim <ref> into (<ref>), we have: (<ref>) =∑_i,j∑_t = t_2+1^N(S-1)+1∑_τ = t_1^t_2ℙ((t) ≥ S- t/N|F_ij^2 ∩τ) (4 e^-α_2 (t_2-τ) + e^-α_3 (t_2-τ)) ≤ N^2 ∑_τ = t_1^t_2(4 e^-α_2 (t_2-τ) + e^-α_3 (t_2-τ)) ∑_t = t_2+1^N(S-1)+1ℙ((t) ≥ S- t/N - (t_2-τ)|(t_2) = 0) ≤ N^2 ∑_τ = t_1^t_2(4 e^-α_2 (t_2-τ) + e^-α_3 (t_2-τ)) ∑_t = t_2+1^N(S-1)+1ℙ((t-t_2) ≥ S- t/N - (t_2-τ)) ≤ N^2 ∑_τ = t_1^t_2(4 e^-α_2 (t_2-τ) + e^-α_3 (t_2-τ)) (∑_t = t_2+1^N[S-(t_2-τ)]β_3 e^-(S- t/N - (t_2-τ))+∑_t = N[S-(t_2-τ)]+1^N(S-1)+1 1) ≤ N^2 ∑_τ = t_1^t_2(4 e^-α_2 (t_2-τ) + e^-α_3 (t_2-τ)) (a_5 + N(t_2-τ-1)+1) ≤ a_6, where a_5 and a_6 are positive constants. (<ref>) follows from the same arguments as those used to establish (<ref>). Then, in (<ref>), since the application of <ref> requires S- t/N - (t_2-τ) > 0, we bound the N(t_2-τ-1)+1 terms that do not satisfy this requirement by 1 and the other terms by <ref>. 
Then, in obtaining (<ref>) we observe that β_3 e^-(S- t/N - (t_2-τ)), when ordered from t = N[S-(t_2-τ)] to t = t_2+1, is a decreasing sequence of exponentially small values whose sum converges to a positive constant a_5. Then, putting (<ref>) and (<ref>) together, we have ∑_t = t_2+1^N(S-1)+1ℙ((t) ≥ S- t/N) ≤ a_3 + ∑_t = t_2+1^N(S-1)+1∑_i,jℙ(F_ij^1), where a_3 := a_4+a_6, which is the bound in the lemma statement. To prove the claim, we first show that ℙ(max_j _j() ≥ S) ≤ℙ(max_j _j() ≥ S - √(T log(T))). For (<ref>), note that max_j _j() ≥ S implies max_j _j() ≥ S - √(T log(T)), since in the worst case all balls from to go into _j _j(). Thus, ℙ(max_j _j() ≥ S) ≤ℙ(max_j _j() ≥ S - √(T log(T))). Since the no-flex policy and the static policy are the same before , we also have: ℙ(max_j _j() ≥ S - √(T log(T))) = ℙ(max_j _j() ≥ S - √(T log(T))) ≤ℙ(max_j _j() ≥ S - √(T log(T))), where the inequality follows from the fact that ≥max_j _j() ≥max_j _j(). We now argue that ℙ(max_j _j() ≥ S - √(T log(T))) ≤a_1/T^2 for some constant a_1 > 0 to conclude the proof of the claim. Recall, = 2√(6)N(N-1)N/q+4N and = 2√(6)N(N-1)/q, so = N(+4). For large enough T, ℙ(max_j _j() ≥ S - √(T log(T))) ≤ N ℙ(B(T - √(T log(T)),1/N) ≥T-N ·√(T log(T))/N) = N ℙ(B(T - √(T log(T)),1/N) ≥ (1+ϵ) T-√(T log(T))/N), where ϵ = (-N) √(T log(T))/T - √(T log(T)) = 4N√(Tlog T)/T-c_static^1√(Tlog T). The first inequality above follows from SN > T = N(S-1)+1, which yields S - √(T log(T)) = NS-N ·√(T log(T))/N≥T-N ·√(T log(T))/N. Then, applying the Chernoff bound to (<ref>) we obtain: ℙ(max_j _j() ≥ S - √(T log(T))) ≤exp(-ϵ^2/2+ϵT-√(T log(T))/N) = exp(-( - N )^2 T log(T)/2N (T - √(T log(T))) + N ( - N )√(T log(T))) ≤exp(-16 N^2 T log(T)/2NT + 4N^2 √(T log(T))) ≤ e^-16 log(T)/6≤a_1/T^2 for some constant a_1 > 0. For the second inequality above we plug in values of , , and upper bound (T - √(T log(T))) by T. We first construct two bin configurations at t_2. Let _k(t_2) = t_2/N + a, ∀ k ∈ [N]. Then, consider any sample path t_2∈ F' ⊆ F_a = {t_2∈t_2|(t_2) ≤ a for some a≥ 0} and denote the loads corresponding to t_2 at t_2 by _k(t_2), ∀ k ∈ [N]. In both of these two configurations balls follow (t) = 1, ∀ t > t_2, so we assume the same realization of randomness, i.e., the same t, t and t ∀ t ≥ t_2. We shall show that, for any t_2∈ F', we have _k(t) ≤_k(t), ∀ t ≥ t_2, ∀ k ∈ [N], as long as both _k(t) and _k(t) develop based on the static policy being applied to the same arrivals t_2,…,t. Assuming this holds, then, for any t_2∈ F' we have max_k _k(t) - t/N ≤max_k _k(t)- t/N ∀ t ≥ t_2 assuming the same realization of randomness. This would then imply: ℙ((t) ≥ S- t/N|F') = ∑_t_2∈ F'ℙ(max_k _k(t) - t/N ≥ S- t/N|t_2)ℙ(t_2) ≤∑_t_2∈ F'ℙ(max_k _k(t)- t/N ≥ S- t/N|t_2)ℙ(t_2) = ℙ(max_k _k(t)- t/N ≥ S- t/N) ≤ℙ((t)+a ≥ S- t/N|(t_2) = 0) = ℙ((t) ≥ S- t/N-a|(t_2) = 0), ∀ t ≥ t_2. What is left to conclude the proof is inequality (<ref>), which we now show by induction. Base case: t = t_2. For this case we get from the definitions of F' and (t_2) that _k(t_2) ≤max_k'_k'(t_2) ≤ t_2/N + a = _k(t_2), ∀ k ∈ [N]. Inductive step. Fix t ∈{t_2+1,…,T}, and suppose _k(t) ≤_k(t), ∀ k ∈ [N]. We show that _k(t+1) ≤_k(t+1), ∀ k ∈ [N], by discussing the following cases. * Suppose t = 0 and t= k_1 ∈ [N]. Under such an arrival we have _k_1(t+1) = _k_1(t)+1 and _k_1(t+1) = _k_1(t)+1, while the loads of other bins do not change. Thus, _k(t+1) ≤_k(t+1), ∀ k ∈ [N]. * Suppose t = 1, with t = {k_1, k_2}, for some k_1, k_2 ∈ [N]. 
* If _k_1(t)<_k_2(t) then a flex ball lands in bin k_1 in the fictional configuration. If the flex ball also goes into bin k_1 in the real configuration, the induction holds. Else, i.e., if the flex ball goes into bin k_2 in the real allocation, then it must be the case that _k_2(t) ≤_k_1(t), which implies that _k_2(t) ≤_k_1(t)≤_k_1(t)<_k_2(t). The induction also holds in this case because _k_2(t+1) = _k_2(t)+1 ≤_k_2(t) = _k_2(t+1), and all other bin loads remain unchanged. * For _k_1(t)>_k_2(t), an argument symmetric to case 2.1 shows that the induction holds. * If _k_1(t)=_k_2(t) and k_1 < k_2, then a flex ball goes into bin k_1 in the fictional configuration. In this case, if _k_1(t) ≤_k_2(t) then the flex ball would go into bin k_1 in the real allocation and the induction still holds. Else, i.e., if _k_1(t) > _k_2(t), then the flex ball goes into bin k_2. In that scenario we must have _k_2(t) < _k_1(t)≤_k_1(t)=_k_2(t). This leads to _k_2(t+1) = _k_2(t)+1 ≤_k_2(t) = _k_2(t+1), and all other bin loads remain unchanged, so the induction still holds in this case. We omit the proof for the case where k_1 > k_2, as it is symmetric. §.§ Analysis of the dynamic opaque selling policy §.§.§ Proof of <ref> Applying <ref> to the dynamic policy, we obtain 𝔼[] = N(S-1)+2 - ∑_t = S^N(S-1)+1ℙ((t) ≥ S- t/N). Thus, to prove <ref>, it suffices to show that ∑_t = S^N(S-1)+1ℙ((t) ≥ S- t/N) ∈𝒪(1). Recall that = 1/10 N2 and define = inf{t : (t) ≥(T-t)q/N}. By definition, the policy always attempts to flex after , but never before. Moreover, as illustrated in <ref>, we define := + T-/2, i.e., is the mid-point of T and . Since T = N(S-1)+1, one can verify that (T-t)q/N < S - t/N for all t ≤ T. Thus, given T^⋆: (t) < (T-t)q/N ∀ t < T^⋆(t) < S - t/N ∀ t < T^⋆, and as a result: ∑_t=S^N(S-1)+1(t) ≥ S-t/N | T^⋆ = ∑_t=T^⋆^N(S-1)+1(t) ≥ S-t/N | T^⋆ = ∑_t=T^⋆^(t) ≥ S-t/N | T^⋆_(I) + ∑_t= + 1^N(S-1)+1(t) ≥ S-t/N | T^⋆_(II). We show that for any value of each of these two terms is upper bounded by a constant, which will complete the proof. Step 1: Bound (I) Consider first (I), and let denote the event that (T^⋆) < (T-)q/N + q/N+1. Putting together the facts that: (i) (T^⋆-1) < (T-(-1))q/N by definition, and (ii) (T^⋆) ≤(T^⋆-1) + 1, we have: () ≤ (T-)q/N + q/N+1. Thus, = 1, and: (I) = ∑_t = ^ℙ((t) ≥ S- t/N | () < (T-)q/N + q/N+1, T^⋆) Further, (I) satisfies the condition in Claim <ref> with F' = E_, t_2 = and a = (T-)q/N + q/N+1. Thus, from Claim <ref> we have ℙ((t) ≥ S- t/N | () < (T-)q/N + q/N+1, T^⋆) ≤ℙ((t) ≥ S- t/N-( (T-)q/N + q/N+1) | () = 0, T^⋆), ∀ t ≥. Then, we have: (I) ≤∑_t = ^ℙ((t) ≥ S- t/N-( (T-)q/N + q/N+1) | () = 0, T^⋆) ≤∑_t = ^ℙ((t-) ≥ S- t/N-( (T-)q/N + q/N+1) | T^⋆) (<ref>) again follows from the observation that, conditional on () = 0, our dynamic policy takes the same action as the always-flex policy beginning at . Let := + T-/2, i.e., is the mid-point of T and . We have: S- t/N-( (T-)q/N + q/N+1) = S - /N - ( (T-)q/N + q/N+1) + -t/N = S - + T-/2/N - ( (T-)q/N + q/N+1) + -t/N ≥T-/N(1/2- q) - q/N + 1 + -t/N. The inequality above follows from: S - + T-/2/N = NS -/N-T-/2N≥T -/N-T-/2N = T-/N1/2. Using this, we upper bound (I) as follows: (I) ≤∑_t = ^ℙ((t-) ≥ S- t/N-( (T-)q/N + q/N+1) | T^⋆) ≤∑_t = ^- qℙ((t-) ≥ S- t/N-( (T-)q/N + q/N+1) | T^⋆) + ∑_t = - q+1^ 1 ≤∑_t = ^- qβ e^-[S- t/N-( (T-)q/N + q/N+1)] + q ≤ a_1 for some constant a_1 (that is independent of ). We may assume without loss of generality that -c_semiq is an integer since this changes our constant bound by at most 1. 
For the third inequality above, we apply <ref> again. Finally, in the last inequality we observe that 1/2- q = 1/2 - q/10 N2 > 0, and thus S- t/N-( (T-)q/N + q/N+1) ≥T-/N(1/2- q) - q/N + 1 + -t/N≥ 0, ∀ t ≤ - q. Thus, β e^-[S- t/N-( (T-)q/N + q/N+1)], when ordered from t = - q to t =, is a decreasing sequence of exponentially small values, and its sum converges to a positive constant. Step 2: Bound (II) Now, to bound (II), we re-define, for i,j ∈ [N], E_ij^1 :={∈| i ∈max_k∈[N]_k(), j ∈min_k∈[N]_k(), _i(t) ≠_j(t), ∀ t ∈{,...,}} E_ij^2 := {∈| i ∈max_k∈[N]_k(), j ∈min_k∈[N]_k(), _i(t) = _j(t) for some t ∈{,...,-1}, _i() ≠_j()} E_ij^1 and E_ij^2 above satisfy the conditions for F_ij^1 and F_ij^2 in <ref> with t_1 = and t_2 =. Thus, applying <ref>, we obtain ∑_t = +1^N(S-1)+1ℙ((t) ≥ S- t/N|)≤ a_3 + ∑_t = +1^N(S-1)+1∑_i,jℙ(E_ij^1|) for some constant a_3. To bound ℙ(E_ij^1|), recall from (<ref>) that max_k_k(T^⋆) - min_k_k(T^⋆) ≤ N (). By (<ref>), we obtain max_k_k(T^⋆) - min_k_k(T^⋆) ≤ N ( (T-)q/N + q/N+1) = (T-T^⋆)q + q+N =2(-T^⋆)q + q+N = 1/5 N2(-T^⋆)q + q+N . Thus, E_ij^1 satisfies the definition of F_ij^1 in <ref>, with t_1 = and t_2 = . Moreover, from (<ref>), with = 1/5 N2 and a = q+N we satisfy all conditions in <ref>, and there exist constants α_0, α_1 and t_0 such that ℙ(E_ij^1 | T^⋆) ≤ 4 e^-α_0(-T^⋆)+e^-α_1(-T^⋆) ∀-T^⋆≥ t_0. Since -T^⋆ = 1/2(T-T^⋆), we then have ℙ(E_ij^1 | T^⋆) ≤ 4 e^-α_0/2(T-T^⋆) + e^-α_1/2(T-T^⋆) ∀ T-T^⋆≥ 2 t_0. Plugging this result back to (<ref>), we obtain (II) = ∑_t = +1^N(S-1)+1ℙ((t) ≥ S- t/N|) ≤ a_3 + ∑_t = +1^N(S-1)+1∑_i,j(4 e^-α_0/2(T-T^⋆) + e^-α_1/2(T-T^⋆)) ≤ a_3 + ∑_i,j(T-)(4 e^-α_0/2(T-T^⋆) + e^-α_1/2(T-T^⋆)) ≤ a_2 ∀ T-T^⋆≥ 2 t_0 for some constant a_2, where the second inequality comes from the fact that N(S-1)+1 - = T-≤ T-. Then, ∑_t=S^N(S-1)+1(t) ≥ S-t/N | T^⋆ = (I) + (II) ≤ a_1 + a_2 ∀ T-T^⋆≥ 2 t_0. Recall from (<ref>) that ∑_t=S^N(S-1)+1(t) ≥ S-t/N | T^⋆ = ∑_t=T^⋆^N(S-1)+1(t) ≥ S-t/N. Thus, for T-T^⋆ < 2 t_0 we have ∑_t=S^N(S-1)+1(t) ≥ S-t/N | T^⋆ = ∑_t=T^⋆^N(S-1)+1 1 ≤ 2 t_0, which completes the proof. §.§.§ Proof of <ref> From <ref> (iv), <ref> (iv) and <ref> (iv), we have = + +≤K/NS- + h/2(NS+1 + ) + δ q ·, where ∈𝒪(1), ∈𝒪(1/√(S)). §.§ Benchmark comparisons §.§.§ Proof of <ref> Recall, from (<ref>), that 𝒞^π= K/^π + h/2(2NS+1 - (^π)^2/^π) + ^π/^π·δ ≥K/^π + h/2(2NS+1 - ^π) ≥K/N(S-1)+1 + h/2(NS+N) where the first inequality follows from Jensen's inequality for the second term, and non-negative discount costs for the third. Noting that ^π≤ N(S-1)+1 for all π, we obtain the last inequality. §.§.§ Proof of <ref> From <ref> (i), <ref> (i) <ref> (i) we have 𝒞^nf≥K/NS- + h(NS)^2-NS/2/2(NS - ), where ∈Ω(√(S)). Since 𝒞^s ≤K/NS- + h/2(NS+1 + ) + δ q ·, where ∈𝒪(1), ∈Θ(√(log(S)/S)), by <ref>, we have 𝒞^nf - 𝒞^s ≥ K ·-/(NS-)(NS-) + h/2NS(/2--1)++/(NS-) -δ q · ≥ K ·-/(NS)^2 + h/2NS(/2--1)/NS -δ q · Similarly, from <ref> (ii), <ref> (ii) <ref> (ii): 𝒞^a ≥K/NS + h/2 N(S+1) + δ· q. Thus, 𝒞^a - 𝒞^s ≥ -K /NS(NS - ) - h/2( - N + 1) + δ q (1 - ) ≥ -K /(NS)^2 - h/2 + δ q (1 - ). Finally, plugging in the definition of 𝒞^⋆, 𝒞^⋆ - 𝒞^s ≥ -K /(NS)^2 - h/2 - δ q ·. §.§.§ Proof of <ref> By <ref>: 𝒞^d ≤K/NS- + h/2(NS+1+) + δ q , where ∈𝒪(1), ∈𝒪(1/√(S)) 𝒞^nf - 𝒞^d ≥ K ·-/(NS-)(NS-) + h/2NS(/2--1)++/(NS-) -δ q · ≥ K ·-/(NS)^2 + h/2NS(/2--1)/NS -δ q ·. Similarly, 𝒞^a - 𝒞^d ≥ -K /NS(NS - ) - h/2( - N + 1) + δ q (1 - ) ≥ -K /(NS)^2 - h/2 + δ q (1 - ), and 𝒞^⋆ - 𝒞^d ≥ -K /(NS)^2 - h/2 - δ q ·. 
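The comparative statics behind these bounds can be illustrated numerically. The sketch below simulates one replenishment cycle under the static and dynamic flexing rules and reports the average cycle length together with the average number of discounted (opaque) sales, which is the quantity multiplying δ in the comparisons above. The threshold constants, the routing rule, and all parameter values are illustrative assumptions rather than the exact constants used in the proofs.

```python
import numpy as np

def simulate_policy(kind, N=2, S=200, q=0.3, a=0.7, c=2.0, reps=300, seed=2):
    """Average (cycle length, number of opaque sales) over one replenishment cycle.

    kind = 'static' : offer the opaque option only in the last c*sqrt(T log T) periods.
    kind = 'dynamic': offer it once Gap(t) >= a*(T - t)*q/N.
    A flexible customer buys the less-depleted of two randomly chosen product types.
    """
    rng = np.random.default_rng(seed)
    T = N * (S - 1) + 1
    t_flex_static = T - int(c * np.sqrt(T * np.log(T)))
    lengths, opaque_sales = [], []
    for _ in range(reps):
        counts = np.zeros(N, dtype=int)
        n_opaque = 0
        for t in range(1, T + 1):
            gap = counts.max() - (t - 1) / N
            offer = (t > t_flex_static) if kind == 'static' else (gap >= a * (T - t) * q / N)
            if offer and rng.random() < q:
                i, j = rng.choice(N, size=2, replace=False)
                k = i if counts[i] <= counts[j] else j
                n_opaque += 1
            else:
                k = rng.integers(N)
            counts[k] += 1
            if counts.max() >= S:
                break
        lengths.append(t)
        opaque_sales.append(n_opaque)
    return np.mean(lengths), np.mean(opaque_sales)

for kind in ('static', 'dynamic'):
    print(kind, simulate_policy(kind))
```

Both rules reach cycle lengths close to NS, but the dynamic rule typically issues markedly fewer discounts, which is the source of its advantage when the discount cost δ is large.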
§.§.§ Proof of <ref> By <ref> (iii), <ref> (iii) <ref> (iii): 𝒞^s ≥K/NS + h/2 N(S+1) + δ q ·𝒞^s - 𝒞^d ≥ -K /(NS)^2 - h/2 + δ q ( - ). §.§.§ Proof of <ref> For the dynamic policy, based on <ref> we can bound - = + + - K/N(S-1)+1 - h/2(NS+N) ≤ K(1/NS-- 1/N(S-1)+1) + h/2(NS + 1 + - NS-N) + δ q where ∈𝒪(1) and ∈𝒪(1/√(S)). This expression simplifies to - ≤Θ(1/S) + δ q 𝒪(1/√(S)). Thus, when δ∈𝒪(1/√(S)), we have - ≤𝒪(1/S), which completes the proof of part (i). For part (ii), with δ∈𝒪(1), the expression simplifies to - ≤𝒪(1/√(S)). Thus, we first show that the no-flex policy incurs Ω(1/√(S)) loss relative to OPT, and then compare the dynamic policy to the static and the always-flex policy. For the no-flex policy, we have from <ref> (i), <ref> (i) and <ref> (i): - = + + - K/N(S-1)+1 - h/2(NS+N) ≥ K(1/NS-- 1/N(S-1)+1) + h/2((NS)^2-NS/2 + 𝒪(N^2S)/NS - - NS- N) = K -N+1/(NS - )(N(S-1)+1) + h/2NS/2-N^2S +N +𝒪(N^2S)/NS - ≥ K /(NS)^2 + h/2NS/2/NS - h/2N^2S -N -𝒪(N^2S)/NS, where ∈Ω(√(S)). Substituting for K and h, the above simplifies to Ω(1/√(S))+Ω(1/√(S))-𝒪(1/S). Thus, - ≥Ω(1/√(S)). Then, for the flexing policies, based on <ref> we can lower bound - = + + - K/N(S-1)+1 - h/2(NS+N) ≥ = δ q, and similarly - ≥ = δ q Θ(√(log(S)/S)). Since Θ(1/S) + δ q 𝒪(1/√(S)) ≤δ q Θ(√(log(S)/S)) ≤δ q for any δ∈Ω(1/√(S)), we conclude that the dynamic policy has the best performance relative to OPT out of the four flexing policies. This completes the proof of part (ii). §.§ Cost bound summary We summarize the other cost bounds in the propositions below, and their proofs can be found in Appendices <ref>, <ref> and <ref>, respectively. For the ordering cost, we have * = K/NS-, where ∈Ω(√(S)) * = K/NS-, where ∈𝒪(1) * = K/NS-, where ∈𝒪(1) * = K/NS-, where ∈𝒪(1) For the holding cost, we have * ≥ h(NS)^2-NS/2/2(NS - ), where ∈Ω(√(S)) * h/2 N(S+1) ≤≤h/2 (NS+1+), where ∈𝒪(1) * h/2 N(S+1) ≤≤h/2 (NS+1+), where ∈𝒪(1) * h/2 N(S+1) ≤≤h/2 (NS+1+), where ∈𝒪(1) For the holding cost, we have * = 0 * = δ q * = δ q ·, where ∈Θ(√(log(S)/S)) * = δ q ·, where ∈𝒪(1/√(S)) §.§.§ Proof of <ref> Similar to Proposition 2 (a) of <cit.>, we derive the long-run ordering cost per time unit for an algorithm with replenishment cycle length R^π. Specifically, lim_M →∞∑_i = 1^M K/∑_i = 1^M R^π_i = K/lim_M →∞∑_i = 1^M R^π_i/M = K/𝔼[R^π], where the last equality comes from the strong law of large numbers. Thus, by <ref> and <ref>: = K/𝔼[] = K/NS - and = K/𝔼[] = K/NS - , where , ∈𝒪(1). By Lemma 2 in <cit.>, = K/𝔼[] = K/NS - θ_n, where θ_n ∈Ω(√(S)). Moreover, by Lemma 3 in <cit.>, = K/𝔼[] = K/NS - , where ∈𝒪(1). §.§.§ Proof of <ref> The bound for the no-flex policy directly comes from (24) in <cit.>. For the other policies, similar to Proposition 2 (b) of <cit.>, we derive the long run holding cost per time unit for an algorithm with replenishment cycle length R^π. Specifically, lim_M →∞∑_i = 1^M ∑_t = 1^R^π_i (NS - t+1)h/∑_i = 1^M R^π_i = lim_M →∞∑_i = 1^M ∑_t = 1^R^π_i (NS - t+1)h/M/lim_M →∞∑_i = 1^M R^π_i/M = h𝔼[∑_t = 1^R^π_i (NS - t+1)]/𝔼[R^π] = h𝔼[(2NS + 1) 𝔼[R^π] - 𝔼[(R^π)^2]]/𝔼[R^π], where the second equality comes from the strong law of large numbers. Moreover, ℋ^π = (2NS + 1)𝔼[R^π] - 𝔼[(R^π)^2]/2 𝔼[R^π]h ≤𝔼[R^π] - 𝔼[R^π]^2/2 𝔼[R^π]h = 2NS +1 - 𝔼[R^π]/2h, where the second inequality follows from Jensen's inequality. Thus, for the always-flex policy, by Lemma 3 in <cit.>, ≤2NS +1 - 𝔼[]/2h = h/2(NS+), where ∈𝒪(1). 
Similarly, by <ref>: ≤2NS +1 - 𝔼[]/2h = h/2(NS+), where ∈𝒪(1), and by <ref>: ≤2NS +1 - 𝔼[]/2h = h/2(NS+), where ∈𝒪(1) For the lower bounds of holding costs, observe that the OPT benchmark has a per-period holding cost of h/2N(S+1), as derived in <ref>. Since the OPT benchmark is a trivial lower bound for all policies that we construct, the bounds above are tight up to a constant. §.§.§ Proof of <ref> Let D^π_i be the cost of discounts in the i^th replenishment cycle and R^π_i the length of the i^th replenishment cycle for a given policy π. We first compute that the long-run discount cost per time unit is lim_M →∞∑_i = 1^M D^π_i/∑_i = 1^M R^π_i = lim_M →∞∑_i = 1^M D^π_i/M/lim_M →∞∑_i = 1^M R^π_i/M = 𝔼[D^π]/𝔼[R^π]. For the no-flex policy, we trivially have = 0. For the always-flex policy, we denote by X_j ∼ B(1,q) the Bernoulli random variable that takes value 1 with probability q. Then, we have = 𝔼[]/𝔼[] = 𝔼[δ∑_j = 1^_i X_j]/𝔼[]. Since (i) X_j's are i.i.d. Bernoulli random variables, (ii) _i has finite expectation, (iii) _i is a stopping time, and (iv) ∑_j = 1^∞𝔼[|X_j| ·1_{_i ≥ j}] ≤ N(S-1)+1 < ∞, we apply Wald's identity to the above and obtain: = δ q 𝔼[]/𝔼[] = δ q. For the static policy, we can lower bound = 𝔼[]/𝔼[] = 𝔼[δ∑_j = 1^(_i-(T-√(Tlog(T))))^+ X_j]/𝔼[] ≥𝔼[δ∑_j = 1^_i-(T-√(Tlog(T))) X_j]/𝔼[] = δ q (𝔼[]-(T-√(Tlog(T))))/𝔼[], where the last step comes from the Wald's identity, since _i is a stopping time. We again assume without loss of generality that _i-(T-√(Tlog(T))) is an integer since it changes our objective by at most 1. Since 𝔼[] = NS-, where ∈𝒪(1) and T = N(S-1)+1, 𝔼[] - T is lower bounded by 𝒪(1). Thus, we have ≥δ q (√(Tlog(T)) +N-1-)/T. Moreover, for the upper bound we have: = 𝔼[δ∑_j = 1^(_i-(T-√(Tlog(T))))^+ X_j]/𝔼[]≤𝔼[δ∑_j = 1^√(Tlog(T)) X_j]/𝔼[] = δ q √(Tlog(T))/NS - , where the last step is an application of the Wald's identity. Thus, combining (<ref>) with (<ref>), we obtain = δ q, where ∈Θ(√(log(S)/S)). Finally, for the dynamic policy, we recall that = inf{t: (t) ≥(T-t)q/N} and let _i denote the value of in the i^th replenishment cycle. Moreover, because _i = inf{t: (t) ≥ S - t/N}, where S - t/N≥(T-t)q/N, we always have ≤_i. Then, = 𝔼[]/𝔼[] = 𝔼[δ∑_j = 1^_i-_i X_j]/𝔼[]≤𝔼[δ∑_j = 1^T-_i X_j]/𝔼[] = δ q 𝔼[T - ]/𝔼[], where the last equality is again an application of the Wald's identity. Noting that the dynamic policy mimics the no-flex policy until = T_ = inf{t:(t) ≥(T-t)q/N}, we apply <ref> with a = to obtain 𝔼[T-T^⋆] ∈𝒪(√(T)). Moreover, by <ref>, = NS - , where ∈𝒪(1). Thus, = δ q, where ∈𝒪(1/√(S)). §.§ Additional Experiments In this section we investigate the robustness of our results to the choice of constants parameterizing the policy.[We omit results for the static policy, as they are entirely analogous.] Since our theoretical analysis does not optimize for constants, it is likely that the chosen constants are too pessimistic, and exerting flexibility a constant factor fewer times may suffice to achieve the benefits of full flexibility (up to a constant gap). We focus on the opaque selling model, though identical results hold for the vanilla balls-into-bins model; we moreover use the terminology “system balancedness” to refer to the gap of the system. We simulate each instance 100 times. In <ref>, we compare (i) system balancedness, and (ii) the number of flexes across different values of . In particular, we find in <ref> that =0.7 suffices for a gap of 𝒪(1) at time T, and <ref> shows that the number of flexes remains small for these smaller choices of . 
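A minimal version of this sensitivity experiment, written for the vanilla balls-into-bins model for simplicity, is sketched below: it sweeps the constant multiplying the flexing threshold and records the terminal gap and the number of flexes used. The parameter values are illustrative, and the threshold form a·(T − t)·q/N is a simplification of the constants appearing in the formal policy definition.

```python
import numpy as np

def sweep_threshold(constants=(0.1, 0.4, 0.7, 1.0), N=4, T=5000, q=0.3,
                    reps=100, seed=3):
    """For each constant a, flex whenever Gap(t) >= a*(T-t)*q/N and report the
    average terminal gap and the average number of flexes used."""
    rng = np.random.default_rng(seed)
    results = {}
    for a in constants:
        gaps, flexes = [], []
        for _ in range(reps):
            counts = np.zeros(N)
            n_flex = 0
            for t in range(1, T + 1):
                gap = counts.max() - (t - 1) / N
                if gap >= a * (T - t) * q / N and rng.random() < q:
                    i, j = rng.choice(N, size=2, replace=False)
                    k = i if counts[i] <= counts[j] else j   # place flex ball in emptier bin
                    n_flex += 1
                else:
                    k = rng.integers(N)
                counts[k] += 1
            gaps.append(counts.max() - T / N)
            flexes.append(n_flex)
        results[a] = (round(np.mean(gaps), 2), round(np.mean(flexes), 1))
    return results

print(sweep_threshold())
```

Larger constants delay flexing and use fewer flexes at the price of a slightly larger terminal gap, consistent with the finding above that a constant of 0.7 already suffices for an O(1) gap.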
Meanwhile, for the time horizons studied, the constants defined for our theoretical results are so small that the policy immediately start exerting flexibility, i.e., it is exactly the always-flex policy . § APPLICATION TO PARCEL DELIVERY: A CASE STUDY §.§ Additional Details on Inputs Derivation of transportation cost . This is based on an average truck speed of 15.75 km per hour in the dataset, a cost of gas of $0.956 per liter <cit.>, and an average gas consumption of 0.35-0.5 liters per km <cit.>. Derivation of overtime cost . This is based on an average hourly wage of $22-$28 per hour for California truck drivers <cit.>, with an overtime payment multiplier of 1.5 <cit.>. §.§ Creating a Default Truck Assignment via Clustering To infer packages' default truck assignments, we propose a heuristic for the capacitated K-means problem. Namely, we first apply a standard K-means clustering technique, letting K = N, to partition the packages into N different zones. Given the geographic centers of each of these N zones, we then re-assign packages to the zones, minimizing the total distance between packages and the center of their assigned zones, subject to a capacity constraint. To formalize this latter step, we introduce some notation. Let L denote the total number of packages that need to be clustered, and z_ij be the binary variable that captures whether package j is assigned to zone i, ∀ i = 1,...,N, j = 1,...,L. We impose that each zone should be roughly balanced, i.e., the number of packages assigned to each zone should be within ϵ of the average number of packages per truck L/N, taking ϵ = 200. Finally, let c_ij be the distance between package j and the center of zone i. Then, the re-assignment problem is given in (<ref>). min ∑_i = 1^N∑_j = 1^L c_ij z_ij s.t. L/N - ϵ≤∑_j = 1^L z_ij≤ L/N + ϵ, ∀ i ∑_i = 1^N z_ij = 1, ∀ j, z_ij∈{0,1}, ∀ i, j. §.§ Comparison of Historical and Synthetic Datasets Throughout the period of interest, 448 historical routes originated from DLA8, serving a total of L = 57,359 packages. Though routes took less than 7.50 hours on average to complete, approximately 30% of them exceeded 8 hours. Assuming inputs of =$38 per hour and = $6.3 per hour, this resulted in average overtime and transportation costs of $4.79 and $14.24, respectively, per route. Our re-sampled distribution of route completion times is roughly consistent with that of the historical data: the average completion time per route for our synthetic dataset is 7.42 hours, with 28.5% of routes exceeding 8 hours. Since the re-sampled data typically spans a larger geographic area per day, a larger fraction of the time is spent on travel, with average overtime and transportation costs of $6.14 and $25.12, respectively, per route. Despite the re-sampled data leading to both higher overtime and transportation costs, it preserves the feature that both overtime and transportation costs are important components of the cost objective, and the split of the costs among overtime and transportation is comparable with that of the historical dataset. <ref> includes additional summary statistics on historical and synthetic routes. §.§ <ref> guarantees in the balls-into-bins setting Depending on the modified dynamic policy that we end up with, this result may no longer be relevant Consider <ref> applied to the balls-into-bins setting (i.e., ignoring travel time considerations). Then, <ref> achieves 𝔼[(T)| E] ∈𝒪(1), and 𝔼[M^f] ∈𝒪(√(T)). Proof. We start by showing that 𝔼[(T)] ∈𝒪(1). As before, let E = {T∈T|(T) ≠ 0}. 
It suffices to show 𝔼[(T)| E] ℙ(E) ∈𝒪(1). To do so, for any two bins i,j ∈ [N], we let T_ij^⋆ be the first time at which the difference in bin loads exceeds a (properly tuned) constant fraction of the remaining periods, and remains above this threshold for the rest of the horizon. Formally: T_ij^⋆:= inf{t':_i(t) - _j(t) ≥(T-t)q ∀ t ∈{t', … , T}} if _i(T) ≥_j(T), +∞ otherwise. and decompose event E into ∪_i ≠ j E_ij, where E_ij := { T∈T| i = max_k∈[N]_k(T), j = min_k∈[N]_k(T), _i(t) ≠_j(t), ∀ t ∈{_ij, … , T}}. Notice that when _i(T) < _j(T) we trivially have E_ij = 0. Recall, as in the proof of <ref> 𝔼[(T)| E] ℙ(E) ≤ ∑_i ≠ j𝔼[(T)| E_ij] ℙ(E_ij) ≤ ∑_i ≠ j𝔼[_i(T) - _j(T)| E_ij] ℙ(E_ij). Hence it remains to show that, for all i ≠ j, 𝔼[_i(T) - _j(T)| E_ij] ℙ(E_ij) ≤ b for some constant b > 0. Since this is trivially true when E_ij = 0, in the analyses that follow we assume without loss of generality that _i(T) ≥_j(T). We have: 𝔼[_i(T) - _j(T) | E_ij] ℙ(E_ij) = ∑_t = 1^T𝔼[_i(T) - _j(T) | E_ij,T-T_ij^⋆ = t] ℙ(E_ij,T-T_ij^⋆ = t) ≤∑_t = 1^T𝔼[_i(T_ij^⋆) - _j(T_ij^⋆)+(T-T_ij^⋆) | E_ij,T-T_ij^⋆ = t]ℙ(E_ij,T-T_ij^⋆ = t), where (<ref>) follows from the fact that, in the worst case the load of i increased by one in every period between T_ij^⋆ and T. For any sample path where _i(T) ≥_j(T), by definition of T_ij^⋆ we have _i(T_ij^⋆-1) - _j(T_ij^⋆-1) < (T-T_ij^⋆+1)q. Using the fact that, for all k ∈ [N], t ∈ [T], _k(t-1) ≤_k(t) ≤_k(t-1) + 1, this leads to: _i(T_ij^⋆) - _j(T_ij^⋆) < 1+(T-T_ij^⋆+1)q = (T-T_ij^⋆)q + q +1. Moreover, under the fully-dynamic policy (t) = 1 and (t) = (t) for all t ∈{_ij, … , T}, where recall (t) denoted the bin chosen by the semi-dynamic policy in period t. In words, the fully-dynamic and semi-dynamic policies are coupled after T_ij^⋆. Applying <ref> to E_ij, with t_1 = _ij, t_2 = T, = = 1/5 N2 and a = q+1, there exist constants α_0, α_1, t_0 such that: E_ij|_ij≤ 4 e^-α_0(T-T_ij^⋆)+e^-α_1(T-T_ij^⋆) ∀ T-T_ij^⋆≥ t_0. Plugging (<ref>) and (<ref>) back into (<ref>), we obtain: 𝔼[_i(T) - _j(T) | E_ij] ℙ(E_ij) ≤∑_t = 1^T[ q(1+t)+1+t ]ℙ(E_ij| T-T_ij^⋆ = t) ≤∑_t = 1^t_0[( q+1)(1+t) ] + ∑_t = t_0+1^T[( q+1)(1+t) ](4 e^-α_0t+e^-α_1t) ≤ b, for some constant b > 0. This completes the proof that 𝔼[(T)] ∈𝒪(1). Next, for the bound on M^f, we observe that for all t∈[T], i,j∈[N]: _i(t) - _j(t) ≤max_i'_i'(t) - min_i'_i'(t) ≤ N (t). Thus, whenever _t(t) - min_k∈t_k(t) ≥(T-t)q we must also have (t) ≥_i(t) - _j(t)/N≥(T-t)q/N. That is, _ij≥, ∀ i ≠ j. Since <ref> guarantees that M^d∈𝒪(√(T)), we know that we also have M^f∈𝒪(√(T)). §.§ Cost Minimization Heuristic In this section, we outline a cost minimization heuristic as a point of comparison to the balancing policies examined in <ref>. Recall that in <ref> a flex is exerted if the difference in route completion times between two trucks exceeds a dynamic threshold that scales with T-t. Here we follow a similar idea but pivot our approach slightly by opting to flex when the projected cost difference between flexing and not flexing surpasses the dynamic threshold. Specifically, when determining whether to flex from t to truck i, we use y_j^r(t) and y_j(t) to denote the predicted travel and route completion times of truck j without flexing, and ŷ_j^r(t) and ŷ_j(t) to denote those times if the flex is implemented. Then, the expected cost difference between not flexing and flexing can be approximated by: ((y_t(T+1)-8)^++(y_i(T+1)-8)^+-(ŷ_t(T+1)-8)^+-(ŷ_i(T+1)-8)^+) + (y_t^r(T+1)+y_i^r(T+1)-ŷ_t^r(T+1)-ŷ_i^r(T+1)). 
We trigger a flex into a truck that maximizes (<ref>) if and only if this difference is at least (T-t)/M_2. To find the distribution of y_j(T+1) for a truck j considered in (<ref>), we assume future packages arriving in zone j are drawn from a binomial distribution with T-t trials and success probability p_j, with p_j being bootstrapped from historical data. Let N_j(T-t) denote the number of such packages. Then, we approximate y_j(T+1) as: y_j(T+1) = y_j(t) +(_j + _j) · N_j(T-t) + 1_j = t·((S_t,t)+(t)), where _j and _j are bootstrapped estimates of the incremental travel and unloading times for the no-flex policy. Distributions of y_j^r(T+1),ŷ_j(T+1) and ŷ_j^r(T+1) can be derived in a similar manner. The performance of this heuristic is presented in <ref> and <ref>. While this cost minimization heuristic effectively reduces average overtime, as shown by area II in <ref>, it does not provide additional cost savings relative to the Patient-Dynamic policy, which, recall, achieves cost savings of up to 7% relative to the no-flex policy. A closer comparison of the cost minimization heuristic and the Patient-Dynamic policy in <ref> reveals that the former leads to a reduction in travel time but an increase in overtime. This is likely because the heuristic strongly depends on relatively accurate predictions of future travel and route completion times, which is challenging, despite access to bootstrapped data with i.i.d. package arrivals. The policy, on the other hand, achieves strong performance while requiring significantly less information (namely, only approximations to the incremental travel time a package creates for a route). §.§ Additional Experiments We further demonstrate the effectiveness of the policy by applying it to historical data at two additional stations, DLA9 and DBO3. We observe a comparable degree of success in cost reduction at these stations, relative to the no-flex policy. As illustrated in <ref>, the application of the Patient-Dynamic policy effectively curtails overtime. Additionally, <ref> reveals an average cost reduction of up to 5% for DLA9 and 6% for DBO3, further affirming the efficacy of the policy. §.§ On the uncapacitated assumption We conclude this section by examining the limitations of the uncapacitated assumption. In our experiments, a truck under the no-flex policy visits at most 117 stops per day, with a 90th percentile of 98 stops; under <ref>, a truck visits at most 117 stops per day and has a 90th percentile of 99 stops per day. Given the similarities in these values, we can reasonably deduce that if a truck does not encounter capacity constraints in the absence of balancing measures, it should also not face such constraints when implementing balancing policies.
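As a complement to the cost minimization heuristic described earlier in this section, the sketch below illustrates the flex-decision rule in Python. It is only a schematic reading of the heuristic, not our implementation: the function and parameter names (overtime, expected_flex_gain, completion_samples, flex_decision, the cost rates c_over and c_travel, and the threshold constant M2) are hypothetical, and the Monte Carlo projection of completion times simply mirrors the binomial arrival model stated above.

```python
import numpy as np

def overtime(y):
    """Overtime hours beyond an 8-hour shift."""
    return np.maximum(y - 8.0, 0.0)

def completion_samples(y_now, inc_travel, inc_unload, T, t, p, rng, n=1000):
    """Project y_j(T+1): current completion time plus Binomial(T - t, p) future arrivals,
    each adding bootstrapped travel and unloading increments."""
    return y_now + (inc_travel + inc_unload) * rng.binomial(T - t, p, size=n)

def expected_flex_gain(y, y_hat, y_r, y_r_hat, c_over, c_travel):
    """Expected cost saved by flexing: 'y' are sampled completion times of the two affected
    trucks without the flex, 'y_hat' with the flex; 'y_r' / 'y_r_hat' are the corresponding
    travel times. Mirrors the overtime + travel decomposition of the cost objective."""
    gain = c_over * sum(overtime(a).mean() - overtime(b).mean() for a, b in zip(y, y_hat))
    gain += c_travel * sum(a.mean() - b.mean() for a, b in zip(y_r, y_r_hat))
    return gain

def flex_decision(gains_by_truck, T, t, M2):
    """Flex into the truck maximizing the gain iff the gain is at least (T - t) / M2."""
    i, g = max(gains_by_truck.items(), key=lambda kv: kv[1])
    return i if g >= (T - t) / M2 else None
```

Sampling the binomial arrivals rather than evaluating the expectation in closed form keeps the sketch agnostic to how the overtime nonlinearity interacts with the arrival distribution.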
http://arxiv.org/abs/2306.03923v1
20230606180003
Glitch systematics on the observation of massive black-hole binaries with LISA
[ "Alice Spadaro", "Riccardo Buscicchio", "Daniele Vetrugno", "Antoine Klein", "Davide Gerosa", "Stefano Vitale", "Rita Dolesi", "William Joseph Weber", "Monica Colpi" ]
gr-qc
[ "gr-qc", "astro-ph.HE", "astro-ph.IM" ]
Detecting and coherently characterizing thousands of gravitational-wave signals is a core data-analysis challenge for the Laser Interferometer Space Antenna (LISA). Transient artifacts, or “glitches”, with disparate morphologies are expected to be present in the data, potentially affecting the scientific return of the mission. We present the first joint reconstruction of short-lived astrophysical signals and noise artifacts. Our analysis is inspired by glitches observed by the LISA Pathfinder mission, including both acceleration and fast displacement transients. We perform full Bayesian inference using LISA time-delay interferometric data and gravitational waveforms describing mergers of massive black holes. We focus on a representative binary with a detector-frame total mass of 6 × 10^7 M_⊙ at redshift 7, yielding a signal lasting ∼ 30 h in the LISA sensitivity band. We explore two glitch models of different flexibility, namely a fixed parametric family and a shapelet decomposition. In the most challenging scenario, we report a complete loss of the gravitational-wave signal if the glitch is ignored; more modest glitches induce biases on the black-hole parameters. On the other hand, a joint inference approach fully sanitizes the reconstruction of both the astrophysical and the glitch signal. We also inject a variety of glitch morphologies in isolation, without a superimposed gravitational signal, and show we can identify the correct transient model. Our analysis is an important stepping stone toward a realistic treatment of LISA data in the context of the highly sought-after “global fit”. § INTRODUCTION The Laser Interferometer Space Antenna (LISA) <cit.>, currently planned to be launched in the early 2030s, will detect gravitational waves (GWs) from space. LISA will extend the exploration of the GW spectrum in the milliHertz band – from about 10^-4 to 1 Hz – providing observations of astrophysical sources ranging from Galactic white-dwarf binaries to mergers of massive black-holes at high redshift <cit.>. The detection and characterization of different astrophysical sources is an extremely challenging data-analysis problem. This is due to the combined effect of the all-sky detector sensitivity and the large number, 𝒪(10^4), of long-lived GW signals overlapping both in time and frequency. Maximizing the payoff of the LISA mission requires an accurate, efficient, and global analysis <cit.>, simultaneously fitting data models for an unknown number of detectable GW sources and uncertain detector noise. In addition to the abundance of astrophysical sources, the LISA data stream will be polluted by noise transients. These artifacts, also called “glitches”, a term borrowed from ground-based detectors, have been observed at a rate of about one per day and extensively characterized by the LISA Pathfinder (LPF) mission <cit.>. Efforts are ongoing to understand the origin of the LPF glitches by capitalizing on the collected data and eliminating them by design in the LISA hardware. Previous studies stressed the need to assess their impact on the scientific return of the LISA mission <cit.>. 
The physical nature of glitches in LPF still needs to be fully understood, with possible interpretations including outgassing phenomena, electronics events, and eddy current transients <cit.>. Moreover, new types of unexpected noise artifacts can appear in LISA because of the increased complexity of both spacecraft and payload design compared to LPF. Because the occurrence and morphology of glitches in the full LISA setup are uncertain, a conservative approach is to prepare a robust data analysis strategy to mitigate their impact downstream. Tackling the fundamental challenge of including glitches in parameter-estimation pipelines is well recognized by the LISA Consortium as part of the core preparation activities for the imminent mission adoption. To this end, a set of LISA Data Challenges (LDCs) <cit.> are in progress to develop and demonstrate data-analysis readiness. Among others, the LDC nicknamed Spritz is devoted to investigating glitches and gaps in the reconstruction of signals from massive black-hole binaries (MBHBs). A recent analysis suggests the adoption of heavy-tailed likelihoods to mitigate the effect of noise transients upon the inference of GW sources <cit.>. In this work, we instead assess for the first time the impact of glitches on short-lived MBHB signals performing direct, joint parameter estimation. We present a complete analytical derivation of the LISA response to two types of instrumental artifacts as detected by LPF, namely force and displacement transients of the test masses. We then report results by including both models in a large, multi-source parameter estimation framework for LISA data analysis. This infrastructure, called Balrog, is currently under active development and has already been tested against different astrophysical sources (see e.g. Refs. <cit.>). The paper is organized as follows. In Sec. <ref>, we introduce the phenomenology of the expected instrumental artifacts. In Sec. <ref>, we present our glitch models and provide a brief summary of the fiducial GW-source and glitch parameters. In Sec. <ref>, we derive an alternative set of time-delay interferometric (TDI) variables suitable for the simultaneous treatment of glitches and GW signals. In Sec. <ref>, we provide definitions of relevant statistical quantities and details on our parameter-estimation runs. In Sec. <ref>, we present our inference results. Finally, in Sec. <ref>, we summarize our findings and describe future developments. Throughout this paper, we use units where c=1. § LPF GLITCHES IN LISA DATA §.§ Phenomenology of LPF glitches Glitches are observed as additional signals in the data stream. They can be thus modeled and subtracted from the data as such. The strategy here is to (i) get a consistent estimate of the power spectral density (PSD) of the underlying quasi-stationary noise over the entire data stream and thus (ii) improve the astrophysical signal inference by making it robust against glitch-induced biases. The latter constitutes a key element of the LISA data processing pipeline in view of the targeted “global fit” <cit.>. The properties of glitches, namely amplitude, duration, and time morphology, depend both on the measurement system and the originating physical process. LPF observed two main kinds of glitches: a first class treated as an effective displacement-measurement artifact in the optical metrology chain and another class due to spurious forces acting on the test masses (TMs). 
Displacement glitches have been rarely observed in nominal conditions, have a typical duration comparable with the LISA sampling cadence, and carry negligible impulse per unit of mass as compared to the typical forces acting on the TMs <cit.>. As a consequence, fast, low-impulse glitches could be expected to affect the geodesic motion of the LISA constellation only mildly. On the contrary, force events result in impulse-carrying glitches lasting from tens of seconds to several hours, have a significant impact on the noise performance, and can potentially contaminate GW detection and parameter estimation. During its ordinary runs, LPF observed 102 impulse-carrying glitches and 81 of these were visible in the data stream as a sharp, positive offset of the residual force-per-unit-mass (henceforth loosely referred to as “acceleration”) <cit.>. These acceleration glitches correspond to the two TMs moving toward each other along the sensitive axis of the pair, i.e. the direction joining their respective centers of mass. The rate of these events has been estimated to be about 1 per day and compatible with a Poisson distribution <cit.>. Several possible physical origins for glitches have been vetoed by extensive cross-checking and correlation analysis on LPF data, with the most plausible explanation pointing to either gas outbursts or virtual leaks in the vacuum chamber and the material surrounding the TMs. Dedicated experimental studies are underway to corroborate this hypothesis <cit.>. §.§ Guiding principles for LISA differential acceleration measurements We now list a few guiding principles behind our modeling choices: * Long-lived glitches related to force phenomena such as those observed by LPF are the most relevant for LISA. For these, we adopt a phenomenological parameterization suitable to describe their temporal evolution in terms of differential test-mass accelerations. * Constructing the corresponding signal model for fractional phase observables in the frequency domain is more complex, although doable. * Long-lived transients present in a displacement (optical phase) or velocity (optical frequency) observable disappear in an acceleration observable, with the signal disturbance limited to the duration of the external force transient. Likewise, glitch parameters related to the initial conditions – position and velocity – are eliminated with an acceleration observable. * In a realistic operational setup, systematic errors arising from force disturbances (e.g. stiffness coupling) could be subtracted directly in acceleration. Thus, our fitting model does not require any additional integration or whitening filter. * When the effective glitch “signal” has spectral content mainly near the low-frequency end of the LISA sensitivity range, differentiation is numerically safer than integration. In this regime, data correction from systematics in the displacement variables is still viable. * The corresponding TDI variables written in acceleration allow for a straightforward inclusion of LPF glitches in a Bayesian inference framework. * GW signal models can be easily rewritten as effective accelerations by differentiating those already available in phase or fractional frequency. These broad considerations are mostly inspired by the observational equivalence between GWs and tidal forces accelerating TMs relative to their local inertial frames <cit.>. We thus opt to implement our joint inference for glitches and GWs with suitable acceleration TDI variables. 
§ TRANSIENTS MODELING The fundamental observable in LISA is the phase evolution Δϕ of a one-way propagating laser along each of the six links connecting the satellites. This can be equivalently written as an optical pathlength L=Δϕ/ω_l , where ω_l is the central frequency of the laser signal, which is assumed to be constant. We now focus on three different mechanisms perturbing the phase readout. §.§ Acceleration transients The two TMs housed in each of the LISA satellites are expected to independently exchange momentum with their surrounding environment (see Fig. <ref> for a schematic representation). We model the resulting transient acceleration profile a⃗_i of the i-th test mass as in Ref. <cit.>. We use a two-damped exponential model inspired by glitches observed in LPF, namely g(t;A,β_1,β_2,τ)=A/β_1 -β_2(e^-t-τ/β_1-e^-t-τ/β_2)Θ(t-τ), which we refer to as Model A1. Equation (<ref>) integrates to the net transferred momentum per unit mass: ∫_-∞^+∞ g(t;A,β_1,β_2,τ) dt = A . The parameters β_1,β_2 describe the typical timescales of the two exponentials while τ is the glitch onset time entering the Heaviside step function Θ. The corresponding Fourier-domain representation is g(ω;A,β_1,β_2,τ) = -A e^-i τω/(β_1ω -i) (β_2ω -i) . Accommodating glitches of unknown shape requires a more flexible model. We construct this using a superposition of S Gabor-Morlet shapelets g(t) = ∑_i^Sσ(t ; A_i, τ_i, β_i, n_i), where σ(t ; A, τ, β, n) = c_n ψ_n(t-τ/β), ψ_n(t) = 2t/n e^-t/n L_n-1^(1)(2t/n)Θ(t), c_n = (-1)^n-1A/2β n^2, and L_n^(α)(t) is the n-th generalized Laguerre polynomial <cit.>. We refer to these expressions as Model A2. Comparing to Ref. <cit.>, we use a different normalization c_n for the individual shapelets such that ∫_-∞^+∞σ(t;A,τ,β,n) dt = A , ∀ n ∈ℕ . In the frequency domain Eq. (<ref>) reads σ(ω; A,τ,β,n) = (-1)^n e^-i ωτA (n βω + i )^n-1/(n βω -i)^n+1 . Shapelets in this parametric family are quasi-orthogonal, i.e. ∫_-∞^+∞σ(ω; A,τ,β,n) σ^*(ω; A^',τ,β, m) dω = δ_nmπ A A^'/2nβ , ∫_-∞^+∞σ(ω; A,τ,β,n) σ^*(ω; A^',τ^',β,n)dω = π A A^'/2n^2β^2 e^-|τ -τ^'|/nβ(nβ+|τ -τ^'|) . From Eqs. (<ref>) and (<ref>) it is immediate to show that Model A1 tends to Model A2 with n=1 in the limit where β_1→β_2. §.§ Displacement transients The interferometer readout system is also expected to generate transient phase fluctuations. From Eq. (<ref>), we model these as effective displacement transients with the same agnostic shapelet parameterization used in Eq. (<ref>). We use a superposition of S shapelets Δ L(t) = ∑_i^Sσ(t ; D_i, τ_i, β_i, n_i) , where ∫_-∞^+∞dt Δ L(t) = ∑_i^S D_i is the net integrated displacement experienced by the test mass before returning asymptotically to its free-fall condition. We refer to this parametric family of glitches as Model D. The frequency domain representation follows from Eq. (<ref>) and reads σ(ω; D,τ,β,n) = (-1)^n e^-i ωτD (n βω + i )^n-1/(n βω -i)^n+1 . §.§ GW transients Among the large variety of typical sources populating the LISA sensitivity band, the most massive binary systems detectable produce hours to years-long transient signals. To leading-order, the binary time to merger t_m from a reference frequency f_ref is <cit.> t_m∼(3/4η) (f_ref/0.1 mHz)^-8/3(M_z/10^7M_⊙)^-5/3days , where η≡ m_1m_2/(m_1 + m_2)^2 is the symmetric mass ratio and M_z= (1+z) (m_1 + m_2) is the solar-system barycenter frame total mass for a source of component masses m_1 and m_2. 
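The glitch models introduced above are straightforward to evaluate numerically. The following is a minimal sketch (ours, for illustration only, not the analysis code) of Model A1 in the time and frequency domains and of a single Gabor-Morlet shapelet; the Fourier convention is assumed to match the expression quoted for g(ω), and the generalized Laguerre polynomial is taken from SciPy.

```python
import numpy as np
from scipy.special import genlaguerre

def glitch_a1_time(t, A, beta1, beta2, tau):
    """Model A1: two damped exponentials with net transferred momentum per unit mass A."""
    s = np.maximum(np.asarray(t, dtype=float) - tau, 0.0)   # zero before the onset time tau
    return (A / (beta1 - beta2)) * (np.exp(-s / beta1) - np.exp(-s / beta2))

def glitch_a1_freq(omega, A, beta1, beta2, tau):
    """Fourier-domain Model A1: -A e^{-i tau omega} / ((beta1 omega - i)(beta2 omega - i))."""
    return -A * np.exp(-1j * tau * omega) / ((beta1 * omega - 1j) * (beta2 * omega - 1j))

def shapelet_time(t, A, tau, beta, n):
    """Single shapelet sigma(t; A, tau, beta, n) used in Models A2 and D."""
    x = np.maximum((np.asarray(t, dtype=float) - tau) / beta, 0.0)
    psi = (2.0 * x / n) * np.exp(-x / n) * genlaguerre(n - 1, 1)(2.0 * x / n)
    c_n = (-1.0) ** (n - 1) * A / (2.0 * beta * n ** 2)
    return c_n * psi
```

A quick consistency check is that a trapezoidal integration of glitch_a1_time over a window much longer than beta_1 + beta_2 recovers the transferred momentum A, as required by the normalization above.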
By contrast, glitches observed by LPF have typical durations of seconds to hours and are positively correlated with the transferred momentum per unit mass ranging from 10^-2 to 10^3 pm/s <cit.>. Their broadband, short-lived morphology makes them the most likely to impact parameter estimation for GW transient sources of comparable duration. We select three fiducial noise transients and superimpose them on a short-lived (t_m = 30 hours) high-mass (M_z = 6×10^7 M_⊙, η=3/16) MBHB at redshift z = 5. We assume zero sensitivity below 0.1 mHz <cit.>. We consider a short-duration Model D glitch (β = 5 s), a moderate-duration Model A2 (β = 40 s), and a long-duration Model A1 glitch with β_1 +β_2 = 3300 s. All three glitches have peak amplitudes close to the merger time of the GW source, as shown in Fig. <ref>. For a conservative approach, we fine-tune the glitch onset times to maximally impact the reconstruction of GW source parameters. This is done by maximizing the match between the glitch and GW waveforms as shown in Fig. <ref> (see Sec. <ref> for more details). We model the GW signal with the IMRPhenomXHM <cit.> waveform approximant which captures the full coalescence of a quasi-circular, non-precessing black-hole binary. The implementation of the LISA response to this GW signal in the Balrog code has been presented in Ref. <cit.>. We choose to parametrize the injected GW signal as follows: m_1z,2z and χ_1,2 denote the binary component redshifted masses and aligned dimensionless spins, respectively; t_m, ϕ_0, ψ denote the time to merger introduced in Eq. (<ref>), initial phase and polarization, respectively; sinβ, λ denote the (sine-)ecliptic latitude and longitude; d_L and ι denote the source luminosity distance and inclination. Tables <ref>, <ref> and <ref> list the parameter values of our fiducial GW source, which has an SNR of 187 and is common across all of our runs. § ACCELERATION TDIS We use Eqs. (<ref>), (<ref>), and (<ref>) to model the TDI variables <cit.> s̃_k(f;θ) entering the likelihood, cf. Sec. <ref>. We work in the constant equal-armlength approximation and label the three TDI variables M_X, M_Y, and M_Z. In this approximation, one needs a single time-delay operator 𝒟, defined by 𝒟[f(t)] = f(t-L) . This is applied to the single-link phase measurements y_ijk. Signals denoted by y_ijk or y_ij^' k are emitted by the i-th satellite, received by the k-th satellite, therefore traveling along either L_j or L_j^' (see Fig. <ref> for a schematic representation). The indexes j and j^' are used to denote cyclic and anti-cyclic permutations of 123, respectively. We thus obtain the TDI variables M_X = y_231 + 𝒟y_13^'2 - y_32^'1 -𝒟 y_123 , M_Y = y_312 + 𝒟y_32^'1 - y_21^'3 -𝒟 y_231 , M_Z = y_123 + 𝒟y_21^'3 - y_13^'2 -𝒟 y_312 . Incorporating Model A1 and Model A2 signals into Eqs. (<ref>), (<ref>), and (<ref>) requires integrating the single-link differential accelerations twice. However, any non-zero total transferred momentum necessitates artificial regularization or ad-hoc approximations to construct a Fourier-domain representation of the signal. We solve this problem by introducing a set of “acceleration TDIs” G_X,Y,Z which are trivially related to Eqs. (<ref>), (<ref>), and (<ref>) by double differentiation. In the frequency domain one has ℱ[G_X] = (2 π f)^2 ℱ[M_X], with G_X = g_231 + 𝒟g_13^'2 - g_32^'1 -𝒟 g_123 , where ℱ denotes the Fourier transform operator and g_ijk(t) = d^2/dt^2[y_ijk(t)] . Similar definitions hold for G_Y, G_Z upon cyclic permutation of indices. 
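To illustrate how these combinations can be evaluated in practice, the acceleration TDIs can be assembled directly in the frequency domain, where the delay operator acts as a phase factor. The snippet below is a schematic sketch (not the Balrog implementation); the single-link inputs are assumed to be already-projected acceleration signals, such as the glitch models of the previous section, sampled on a common frequency grid.

```python
import numpy as np

def delay(f, L):
    """Frequency-domain action of the delay operator D[x(t)] = x(t - L):
    multiplication by exp(-2*pi*i*f*L) (c = 1 units, constant equal armlengths)."""
    return np.exp(-2j * np.pi * f * L)

def acceleration_tdi_X(f, g_231, g_13p2, g_32p1, g_123, L):
    """Assemble G_X = g_231 + D g_13'2 - g_32'1 - D g_123 from single-link accelerations."""
    D = delay(f, L)
    return g_231 + D * g_13p2 - g_32p1 - D * g_123

def displacement_tdi_from_acceleration(f, G_X):
    """Invert F[G_X] = (2*pi*f)^2 F[M_X] to recover the displacement variable
    (f = 0 must be excluded, where the relation is singular)."""
    return G_X / (2.0 * np.pi * f) ** 2
```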
The key advantage of introducing a new set of TDIs lies in its instrumental robustness. Equation (<ref>) also allows us to conveniently recycle signal models available in fractional displacement by including both Model D glitches and GW signals. Furthermore, Eq. (<ref>) does not require a transfer function to model acceleration glitches. It is important to note how the acceleration TDI variable G_X (G_Y, G_Z) is insensitive to glitches acting on links L_1 and L_1^' (L_2 and L_2^', L_3 and L_3^'). This would no longer be true if a single glitch affects more than one TM (or more optical phase measurements); further modeling on this point will be presented elsewhere. Following the conventions shown in Fig. <ref>, the single-link perturbation g_ijk(t) is obtained from the instantaneous accelerations g⃗_i(t) and g⃗_k(t-L) which are experienced by sender i and receiver k along the link j, and projected along the unit-vectors â_j(t-L) and â_j^'(t), respectively. We associate a unit vector â_j to each test mass M_j pointing in the direction opposite to L_j. For simplicity, we denote the associated vector components a_j. Given the choice of the local reference system, a positive value a_i corresponds to a negative displacement Δ L_i. The three TDI observables in terms of the individual test mass accelerations are G_X = (1+𝒟^2)(a_2^' - a_3) + 2𝒟(a_2 - a_3^'), G_Y = (1+𝒟^2)(a_3^' - a_1) + 2𝒟(a_3 - a_1^'), G_Z = (1+𝒟^2)(a_1^' - a_2) + 2𝒟(a_1 - a_2^') . Following the standard procedure <cit.>, we combine G_X, G_Y, and G_Z into three noise-orthogonal variables G_A = (G_Z - G_X)/√(2), G_E = (G_X - 2G_Y + G_Z)/√(6), G_T = (G_X + G_Y + G_Z)/√(3). Equations (<ref>), (<ref>), and (<ref>) define the data pieces entering our inference pipeline. § INFERENCE The initial search of a GW in noisy data is achieved through matched-filtering techniques <cit.> which provide initial guesses on the signal parameters. If glitches are present, their preliminary detection and subtraction might not be sufficient to provide data that are sufficiently cleaned to accurately infer the parameters of the astrophysical source <cit.>. Previous studies presented a matching-pursuit algorithm for an automated and systematic glitch detection <cit.>, showing that, while the search grid on the damping parameter is too coarse to accurately obtain the best-fit glitch, it provides a reliable initial guess. For practical purposes, here we assume that such a guess has been identified from the data and can be used to inform our subsequent analyses. We perform a joint parameter estimation, fitting simultaneously for GW signals and noise artifacts. We construct posteriors on parameters θ, p(θ|d)∝ℒ(d|θ)π(θ), through stochastic sampling of the likelihood ℒ(d|θ) under a prior π(θ). We employ a coherent analysis on the three noise-orthogonal TDI channels d = {d_k; k = M_A, M_E, M_T} when considering displacement variables and d = {d_k; k = G_A, G_E, G_T} when considering acceleration variables. We use a Gaussian likelihood <cit.> lnℒ(d|θ)=-∑_k(d_k-s_k(θ)|d_k-s_k(θ))_k/2+const., where s_k is the k-th TDI output frequency series associated to the injected signal s(f;θ). The outputs s_k represent either acceleration or fractional displacements depending on the chosen TDI variable set, thus containing acceleration glitches, displacement glitches, GW transients, or a combination of these (cf. Sec. <ref>). 
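For concreteness, a minimal sketch of the noise-orthogonal combination and of this Gaussian log-likelihood follows. It is illustrative only: the noise-weighted inner product anticipates the definition given in the text immediately below (4 Re ∫ ã* b̃ / S_k df, discretized on a uniform frequency grid), and the PSD model S_k is assumed to be supplied by the user.

```python
import numpy as np

def aet_from_xyz(G_X, G_Y, G_Z):
    """Noise-orthogonal combinations A, E, T built from the X, Y, Z variables."""
    G_A = (G_Z - G_X) / np.sqrt(2.0)
    G_E = (G_X - 2.0 * G_Y + G_Z) / np.sqrt(6.0)
    G_T = (G_X + G_Y + G_Z) / np.sqrt(3.0)
    return G_A, G_E, G_T

def inner_product(a, b, psd, df):
    """Discretized noise-weighted inner product 4 Re sum a*(f) b(f) / S(f) df."""
    return 4.0 * np.real(np.sum(np.conj(a) * b / psd)) * df

def log_likelihood(data, template, psd, df):
    """Gaussian log-likelihood ln L = -(1/2) sum_k (d_k - s_k | d_k - s_k)_k, up to a constant.
    'data', 'template' and 'psd' are dictionaries keyed by TDI channel (e.g. A, E, T)."""
    return -0.5 * sum(
        inner_product(data[k] - template[k], data[k] - template[k], psd[k], df)
        for k in data
    )
```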
The noise-weighted inner product is defined as (a | b)_k = 4 ℜ∫_f_min^f_max ã^*(f)b̃(f)/S_k(f) df, where ℜ denotes the real part, ã(f) is the Fourier transform of the time series a(t), and S_k(f) is the one-sided noise spectral density of the k-th TDI channel. We use the match between two signals M(a,b) = (a| b)/[(a| a)^1/2(b| b)^1/2] to optimize the onset time of the injected glitches as discussed in Sec. <ref>. Model selection is performed using log-Bayes factors log_10ℬ_i^j = log_10𝒵_i - log_10𝒵_j, where i and j are labels identifying the competing models, and 𝒵(d) = ∫ d θℒ(d|θ) π(θ) is the evidence of each parameter estimation. We consider a LISA mission lifetime of T_LISA=4 years, roughly equivalent to a calendar observation time of 4.5 years with an effective duty cycle of 82%. Our frequency resolution is therefore Δ f≈1/T_LISA=1.7×10^-8 Hz. We set f_min=0.1 mHz and f_max= 30 mHz, which is well above the maximum frequencies of both the fiducial GW signal and all glitch signals. We use a semi-analytical noise spectral density model S_k(f) <cit.> describing the superposition of LISA stationary instrumental noise and astrophysical confusion noise from unresolved Galactic binaries <cit.>. In order to reduce the computational cost, we evaluate inner products from Eq. (<ref>) using a Clenshaw-Curtis integration algorithm <cit.>, see e.g. Ref. <cit.> for a summary of its application to LISA data. Parameter estimation is performed with the Balrog code, which is designed to work with different stochastic samplers. In particular, in this paper we use the nested sampling algorithm <cit.> as implemented in Nessai <cit.>. We choose uniform priors on each parameter over either its entire definition domain or a range that is sufficiently large to enclose the entire posterior. § RESULTS We perform two sets of parameter-estimation runs: (i) Joint inference runs on both GW signal and glitches (Sec. <ref>), listed with IDs 1 to 14 in Table <ref>. (ii) Inference runs where we inject and recover glitches without GW signal (Sec. <ref>), listed with IDs 15 to 32 in Table <ref>. §.§ Joint inference with glitches and GWs If a preliminary search fails to identify and remove a glitch from the data, it is important to assess its impact on the parameters of the overlapping GW source. We thus tackle the following cases for each of the three signals illustrated in Fig. <ref>: * Parameter estimation in the absence of a glitch in the data (“reference” runs, with IDs 1 and 2). * Parameter estimation ignoring a glitch when present in the data (“glitch-ignorant” runs, with IDs 6-8). * Parameter estimation including in the signal model a glitch that is present in the data (“glitch-complete” runs, with IDs 9-11). Bayesian evidence for each run is listed in Tab. <ref>. We report log_10ℬ_9^6, log_10ℬ_10^7, and log_10ℬ_11^8 much greater than 2, indicating “decisive” evidence <cit.> in favor of a glitch being present in the data. Summaries are provided in Tables <ref>, <ref>, and <ref>. We find no appreciable differences in the posterior distribution of the GW-source parameters when comparing reference runs and glitch-complete runs, which is encouraging for LISA science. Individual parameters are well reconstructed, which is expected given the brightness of the source (SNR ≃ 187). In particular, the MBHB component masses, the primary aligned spin components, and time to merger are measured with an accuracy of Δ m_i /m_i ≈ 8-40%, Δχ_1 ≈ 0.2, and Δ t_m ≈ 600 s (where we quote the 90% credible interval of the marginal posterior distributions). 
Figures <ref>, <ref>, and <ref> show the posterior distribution for the fiducial MBHB of each glitch-complete run. Similarly, we do not report any appreciable difference with either fractional displacement or acceleration TDIs to model the same GW signal (see runs 1 and 2). On the contrary, glitch-ignorant runs point to a different conclusion. The resulting posterior depends on the chosen duration and amplitude of each transient (see runs 7, 8, and 9). We find a long-duration, small-amplitude Model A1 glitch massively contaminates the reconstruction of the GW parameters, to a point that the signal cannot be recovered at all. This is shown in Fig. <ref>, where the glitch-ignorant distribution (red) shows evident issues in the underlying stochastic-sampling procedure. This has to be contrasted with the regularity of the glitch-complete posterior distribution (blue), where instead the parameters of both GW signal and noise transient are successfully recovered. In particular, when the glitch is ignored we find that the posterior on the luminosity distance rails heavily against the lower bound of its prior, thus making the GW source reconstruction highly biased, even in a parameter space that largely encloses the posterior of the glitch-complete run. As shown in Fig. <ref>, a Model A2 glitch with moderate duration and amplitude induces milder biases. Although the posterior support is far from the prior boundaries, the injected values lie outside the 99% credible interval for both mass and spin parameters. For the merger time, the true value lies on the 97% confidence interval of the corresponding marginalized posterior distribution. The injected values of polarization, initial phase, inclination, and source position are within their one-dimensional 90% confidence interval. Equivalent runs for a Model D glitch are shown in Fig. <ref>. This is a noise transient that overlaps with the GW signal only for a small fraction of a cycle. As expected, we find such a glitch does not significantly impact the measurement of the GW parameters. Finally, we note that our glitch-complete runs do not exhibit significant cross-correlations between the glitch and GW parameters, thus effectively decoupling the inference on the two signals. §.§ Inference with glitches alone, without GWs We consider all three glitch models presented in Sec. <ref> and inject them separately in the LISA data stream. Results are shown in Figs. <ref>, <ref>, and <ref> as well as Tables <ref> and <ref>. We perform model selection with different (i) number and order of shapelet components, (ii) number of glitches, and (iii) injection point. In particular, in Tab. <ref> we report “strong” evidence in favor of the correct noise-transient model for the selection of the number and order of shapelets; these are discrete parameters we can confidently identify using log_10ℬ_15^j with j=16,…,20. We obtain a “substantial" evidence log_10ℬ_15^21=0.9 for selecting the correct number of glitches. Injection points are selected with a “decisive” evidence given by ℬ_22^n with n=23,…,27. All runs point to the same, encouraging result: glitch parameters are confidently reconstructed. In particular, we recover amplitudes across all models (i.e. A, A_0,1, D_0,1,2,3) with accuracies of 1%-30% at 90% credible level. Glitch-onset times are recovered with fractional accuracy ≲ 0.1%. The parameters β_i's in Model D glitches are recovered with an accuracy of 20%. 
On the other hand, Model A1 glitches exhibit correlation and multimodalities for the joint posterior on β_1 and β_2. This is expected given the waveform degeneracy upon exchange of these two parameters, cf. Eqs. (<ref>) and Eq. (<ref>). § CONCLUSIONS We presented a parameter-estimation strategy to simultaneously extract GWs from MBHBs and glitches from future LISA data. We developed several models for noise transients inspired by those observed by LPF. Crucially, we point out that dealing with glitches in the frequency domain greatly benefits from expressing the LISA response function (i.e. the TDIs) in terms of acceleration instead of displacement as usually done. Accounting for potential noise transients in the data leads to accurate reconstruction of all GW parameters without significant correlations with the glitch properties. On the contrary, ignoring glitches when present in the data might introduce significant systematic biases on the reconstructed parameters of the MBHB. Our analysis shows that the most crucial property is the length of the glitch, with results ranging from a complete loss of the GW signal to a negligible impact. When considering glitches in isolation, our procedure allows for confident identification of their number, location, and morphology in each of the models considered. It is important to stress that all glitch models in our suite have a relatively low number of parameters and these are largely uncorrelated to those of the GW source. The computational overhead of including potential glitches in the signal model is therefore negligible, thus making our approach promising for a future “global fit” procedure. This study is restricted to a single, fiducial GW source as well as glitches are conservatively placed at the time location that maximizes their matches with the GW signal. A broader injection-recovery study over the full MBHB and glitch parameter space is needed to forecast the impact of noise transients on GW signals in the future LISA catalog; this is left to future work. Overall, this paper showcases our readiness to model and precisely recover glitches when present in the LISA data stream, even when overlapping with GW sources of similar duration such as a MBHB. We thank Chris Moore, Federico Pozzoli, Eleonora Castelli, Natalia Korsakova, Stas Babak, Martina Muratore, and all Balrog developers for useful comments and inputs. A.S. and D.G. are supported by ERC Starting Grant No. 945155–GWmining, Cariplo Foundation Grant No. 2021-0555, and MUR PRIN Grant No. 2022-Z9X4XS. A.S., D.G., and R.B. are supported by the ICSC National Research Center funded by NextGenerationEU. R.D., M.C., S.V., D.V.,W.J.W. acknowledge funding from MUR under the grant PRIN 2017-MB8AEZ. R.B. acknowledges support through the Italian Space Agency grant Phase A activity for LISA mission, Agreement n. 2017-29-H.0. D.G. is supported by Leverhulme Trust Grant No. RPG-2019-350. Computational work was performed using University of Birmingham BlueBEAR High Performance Computing facility and CINECA with allocations through INFN, Bicocca, and ISCRA project HP10BEQ9JB. Software: We acknowledge usage of Mathematica <cit.> and of the following Python <cit.> packages for modeling, analysis, post-processing, and production of results throughout: Nessai <cit.>, matplotlib <cit.>, numpy <cit.>, scipy <cit.>.
http://arxiv.org/abs/2306.09011v1
20230615101202
CAD-Estate: Large-scale CAD Model Annotation in RGB Videos
[ "Kevis-Kokitsi Maninis", "Stefan Popov", "Matthias Nießner", "Vittorio Ferrari" ]
cs.CV
[ "cs.CV" ]
We propose a method for annotating videos of complex multi-object scenes with a globally-consistent 3D representation of the objects. We annotate each object with a CAD model from a database, and place it in the 3D coordinate frame of the scene with a 9-DoF pose transformation. Our method is semi-automatic and works on commonly-available RGB videos, without requiring a depth sensor. Many steps are performed automatically, and the tasks performed by humans are simple, well-specified, and require only limited reasoning in 3D. This makes them feasible for crowd-sourcing and has allowed us to construct a large-scale dataset by annotating real-estate videos from YouTube. Our dataset offers 108K instances of 12K unique CAD models placed in the 3D representations of 21K videos. In comparison to Scan2CAD, the largest existing dataset with CAD model annotations on real scenes, CAD-Estate has 8× more instances and 4× more unique CAD models. We showcase the benefits of pre-training a Mask2CAD model on CAD-Estate for the task of automatic 3D object reconstruction and pose estimation, demonstrating that it leads to improvements on the popular Scan2CAD benchmark. We will release the data by mid July 2023. § INTRODUCTION Semantic 3D scene understanding from images and videos is a major research topic, crucial for many computer vision applications, ranging from robotics to AR/VR scenarios. The final goal is to detect all objects in the scene, recognize their class, reconstruct their 3D shape, and estimate their pose within the overall scene coordinate frame. With the advances of scalable deep learning techniques, the field has progressed from reconstructing the 3D shape of one object in a simple image with trivial background <cit.>, to limited reasoning about object arrangements in simple multi-object scenes <cit.>, and finally to unrestricted multi-object 3D reconstruction in complex real-world scenes <cit.>. This evolution has been dependent on the availability of ever larger and more diverse data sets for training and evaluation <cit.>. Existing datasets for semantic 3D scene understanding fall broadly into two categories: synthetic and acquired from real images/videos. The former <cit.> feature artificial 3D scenes that are manually designed by human artists, and then rendered into synthetic images. While these datasets are relatively large, their images/videos expose a domain gap to real imagery <cit.>. Acquired datasets <cit.> annotate 3D objects on real images and videos (Table <ref>). Such datasets have been limited in size and diversity so far, partly due to limitations in their annotation process. They rely on specialized equipment to capture depth images (RGB-D) in order to get a high-quality 3D point cloud reconstruction of the scene. Humans then annotate objects on this 3D point cloud. However, it is very expensive and cumbersome to go and physically acquire RGB-D videos in the real world, which limits the number of scenes captured, as well as their variety (e.g. RGB-D sensors struggle outdoors due to sunlight, fail on glossy surfaces, and they have limited depth range). Moreover, annotating on 3D point clouds requires expert annotators able to reason in 3D. 
In this paper, we present the CAD-Estate dataset, which annotates real videos of complex scenes from Real Estate 10k <cit.> with globally-consistent 3D representations of the objects within them. For each object we find a similar CAD model from a database, and place it in the 3D coordinate frame of the scene with a 9-DoF pose transformation. We designed a semi-automatic approach which works on commonly-available RGB videos, without requiring a depth sensor, thereby opening the door to annotating many videos readily available on the web. In our approach many steps are performed automatically, and the tasks performed by humans are simple, well-specified, and require only very limited reasoning in 3D. This makes them feasible for crowd sourcing, enabling us to distribute work to a large pool of annotators. In turn, this has allowed us to construct a truly large-scale data set. CAD-Estate contains 107,910 instances of 12,429 unique CAD models, covering 20,806 videos (Sec. <ref>). The models span 49 categories, 28 of which have more than 100 objects annotated. In comparison, the largest existing dataset with CAD model annotations on real multi-object scenes (Scan2CAD <cit.>) has 8× fewer objects (14,225), 4× fewer unique CAD models, 2× fewer categories with more than 100 objects (14) and 14× fewer videos (1,506). In our experiments, we show that pre-training a modern model for automatic 3D object reconstruction and pose estimation <cit.> on CAD-Estate improves performance on the popular Scan2CAD benchmark <cit.>. Moreover, we establish baseline performance on our own test set, and provide ablation experiments to validate various choices of our annotation pipeline. § RELATED WORK Synthetic scene understanding datasets. Datasets of 3D object assets (without their poses on images) include ShapeNet <cit.>, 3D-FUTURE <cit.>, ABC <cit.> and ABO <cit.>. Most recently, Objaverse <cit.> released a large dataset of 818k 3D assets. Other synthetic datasets contain 3D objects placed in artificial 3D scenes designed by artists, usually indoor rooms <cit.>, and then rendered into images. Synthetic datasets are large scale (up to 818k objects of <cit.>), but require extra efforts to bridge the domain gap for applications on real imagery <cit.>. Real 3D scene understanding datasets. Several datasets have objects annotated on individual images (Table <ref>, top block). Sun RGB-D <cit.> provides image-depth pairs from an RGB-D sensor along with objects annotated with 3D bounding-boxes (no 3D shapes). PASCAL-3D+ <cit.> aligns simple CAD models to images by manually specifying the object pose and the focal length of the camera. They focus on simple images with fewer than 2 instances on average. IKEA Objects <cit.> and Pix3D <cit.> annotated one object per image by aligning a 3D CAD model on it. Moreover, their scale is limited by the requirement for having CAD models exactly matching the objects in the images, which are difficult to find. More recently, ABO <cit.> automatically estimated 3D object poses for part of their 3D assets, on automatically retrieved images (6.3k images with one object annotated in each). Other datasets annotate objects on videos (Table <ref>, bottom block). CO3D <cit.> and Objectron <cit.> have videos mostly featuring one object each, and provide either a reconstructed point cloud of the object <cit.> or a 3D bounding box <cit.>. 
Several works <cit.> use an RGB-D sensor to capture videos of rooms with multiple objects, then reconstruct a 3D point cloud scan of the scene by fusing the acquired depth maps. They then label this 3D scan with object class and instance labels, resulting in incomplete object shapes. Closer to our work, Scan2CAD <cit.> goes a step further, building on <cit.> by manually annotating posed CAD models on the 3D scan. These datasets heavily rely on a depth sensor, which limits their scale and applicability. In contrast, we propose an annotation method which works on RGB videos, enabling the annotation of videos readily available on the web. Moreover, our human annotation tasks are very simple, and require little reasoning in 3D. These two features make our approach more scalable. We construct CAD-Estate, which annotates 108k objects with clean CAD models and full 9-DoF poses on pure RGB videos. This is larger than any other dataset of real imagery, and is 8× larger than Scan2CAD, which also offers posed CAD models (on RGB-D video). Multi-object 3D reconstruction Many works tackle multi-object 3D reconstruction from a single image <cit.>. They are either trained on synthetic data <cit.>, or on small real datasets <cit.>. Similarly, recent learning-based approaches reconstruct a scene from a video <cit.>, and use Scan2CAD as their main evaluation benchmark. Our dataset can benefit all of these works as it offers new, large-scale, diverse, real video data with annotated complex spatial arrangements of 3D objects into scenes. In Section <ref> we show that pretraining on CAD-Estate boosts the results of <cit.> on the original dataset it has been trained for <cit.>. § DATASET CONSTRUCTION Given a video of a static scene, our goal is to create a globally-consistent 3D representation that contains all its objects. To achieve this, we propose a semi-automatic system that relies on a large database of CAD models. For each object in the scene, we find a similar-looking CAD model from the database and place it in the 3D coordinate frame of the scene by estimating its 9-DoF pose (i.e. 3D translation, 3D rotation, and 3D scale, allowing for independent scaling along each axis). We design the system so that many steps are performed automatically. We leave only a few, simple and well-specified tasks for human annotators. These are all decomposed over individual objects, removing the complexities of considering the whole scene, and involve only very limited reasoning in 3D. These characteristics make the tasks feasible for crowd sourcing, enabling us to distribute work to a large pool of annotators, as opposed to a few in-house experts <cit.>. This enables constructing a truly large dataset. We annotated videos of RealEstate10K <cit.>, which show multiple rooms of real estate properties. The videos are split into shots, and camera poses have been extracted using an SfM pipeline <cit.>. We use ShapeNet <cit.> as our CAD model database, which contains 51k objects over 55 classes. System overview. Our system receives an RGB video as input, with camera parameters for each frame (typically derived using SfM <cit.>). The output is the class, 3D pose (rotation, translation, scale), and 3D shape of each object in the video (represented as a CAD model from a database). The system amounts to a sequence of 5 stages: (1) We start by detecting objects in the video and tracking them over time, either automatically or with the help of humans (Sec. <ref>). Each track corresponds to one physical object in the scene and forms the unit of annotation. 
All further stages operate on one track at a time with the goal of reconstructing its pose, shape, and class. (2) For each track, we automatically select a few similar-looking CAD model candidates from the database, and then ask humans to choose the best match (Sec. <ref>). (3) We ask humans to annotate 3D ↔ 2D point correspondences between the chosen CAD model and the object in the video, on a few key-frames (Sec. <ref>). (4) We use the annotated correspondences together with the camera parameters of the key-frames to automatically estimate the 9-DOF pose of the object (Sec. <ref>). (5) Finally, we ask humans to verify the estimated pose for quality control (Sec. <ref>). §.§ 2D Object detection and tracking In this first stage we detect objects in the video and track them over time. Each track then corresponds to one physical object and forms the unit of annotation for all subsequent stages. We apply somewhat different procedures for the training and val/test sets of our dataset, in order to strike a good trade-off between automation (hence reducing human effort) and completeness of annotation (we want to capture all objects in the val/test set). Train set. We detect objects in each frame automatically using a SpineNet-based model <cit.>. We also extract an appearance descriptor for each detection box, by applying a Graph-Rise-based <cit.> model. Next, we associate detections over time, as common in tracking-by-detection approaches <cit.>. We compute various similarity scores between two detections in different video frames, including the similarity between their appearance descriptors, the difference in their class labels, and the spatial continuity of the box positions in adjacent frames. Then we cluster all detections across all frames into tracks based on these similarity scores using the Clique Partitioning approach of <cit.>. Val/Test sets. Automatic detection and tracking models can sometimes miss objects as they do not work perfectly. Since for the validation and the test sets we strive for a high degree of completeness, we annotate missing object tracks manually (in addition to the automatic ones). For this we developed an efficient custom interface that allows annotators to draw a whole object track in time, i.e. drawing a bounding-box <cit.> on each key-frame where a particular physical object appears. For efficiency, we automatically focus work on 6 frames regularly-spaced in time. The annotators see all current tracks already found by the automatic approach, and only draw missing ones. Note how we apply this manual annotation procedure only to a rather small subset of the data (val/test sets have fewer videos than train, Table <ref>). §.§ Selecting a CAD model The second stage is to select a suitable CAD model for a tracked object. We first select 10 candidates automatically from the database. We then ask a human to chose the one that looks the closest to the object in the video. This removes the need for annotators to search through the large database. Finding candidates automatically. We find candidate CAD models for an object track by considering both appearance similarity and class label similarity cues. During pre-processing, we render the CAD models in the database from 10 random viewpoints and compute an appearance descriptor for each view (the same as in Sec. <ref>). We then compute the appearance similarity between an object box in a frame of the object track and a CAD model view as the cosine similarity of their descriptors. 
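As an illustration of this ranking step, the sketch below scores CAD models against an object track using the cosine similarity of appearance descriptors, aggregated over all frame/view pairs, with the class-label similarity described next folded in as a multiplicative factor. This is a simplified sketch: the aggregation by mean, the function names, and the data structures are illustrative assumptions rather than our actual implementation.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two descriptor vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def track_to_model_similarity(track_descs, view_descs, class_sim=1.0):
    """Appearance similarity between an object track and one CAD model:
    aggregated cosine similarity over all (frame descriptor, rendered view descriptor)
    pairs, softly gated by the class-label similarity term."""
    sims = [cosine(d, v) for d in track_descs for v in view_descs]
    return class_sim * float(np.mean(sims))

def top_candidates(track_descs, model_views, model_class_sims, k=10):
    """Rank all CAD models and keep the top-k candidates shown to the annotator."""
    scores = {m: track_to_model_similarity(track_descs, views, model_class_sims.get(m, 1.0))
              for m, views in model_views.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```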
For the class label similarity we need to take special care, as the label spaces of the CAD model database and the object detector are different and feature multi-way relationships (e.g. the CAD "cabinet" matches the detector's "filing cabinet", "wardrobe", and "chest of drawers"). Hence, we embed each class label name into a common semantic space using the Universal Sentence Encoder <cit.>, and compute the cosine similarity between any two class labels in this space. This is a general solution that can work with any label space. We combine the appearance and class similarity scores with a simple product. To compute the overall similarity between an object track and a CAD model, we aggregate the combined appearance-class similarity over all pairs of frames and CAD model views. We use this overall similarity score to rank CAD models and select the top 10 as candidates for an object track. In practice the class similarity act as a soft filter for the appearance similarity, so the best CAD models are the most similar-looking ones to the object in the track, among those that have a similar class label. Selecting the best candidate with a human. We ask annotators to choose the best matching candidate. We show them the detected object on a set of evenly spaced key-frames, next to the rendered CAD model candidates. Annotators can navigate between key-frames, to see the object from multiple views. Annotators can declare that none of the candidates are similar enough to the tracked object (hence that track is not passed on to the later stages). §.§ 3D ↔ 2D point correspondences We now ask humans to annotate point correspondences between the 3D surface of the CAD model and the video frames of the tracked object (Fig. <ref>). As for the CAD candidate selection case, the interface enables annotators to navigate between key-frames. We show the selected CAD model next to the key-frames. For each key-frame, we ask annotators to annotate 4-6 point correspondences between the CAD model and the frame. To make the task easier, they can rotate and flip the CAD model in 3D, in order to roughly match the orientation of the object in the frame. We will use these correspondences to recover the 9-DOF object pose in the next stage. Our approach consists of steps that are easy to understand and easy to master. Annotators control rotation with Orbit Controls <cit.>, which translates 2D mouse movements to view-local object rotation in 3D in an intuitive way. Afterwards, clicking on CAD-to-image point correspondences is very easy and is similar to other familiar 2D annotation tasks. Most importantly, this approach is object-centric and requires no reasoning in 3D in the global coordinate frame of the scene. Instead, this harder task is done automatically in the pose estimation stage of our system. Finally, annotating point correspondences is decoupled between frames: the annotator is free to pick different points in every frame. This makes it easy even for objects with complex shapes. §.§ Object 3D pose estimation We use the 3D ↔ 2D point correspondences to automatically estimate a global 9-DOF pose for the object. We apply a non-linear optimization method, which integrates evidence from all views in a track, and consists of multiple objectives. We express the object pose as a 9-DOF transformation that brings the CAD model from its canonical pose to the world coordinate frame of the scene. The transformation has 3 components: 3D translation T, 3D rotation R, and anisotropic 3D scale S (i.e. 
we allow independent scaling along each axis). The goal is to recover this unknown transformation (T,R,S). We setup below several objectives, which are functions of (T,R,S), and combine them into an overall objective. Finally we minimize that overall objective over (T,R,S). Point re-projection objective. We know the extrinsic and intrinsic camera parameters at each video frame. Given a potential (T,R,S) we can use it along with the camera parameters to project the 3D points on the surface of the CAD model to the video frame. Therefore, we setup a point re-projection objective L_repr(T,R,S) which measures the L1 distance between the projected 3D points and their corresponding 2D points in each frame (and sum over all frames, Figure <ref>). The correspondences are given by the 3D ↔ 2D annotations from Sec. <ref>, and we also take into account whether the annotator flipped the CAD model. Up-axis objective. Most objects in our videos are usually placed vertically in an upright position. We reflect this by imposing an L1 objective that penalizes 3D rotations that change the "up"-axis of the object with respect to the world. We do this directly on the target rotation matrix R by applying the additional objective L_up(R). This objective is applied to object classes that are usually found in upright position (e.g. chairs, tables, cabinets, etc.), whereas other classes such as pillows are excluded. For this objective to be applied, we need to know the up-axis for the objects in our CAD database, and in the world coordinate frame (which we do for ShapeNet and RealEstate10K). Front-of-camera objective. We encourage object pose transformations that place all annotated 3D points in front of their respective cameras (rather than behind), by penalizing 3D points that have a negative depth in the coordinate frame of that camera. Special scale parameterization for co-planar 3D points. Sometimes, all 3D points chosen in Sec. <ref> by the annotator on the CAD model are co-planar. This typically happens when the video shows only a planar part of the object, e.g. a table seen only from the top, or a cupboard seen only frontally. Co-planar 3D points prevent resolving all three dimensions of the target scaling transformation S. We detect such cases automatically during annotation. We then resolve them during pose estimation by constraining the scaling factor perpendicular to the annotated plane to be the average of the other two scale factors. This reduces the DOF of the scaling transformation S down to 2. Special rotation/scale parameterization for symmetric objects. In many cases the retrieved CAD models are symmetric, which typically leads to inconsistent point correspondence annotations across frames (e.g. an annotator picking a particular 3D point on a rotation-symmetric lamp corresponds to a point in the video in a frame, but then picking a different 3D point in a different frame, as these are equivalent up to symmetry). We handle these cases by optimizing for a rotation w.r.t any of the symmetries of the object in the reprojection objective. We consider the same symmetries as in Scan2CAD, i.e. 2-way (e.g. a rectangular table), 4-way (e.g. a square table), and 36-way (e.g. a round table). We detect symmetries automatically directly on each CAD model. For fully symmetric objects (36-way symmetric), we further constrain the two scaling factors around the up-axis to be identical. 
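Before these terms are combined into the overall objective below, here is a schematic NumPy version of the two main ingredients: a re-projection L1 loss under the 9-DoF transform and per-frame pinhole cameras (with the front-of-camera term folded in as a negative-depth penalty), and an up-axis penalty on the rotation. It is a simplified sketch with illustrative names, assuming a y-up convention and omitting the symmetry and co-planarity handling described above.

```python
import numpy as np

def transform_points(P, R, t, s):
    """Apply the 9-DoF pose: anisotropic scale s (3,), rotation R (3x3), translation t (3,)."""
    return (R @ (P * s).T).T + t

def project(Pw, K, R_cam, t_cam):
    """Pinhole projection of world points into one frame with extrinsics (R_cam, t_cam)."""
    Pc = (R_cam @ Pw.T).T + t_cam
    uv = (K @ Pc.T).T
    return uv[:, :2] / uv[:, 2:3], Pc[:, 2]          # pixel coordinates and depths

def reprojection_loss(frames, R, t, s):
    """L1 distance between projected CAD points and clicked 2D points, summed over frames.
    Each frame provides (P3d, p2d, K, R_cam, t_cam) from the annotated correspondences."""
    loss = 0.0
    for P3d, p2d, K, R_cam, t_cam in frames:
        uv, depth = project(transform_points(P3d, R, t, s), K, R_cam, t_cam)
        loss += np.abs(uv - p2d).sum()
        loss += np.maximum(-depth, 0.0).sum()        # front-of-camera penalty
    return loss

def up_axis_penalty(R, up=np.array([0.0, 1.0, 0.0])):
    """L1 penalty on rotations that tilt the object's up-axis away from the world up-axis."""
    return np.abs(R @ up - up).sum()
```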
Optimization We combine the above objectives into an overall one: L_pose(T,R,S) = L_repr(T,R,S) + α· L_up(R) +β· L_front(T,R,S). We minimize this objective over (T,R,S) with Adam <cit.>. α and β are hyperparameters set empirically. §.§ Pose verification by humans In this last stage, we verify whether the pose computed in the previous stage matches the image contents in the video. This is necessary as pose estimation can fail for several reasons, including limited/degenerate camera motion, occlusion, and truncated objects. We render the CAD model as overlay on top of the video frames in a track, using the camera parameters and the estimated object pose (T,R,S). We then ask human annotators to judge whether the rendered CAD aligns well with the object in the video. If it aligns well in all key-frames, we mark the pose as correct. § DATASET ANALYSIS General statistics. Table <ref> compares general statistics of CAD-Estate to the closest existing video dataset Scan2CAD <cit.>. We further split the stats of our dataset into training set and val/test sets. CAD-Estate is an order of magnitude larger than Scan2CAD (20.8k vs. 1.5k scenes, and 107.9k vs. 14.2k posed objects). The annotated objects cover more classes (49 vs. 35 in Scan2CAD). Figure <ref> shows the distribution of annotated objects over classes. Despite the long tail, there are many more classes that have a large number of objects (13 classes with >1000 objects vs 4 in Scan2CAD, and 28 classes with >100 objects vs 14). CAD-Estate also offers greater diversity of object 3D shapes. It is annotated with 12.4k CAD models vs 3k for Scan2CAD (noting that in both datasets the CAD shapes are a close match rather than exactly matching the shape of the object in the image). Camera framing. There is a qualitative difference between the video captures of Scan2CAD (from ScanNet <cit.>) and CAD-Estate (from RealEstate10K <cit.>). The videos of <cit.> were captured with an RGB-D sensor, taking close-up views which facilitates acquiring good quality depth maps. Instead, the videos of CAD-Estate are captures of real estate properties with more distant views that depict a larger part of each room, as the goal was to showcase the space for selling it. The video shots are also shorter (143 frames per video in CAD-Estate vs. 1.6k in Scan2CAD). As a consequence of the more distant views, several key statistics are different in CAD-Estate, compared to Scan2CAD: (1) More objects are visible in one video frame at the same time: on average, 7.9 in CAD-Estate vs 3.3 in Scan2CAD. (2) More objects are further away from the camera and thus appear smaller on the images: on average, the bounding-box of a CAD-Estate object covers 7.5% of the image area vs. 16.5% in Scan2CAD. (3) The dynamic range of the Z position of objects is larger: in CAD-Estate the farthest object is 4.5× farther from the camera than the nearest one, vs. 2.3× in Scan2CAD. (4) Object truncation is much higher in ScanNet compared to CAD-Estate, where most objects are completely visible (Figure <ref>). This is also a consequence of the capture process, as ScanNet needs close-up captures due to the range of the depth sensor. The camera framing statistics above highlight how CAD-Estate poses a different challenge than Scan2CAD for automatic scene understanding methods, as they need to handle more complex views with more objects visible at the same time, many smaller objects, a higher variability of their distance to the camera, but also objects that are less truncated by the image frame. § EXPERIMENTS We first perform several experiments by training a learning-based method for CAD model alignment <cit.> on CAD-Estate (Sec. 
<ref>), demonstrating that it leads to performance improvements on the Scan2CAD test set, and establishing that our test set offers a harder challenge. Then in Sec. <ref> we provide ablation experiments for the different components of our annotation pipeline, showing their relative merit and demonstrating that they are all necessary to achieve high quality. §.§ Training Mask2CAD on In this section, we showcase how can be used to train Mask2CAD <cit.>, a deep learning method for single-image 3D object reconstruction and pose estimation. We start by studying the benefits of having a large training set by pre-training <cit.> on and then fine-tuning and evaluating on Scan2CAD <cit.> (on which Mask2CAD was originally benchmarked). Then we establish baseline results for Mask2CAD trained and tested on . From to Scan2CAD. Mask2CAD has been extensively evaluated <cit.> by training and testing on the Scan2CAD dataset <cit.>, whose training set consists of 9.5k objects over 19k frames on 1194 scenes. We run the same experiment, but first pre-train Mask2CAD on a much larger training set of 45k objects over 150k frames sampled from 11k scenes of 's trainval. Then we fine-tune on the train set of Scan2CAD, and evaluate on the test set with the popular metrics AP_mesh, AP_mesh^50, and AP_mesh^75 <cit.>. Table <ref> presents the results on all 3 metrics above, and additionally per-class AP_mesh. As the results show, pre-training on our large dataset improves the performance of Mask2CAD significantly, for almost all classes. We observe that the improvement is greater for classes for which has many objects (cabinet, table, bed). Train and test on . We now establish baseline results for Mask2CAD on (training on our trainval set, and evaluating on the test set). We use the same classes as Scan2CAD for this experiment, and the same evaluation metrics, enabling approximate comparisons across datasets. The results in Table <ref> show that Mask2CAD achieves considerably lower performance on than on Scan2CAD. Especially on the strict IoU threshold AP_mesh^75, the performance is much lower (5.7 vs. 2.4). This indicates that our test set might offer a harder challenge. provides more complex scenes that are difficult to reconstruct, and objects are in general further away from the camera, which makes pose estimation harder. §.§ Optimization objectives for 3D pose estimation We study the influence of the object pose optimization objectives of Section <ref> on pose estimation quality. We evaluate by asking annotators to verify the poses produced by different versions of the pose estimator (as in Sec. <ref>, but on a subset of the data). A higher percentage of positively verified object poses indicates a better pose estimator. Starting from 52.2%, the percentage of positively verified poses improves steadily as we add the special parameterization of the re-projection objective for handling co-planar 3D points (57.6%), the one for handling symmetric objects (62.9%), and the up-axis objective (74.9%). This demonstrates that all of them contribute to the quality of our dataset, as they enable estimating a correct pose for a greater number of objects. The largest contribution is made by the up-axis objective, as it affects many objects. In comparison, 27.8% of all objects in our dataset are symmetric, and only 15.5% received co-planar 3D point annotations. § CONCLUSIONS We introduced a new way to annotate 9-DoF pose of CAD models on monocular RGB videos.
As a result of our method, we obtained the dataset, which features 108K instances of 12K unique CAD models placed in the 3D representations of 21K videos. This dataset is an order of magnitude larger than existing CAD annotation efforts facilitated by our new annotation method. We have shown experimentally that the quantity and diversity of such data significantly benefits the modern CAD alignment technique Mask2CAD, leading to improved performance on Scan2CAD. However, we believe that this is only a first step, and is an important stepping stone towards leveraging CAD priors for 3D scene reconstruction and understanding in the context of a wide range of downstream tasks. Acknowledgements: We thank Prabhanshu Tiwari, Sweety Chaudhary, Abha Dwivedi, Ashlesha Shantikumar, Umesh Vashisht, Mohd Adil for coordinating the annotation process, and Weicheng Kuo who helped us with running Mask2CAD on our dataset. ieee_fullname
http://arxiv.org/abs/2306.12327v1
20230621151117
Learning the galaxy-environment connection with graph neural networks
[ "John F. Wu", "Christian Kragh Jespersen" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.CO", "astro-ph.GA" ]
[ Learning the galaxy-environment connection with graph neural networks equal* John F. Wu1,2 Christian Kragh Jespersen3 1Space Telescope Science Institute, 3700 San Martin Dr, Baltimore, MD 21218 2Johns Hopkins University, 3400 N. Charles St, Baltimore, MD 21218 3Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA John F. [email protected] Galaxy-Halo Connection, Graph Neural Networks, Galaxy Evolution, Cosmological Simulations 0.3in ] Galaxies co-evolve with their host dark matter halos. Models of the galaxy-halo connection, calibrated using cosmological hydrodynamic simulations, can be used to populate dark matter halo catalogs with galaxies. We present a new method for inferring baryonic properties from dark matter subhalo properties using message-passing graph neural networks (GNNs). After training on subhalo catalog data from the Illustris TNG300-1 hydrodynamic simulation, our GNN can infer stellar mass from the host and neighboring subhalo positions, kinematics, masses, and maximum circular velocities. We find that GNNs can also robustly estimate stellar mass from subhalo properties in 2d projection. While other methods typically model the galaxy-halo connection in isolation, our GNN incorporates information from galaxy environments, leading to more accurate stellar mass inference. § INTRODUCTION In the current ΛCDM paradigm of hierarchical galaxy formation, the galaxy-halo connection is crucial for understanding how galaxies form and evolve, and for constraining the small-scale clustering of matter <cit.>. Techniques for modeling the co-evolution of galaxies and dark matter range from simple, non-parametric approaches to full-physics magnetohydrodynamic simulations which require >10^8 CPU hours of computation <cit.>. Detailed simulations contribute important insights into galaxy formation, but due to their complexity and heavy computational costs, they are hard to analyze and cannot be performed for cosmologically significant volumes. Machine learning (ML) is a natural option for making progress on both of these problems. We present an equivariant Graph Neural Network (GNN), which takes as its input a graph composed of halos linked on a linking scale of 5 Mpc, and predicts baryonic properties. The GNN incorporates the effects of a galaxy's environment, thereby improving the prediction of its baryonic properties compared to traditional methods. We are also able to train a network on the Illustris TNG300-1 box in 10 minutes on a single NVIDIA A10G GPU; inference takes one second. In this work, we focus on estimating stellar mass from a catalog of subhalo positions, velocities, M_ halo, and V_ max. § RELATED WORK The connection between galaxies and their dark matter halos has been characterized via abundance matching or halo occupation distribution (HOD) models of central halos <cit.>, conditional luminosity or mass functions <cit.>, subhalo abundance matching <cit.>, and empirical models of the galaxy-halo connection <cit.>. Several works have also attempted to perform abundance matching or paint baryons (i.e., stars) onto dark matter maps by using classical machine learning algorithms <cit.> and/or neural networks <cit.>. In general, these previous methods treat halo/galaxy systems as unrelated entities with no formation history. To rectify this, <cit.> construct mathematical graphs to represent group halos, and train a GNN to learn the central halo mass, which was later applied to estimate the halo masses of local Group galaxies <cit.>. 
GNNs have also been successfully used to model the dependence of galaxy properties on merger history <cit.>, and generate synthetic galaxy catalogs <cit.>. In cosmology, several works have already demonstrated the representational power of GNNs, and have used it for simulation-based inference (likelihood-free inference). <cit.> employ GNNs to infer the cosmological parameters Ω_m and σ_8, using 3d galaxy positions and stellar properties from the CAMELS simulation suite <cit.>. <cit.> show that GNNs can optimally extract and compress catalog data for cosmological parameter inference. <cit.> and <cit.> train GNNs to infer cosmological parameters from dark matter-only simulations, and then validate their robustness on other N-body and hydrodynamic simulations. § COSMIC GRAPHS §.§ Simulation data We use z=0 subhalo catalogs <cit.> derived from the Illustris TNG300-1 hydrodynamic simulation <cit.>. We split the full cosmological box into 6^3 = 216 subvolumes in order to fit into 16 GB of memory, such that each subvolume is about (50 Mpc)^3. For consistency with the TNG simulations, we adopt the <cit.> cosmology and set H_0 = 67.74 km s^-1 Mpc^-1. We select unflagged subhalos that have more than 50 star particles, log(M_⋆ / M_⊙) > 9, and log(M_ halo / M_⊙) > 10. Due to cosmic variance, some subvolumes only have a few hundred subhalos, while others have thousands. In Figure <ref>, we show an example of a typical subvolume. §.§ Equivariant graph neural networks We construct a mathematical graph for each TNG300 subvolume, such as the one depicted in Figure <ref>. We designate 𝒱_i = (𝐱_i, 𝐯_i, M_ halo,i, V_ max,i) as the eight node features. Subhalos within a linking length of L = 5 Mpc are connected with edges. Subvolumes are padded by 2.5 Mpc on each side, such that subvolumes do not share connections that would be relevant for the linking length. We allow nodes to be connected to themselves (i.e., self-loops). On each edge ℰ_ij, we compute three features: the squared Euclidean distance d_ij ≡ ||𝐱_i - 𝐱_j||, the inner product between unit vectors 𝐞_i · 𝐞_j, and the inner product between unit vectors 𝐞_i · 𝐞_i-j, where unit vectors 𝐞_i ≡ (𝐱_i - 𝐱̄) / ||𝐱_i - 𝐱̄|| are defined using positions 𝐱_i relative to the centroid of the point cloud distribution 𝐱̄, and 𝐞_i-j is the unit vector in the direction of 𝐱_i - 𝐱_j. We use a message-passing GNN based on interaction networks <cit.>, similar to the model used by <cit.>. By design, the GNN is equivariant to permutations and invariant under the E(3) group action, i.e., invariant to rotations, reflections, and translations. For more details about equivariant GNNs, see the appendices of <cit.> and Sections 3.1 and 3.2 of <cit.>. We aggregate layer inputs at each node by max pooling over information from neighboring nodes.[We do not find significant improvements by using a concatenation of sum, max, mean, and variance aggregations, or by using learnable aggregation functions.] Our GNN has one set of fully connected layers with 256 latent channels and 128 hidden channels. We predict two quantities for each node, which correspond to the logarithmic stellar mass y_i ≡ log(M_⋆,i / M_⊙) and the logarithmic variance, log Σ_i (i.e., the logarithm of the squared uncertainty on stellar mass). §.§ Optimization Our loss function is composed of two terms: the mean squared error on the logarithmic stellar mass ||𝐲̂ - 𝐲||^2, and the squared difference between the predicted and measured variance ||Σ̂ - (𝐲̂ - 𝐲)^2||^2.
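To make this two-term objective concrete, a minimal sketch in PyTorch (the variable names are ours, and the network that produces the predictions is omitted):

```python
import torch

def gnn_loss(y_pred, logvar_pred, y_true):
    """Two-term objective: MSE on log stellar mass plus a term matching the predicted variance.

    y_pred:      predicted log10 stellar masses, shape (N,)
    logvar_pred: predicted log-variances log(Sigma_i), shape (N,)
    y_true:      simulated log10 stellar masses, shape (N,)
    """
    sq_err = (y_pred - y_true) ** 2
    mse_term = sq_err.mean()                               # ||y_hat - y||^2
    var_term = ((logvar_pred.exp() - sq_err) ** 2).mean()  # ||Sigma_hat - (y_hat - y)^2||^2
    # Taking the logarithm of each term before summing stabilizes training, as described next.
    return torch.log(mse_term) + torch.log(var_term)
```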
The second (variance) term ensures that the variance is appropriately estimated <cit.>. We stabilize training by taking the logarithm of each loss term before summing them. We monitor the loss as well as the root mean squared error (RMSE) on log (M_⋆/M_⊙). We perform k=6-fold cross-validation. For each fold, we train on 180 subvolumes and validate on 36 subvolumes, such that the validation set forms a ∼ 50 × 300 × 300 Mpc^3 subbox. We augment the training data set by adding random noise, sampled from a normal distribution with 10^-5 times the standard deviation, to each node variable. Based on a preliminary hyperparameter search, we implement a simple optimization schedule over a total of 1000 epochs using the optimizer <cit.> and a batch size of 36. We begin with a learning rate of 10^-2 and weight decay of 10^-4, and then decrease both by a factor of 5 at 500 epochs, and again decrease both by a factor of 5 at 750 epochs. We inspect the training and validation losses to ensure that the optimization is converged and does not overfit the training data. § RESULTS Overall, we find that the GNN can infer the stellar mass from subhalo properties with remarkable accuracy. We recover the galaxy stellar mass to within RMSE = 0.129 dex of its simulated value by using a GNN. The predictions are largely unbiased as a function of mass. §.§ Comparisons against baseline models In Figure <ref>, we compare the performance of different models trained and cross-validated on the same TNG300 data set. The panels show, from left to right: (a) a subhalo abundance matching (SHAM) model, (b) a random forest (RF) trained using M_ halo as input, (c) an RF trained using V_ max, (d) an RF trained using both M_ halo and V_ max, and (e) a GNN trained using 3d positions, 3d velocities, M_ halo, and V_ max. In Table <ref>, we list performance metrics for various RF and GNN models, including the RMSE, mean absolute error (MAE), normalized median absolute deviation (NMAD),[We define NMAD(𝐱) ≡ k · median(|𝐱 - median(𝐱)|), where k ≈ 1.4826 ensures that the NMAD and standard deviation are equal for a normally distributed 𝐱.] Pearson correlation coefficient (ρ), coefficient of determination (R^2), bias, and outlier fraction (>3× NMAD). The SHAM model constructs separate monotonic relationships between M_ halo or V_ max and M_⋆ for centrals and satellites. Another difference between the SHAM model and other approaches considered here is the former's explicit treatment of subhalo centrality. In order to facilitate an apples-to-apples comparison, we also train an abundance matching (AM) model that does not distinguish between satellites and centrals; however, the AM model performs considerably worse than the SHAM counterpart. We note that the AM and SHAM models are trained and evaluated on the same data set, so their performance metrics may be overinflated. We also train several RF models, which serve as reasonable proxies for AM or conditional luminosity function models <cit.>. By comparing panels (b) and (c), we observe that V_ max is more physically connected to M_⋆ than M_ halo, in agreement with previous findings (i.e., ; we find this to be true for the RF, AM, and SHAM models). An RF trained on both M_ halo and V_ max provides an even better reconstruction (RMSE = 0.148 dex). Ultimately, we find that the GNN strongly outperforms all baseline models. While the GNN does not distinguish between centrals and satellites, it may be able to learn whether a given subhalo is a central based on surrounding subhalo properties (see Section <ref>).
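For reference, the scatter statistics listed above can be computed as in the following sketch; we assume here that all statistics are taken over the prediction residuals, and the helper name is ours.

```python
import numpy as np

def summary_metrics(y_pred, y_true, k=1.4826):
    """Scatter statistics for predicted vs. simulated log stellar masses."""
    resid = y_pred - y_true
    nmad = k * np.median(np.abs(resid - np.median(resid)))      # NMAD with k ~= 1.4826
    return {
        "rmse": np.sqrt(np.mean(resid ** 2)),
        "mae": np.mean(np.abs(resid)),                           # mean absolute error
        "nmad": nmad,
        "rho": np.corrcoef(y_pred, y_true)[0, 1],                # Pearson correlation
        "r2": 1.0 - np.sum(resid ** 2) / np.sum((y_true - np.mean(y_true)) ** 2),
        "bias": np.mean(resid),
        "outlier_fraction": np.mean(np.abs(resid) > 3 * nmad),   # > 3x NMAD outliers
    }
```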
§.§ Centrals versus satellites Satellite dark matter halos are preferentially stripped relative to stars in a host halo's tidal field <cit.>. In Appendix <ref>, we show the stellar mass-halo mass relation for satellite and central galaxies in TNG300 (Figure <ref>). Indeed, we observe that satellite galaxies exhibit significantly more dispersion than centrals in the M_⋆–M_ halo relation. Our 3d GNN is also worse at predicting log(M_⋆/M_⊙) for satellites than for centrals (see bottom two rows of Table <ref>), but this is due to the inherently larger scatter in the satellite-halo relation. We find that there is an overall negative bias for satellites and a positive bias for centrals, because the GNN must learn separate offset relations for both centrals and satellites. §.§ Cosmic substructure in projection We also construct cosmic graphs in projection, i.e. projected coordinates x_1 and x_2, and radial velocity v_3, instead of the full phase space information (see Appendix <ref>). This 2d GNN model achieves RMSE = 0.135 dex scatter, which still exceeds the performance of the best RF estimator (see Table <ref>). Because the 2d GNN encodes projected large-scale structure information, it outperforms the RF models that can only learn isolated subhalo information. § DISCUSSION We have presented a novel method for populating dark matter subhalos with galaxy stellar masses. Mathematical graphs combine individual halo properties and environmental parameters in an equivariant representation, resulting in robust predictions for both central and satellite galaxies. As shown in Table <ref> and Figure <ref>, the cosmic graphs outperform random forests trained on V_ max and M_ halo. For galaxies with log(M_⋆ / M_⊙) ≥ 9 and log(M_ halo / M_⊙) ≥ 10, we recover the logarithmic stellar mass to within a root mean squared error (RMSE) of 0.129 dex. §.§ Inductive biases of GNNs We note that previous works have employed convolutional neural networks (CNNs) for painting stars onto dark matter maps <cit.>. Unlike abundance matching models and RFs, CNNs are able to represent local spatial information. However, CNNs and GNNs have different inductive biases: CNNs are well-suited for representing fields discretized onto a Cartesian grid, while GNNs are well-suited for representing objects and relationships between them. Galaxies have small sizes (∼kpc) relative to their typical separations (∼Mpc), and they interact with each other (and their surrounding media) through multiple physical mechanisms (e.g., gravitational attraction, tides, ram pressure, etc.). Therefore, cosmic structures naturally conform to a graphical representation, motivating our use of GNNs in this work. §.§ Galaxy environments We note that a GNN with no edges except self-loops would essentially model the galaxy-halo connection in isolation; all environmental information is contained and passed along the edges. However, if we remove self-loops from the GNN, then the GNN is still able to infer log (M_⋆/M_⊙) to within RMSE ∼ 0.145 dex. A GNN without self-loops must estimate galaxy stellar mass solely from neighboring halo information, which demonstrates that galaxy environments are informative for modeling the galaxy-halo connection. We find that the GNN with a max-pooling aggregation function achieves 0.001 dex lower RMSE than a GNN with sum-pooling. This result suggests that the GNN selects the largest value for some combination of M_ halo, V_ max, and distance to neighboring subhalos in order to best make predictions.
We can speculatively interpret this as evidence that the largest and most nearby subhalo is most informative to a GNN. The largest subhalo might dominate environmental effects (e.g. tides and ram pressure) and control a given subhalo's stellar mass. Meanwhile, the summed information should capture all of the forces, and we expect it to be more robust or transferable across domains. This interpretation requires addition testing and an exhaustive hyperparameter search over GNN architecture and optimization procedures, which we aim to do in a follow-up work.[The linking length is a particularly important hyperparameter. In our preliminary tests, we have found 5 Mpc to give good results.] §.§ Applications to observations The strong performance of 2d GNNs (<ref>) is promising for facilitating comparisons to observations beyond the Local Group, where we can only reliably measure projected positions and line-of-sight velocities rather than full phase space information. Our method can be used to quickly estimate galaxy properties of constrained N-body <cit.> and Gpc-scale N-body simulated volumes <cit.> for comparison with wide-area galaxy surveys in the low-redshift Universe <cit.>. §.§ Limitations and caveats While we have shown that the GNN outperforms other methods, this demonstration does not definitively prove that GNNs are exploiting environmental information. Indeed, we have used a linking length of 5 Mpc, but this hyperparameter may be suboptimal and should be tuned. It is also possible that intrinsic scatter imposes a RMSE floor <cit.>, although GNN results using merger trees have shown that galaxy stellar mass can be recovered to even lower scatter <cit.>. Finally, it may be that merger history is more important than environmental information, and that the clustering information learned by a GNN only incrementally improves performance relative to other approaches. Our results will depend on choice of halo finder, i.e. if we were to use an alternative to the algorithm (e.g. ; ). We have not tested our results using different halo finding tools, and it is unclear whether a GNN trained using one halo finder catalog will properly generalize to another catalog produced by a different halo finder. We also note that our results, while promising, must be tested on dark matter only simulations with halo catalogs matched to the hydrodynamic simulation catalogs before we can rely on GNNs to paint galaxies onto dark matter subhalos. Additionally, domain adaptation will likely be needed to ensure simulated results can transfer to other simulations (e.g., while varying cosmological parameters; ) or to observations <cit.>. As a preliminary test, we repeat our experiment by training on TNG300 and validating on TNG50 data, and vice versa; in both cases the results are poor (> 0.2 dex). However, by training on a subset both simulations, we can recover log(M_⋆/M_⊙) to ∼ 0.13 dex for TNG300 and ∼ 0.14 dex for TNG50 <cit.>. This test suggests that cross-domain applications, such as transferring GNN results from simulations to observations, will necessitate some form of domain adaptation. § SOFTWARE AND DATA Our code is completely public on Github: <https://github.com/jwuphysics/halo-gnns/tree/halos-to-stars>. We have used the following software and tools: <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. We only use public simulation data from Illustris, which can be downloaded from <https://www.tng-project.org/data/>. 
§ ACKNOWLEDGMENTS JFW and CKJ thank Peter Behroozi, Haley Bowden, Francisco Villaescua-Navarro, Tjitske Starkenburg, and Risa Wechsler for valuable discussions that sharpened this work. We also thank the two anonymous reviewers who provided excellent comments and suggestions that improved this manuscript. This research has made use of NASA’s Astrophysics Data System Bibliographic Services. The authors are grateful to the Kavli Institute for Theoretical Physics “Building a Physical Understanding of Galaxy Evolution with Data-driven Astronomy” program, where this work began. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. icml2023 § THE STELLAR MASS-HALO MASS RELATION FOR SATELLITES AND CENTRALS In Figure <ref>, we show halo masses and stellar masses for central galaxies (red) and satellites (blue) from the TNG300 catalogs. Our GNN is able to learn the offset relationships for both central and satellite subhalos. § COSMIC GRAPHS IN PROJECTED COORDINATES In <ref>, we trained a GNN to learn the galaxy-halo connection using projected positions and radial velocity, in addition to M_ halo and V_ max. In Figure <ref>, we show a projected version of the subvolume that appeared in Figure <ref>.
http://arxiv.org/abs/2306.02320v1
20230604101054
Arbitrary Few Parameters are Good Enough for Adapting Large-scale Pre-trained Language Models
[ "Yusheng Su", "Chi-Min Chan", "Jiali Cheng", "Yujia Qin", "Yankai Lin", "Shengding Hu", "Zonghan Yang", "Ning Ding", "Zhiyuan Liu", "Maosong Sun" ]
cs.CL
[ "cs.CL", "cs.AI" ]
LoRa Backscatter Communications: Temporal, Spectral, and Error Performance Analysis Ganghui Lin, Graduate Student Member, IEEE Ahmed Elzanaty, Senior Member, IEEE, and Mohamed-Slim Alouini, Fellow, IEEE Ganghui Lin and Mohamed-Slim Alouini are with the Division of Computer, Electrical and Mathematical Sciences and Engineering, King Abdullah University of Science and Technology, Thuwal 23955-6900, Saudi Arabia (e-mail: [email protected], [email protected]). A. Elzanaty is with the 5GIC & 6GIC, Institute for Communication Systems (ICS), University of Surrey, Guildford, GU2 7XH, United Kingdom (e-mail: [email protected]). The source codes can be accessed at https://github.com/SlinGovie/LoRa-Backscatter-Performance-Analysis. July 31, 2023 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Parameter-efficient tuning (PET) methods can effectively drive extremely large pre-trained language models (PLMs) by only training minimal parameters. Different methods utilize different manually designed modules. In a small PLM, there are usually noticeable performance differences among methods. Nevertheless, when a PLM's scale grows up to tens of billions of parameters, all methods achieve almost the same performance and even perform on par with the full-parameter fine-tuning method. Hence, we hypothesize that model scaling can mitigate the design differences (the module structures and the number of trainable parameters) among methods. To study this hypothesis, we introduce a more flexible method – arbitrary () method – to be compatible with arbitrary module structures and any number of trainable parameters. Then, we experiment on 11 NLP tasks of 5 types and 2 representative PLMs. From our investigations, we find that the model scaling (1) mitigates the effects of the arbitrary module structure on the performance of tuning methods, and (2) enables the tuning methods to optimize fewer parameters to achieve the full-parameter fine-tuning performance. Intriguingly, we also observe that all tuning methods require almost the same number of trainable parameters to drive PLMs. We discuss this phenomenon and the above two findings collectively from optimization perspectives to fathom the mechanisms behind them. These conclusions not only demonstrate the positive impact of model scaling on tuning methods but disclose its mechanisms, which help us design more effective and efficient tuning methods on larger-scale PLMs. § INTRODUCTION Pre-trained language models (PLMs), such as GPT <cit.>, BERT <cit.>, and T5 <cit.>, have achieved great success on various natural language processing (NLP) tasks. Despite their effectiveness, fine-tuning (FT) these large-scale PLMs with full parameters incurs both unaffordable computational and storage costs. 
To solve this problem, researchers have proposed a series of parameter-efficient tuning () methods <cit.> which only update an assigned trainable module consisting of minimal parameters while freezing the rest parameters in a PLM during model adaptation. Although these existing representative methods can reduce computational and storage costs, there are usually noticeable performance gaps among these representative methods on downstream tasks. Intriguingly, we find that when a PLM's scale grows up to tens of billions of parameters, the performance gap among different methods shrinks and all methods can almost achieve full-parameter performance, as shown in fig:the_power_of_scale. These findings are interesting and worth exploring because the existing representative methods are designed with disparate philosophies, e.g., trainable modules that are composed of different module structures and numbers of trainable parameters. Hence, we hypothesize that model scaling mitigates the effects of the design differences among the methods on the performance. To validate this hypothesis, we further conduct two lines of ablation analyses: =-4pt (A1) The impact of model scaling on the module structure. (A2) The impact of model scaling on the number of trainable parameters. However, only investigating the four representative methods (fig:the_power_of_scale) might be insufficient to cover enough variations of module structure for ablation analyses (A1). Besides, the trainable modules of these four methods are limited to being composed of layer-level tensors or matrices, which is hard to precisely control the number of trainable parameters at the fine-grained (parameter) level in ablation analyses (A2). To facilitate the ablation analyses, we develop a more flexible method - Arbitrary Parameter-Efficient Tuning () method (<ref>) - to be compatible with arbitrary modules and any number of trainable parameters. In analysis (A1), we compare the performance of methods with different module structures under the same number of trainable parameters. From the experimental results, we find although methods with diverse structures require different training steps to reach convergence under the same training configurations, they can eventually achieve almost comparable performance on larger-scale models. This indicates that the model scaling can mitigate the effects caused by the structure difference on the performance but not on the convergence speed. In analysis (A2), we compare the performance of the same methods under different numbers of trainable parameters. From the experimental results, we find that the model scaling cannot mitigate the effect of the number of trainable parameters on tuning methods. Besides, we find two interesting phenomena when the number of trainable parameters reaches two thresholds: high parameter threshold and low parameter threshold. (1) When the number of trainable parameters equals the high threshold, all methods can achieve the full-parameter fine-tuning performance, and the high parameter threshold tends to be smaller on the larger models. Namely, we can optimize fewer necessary trained parameters for tuning methods to achieve full-parameter fine-tuning performance on the larger models. (2) On the other hand, when the number of trainable parameters exceeds the low parameter threshold, all methods outperform random guess performance. Furthermore, we find that the low parameter thresholds of all methods on the same models are almost the same, even for different tasks. 
This suggests that tuning methods require the same number of trainable parameters to drive the same PLMs. In summary, we design a more flexible methods - methods - to conduct the extensive ablation analyses and reveal the impact of model scaling on the designing differences among the tuning methods, e.g., (1) module structures (<ref>) and (2) number of trainable parameters (<ref>). (3) Furthermore, we discuss the further findings in ablation analyses from the perspective of optimization (<ref>). We hope these results not only provide guidance for designing tuning methods on the larger models but also encourage more researchers to explore the impact of model scaling on tuning methods from the theoretical perspective. § RELATED WORK Parameter-Efficient Tuning () Methods With larger PLMs continuously being developed, fine-tuning all of the parameters and storing the adapted weights become increasingly cumbersome. To address the issue, researchers propose methods which keep most of the parameters of PLMs frozen and optimize only a trainable module consisting of a few parameters during downstream adaptation. Over the recent years, many different designs of methods have emerged. For instance, some methods insert the external trainable modules after the feed-forward and attention layers in a PLM <cit.>; others prepend the trainable modules into attention layers <cit.> or the embedding layer <cit.>. Another line of method selects the existing parameters in a PLM <cit.> as the trainable module to optimize. To fathom the mechanisms behind the methods, <cit.> formalize methods as a unified framework to study the connections among methods. <cit.> also conduct the same study and further indicate that the optimization of different methods can be unified in a similar subspace. In this paper, we further explicitly demonstrate that the optimization of different methods requires a close number of trainable parameters (<ref>). The Power of Model Scaling With model size scaling, PLMs emerge many abilities, such as reasoning ability <cit.>, human-like behavior <cit.>, and can achieve state-of-the-art results on many understanding and generation tasks <cit.>. Besides, some researchers find that performing some methods <cit.> on large-scale models can almost achieve the full-parameter fine-tuning performance. In this paper, we verify this phenomenon is general in the existing methods (<ref>) and further study the impact of model scaling on the design differences among parameter-efficient methods (<ref>). Furthermore, we will explain the mechanism behind our findings from the optimization perspective (<ref>). § PRELIMINARY In this section, we first introduce the Transformer framework (<ref>) and the most representative parameter-efficient tuning methods (<ref>). §.§ Transformer Framework The Transformer model <cit.> is the mainstream architecture for most powerful PLMs. The model is stacked of L blocks, each of which consists of a sequence of layers, including self-attention and feed-forward network. During the forward pass through each block, the input hidden state is applied with the sequence of layers. For simplicity, we formalize the transformation of each layer as 𝐡^out=f(𝐡^in). Under the layer as the operator f, the input hidden state 𝐡^in∈ℝ^s × d_in is transformed into the output hidden state 𝐡^out∈ℝ^s × d_out, where s is the input length and d_in, d_out are dimensions. §.§ Parameter Efficient Tuning (PET) Different methods[More implementation details are left in <ref>] are equipped with diverse modules θ. 
These modules are composed of trainable parameters 𝐖 that modify the original layers and the corresponding transformations in PLMs. To make comparisons, we follow the unified view <cit.> to re-frame the transformations of all methods as the modifications Δ𝐡 of specific hidden states in the corresponding PLM's layers as shown in table:unified_view, 𝐡^out = f(𝐡^in)+Δ𝐡. In the training process, given a downstream task 𝒟={X,Y}, we only optimize all trainable parameters of the module θ for each method to generate desired outputs Y of a downstream task while freezing the rest of the parameters Φ in a PLM ℳ, as shown in fig:a-pet_methods. Formally, the training objective is to minimize ℒ as follows: min_θℒ(ℳ_(Φ,θ)(X), Y). § EXPERIMENT AND ANALYSIS To explore the impact of model scaling on these methods, we first introduce the investigated tasks, PLMs, and settings of the existing representative methods in the experiments (<ref>), and then report the main experimental results (<ref>). §.§ Experimental Settings Investigated NLP Tasks We investigate 11 tasks, which can be divided into 5 categories: (1) Sentiment Analysis (SA), including SST-2 <cit.>, IMDB <cit.>, and Rotten Tomatoes <cit.>; (2) Natural Language Inference (NLI), including MNLI <cit.>, QNLI <cit.>, and RTE <cit.>; (3) Paraphrase Identification (PI), including MRPC <cit.> and QQP <cit.>; (4) Question Answering (QA), including NQ-Open <cit.>; (5) Summarization (SUM), including SAMSum <cit.> and Multi-News <cit.>. More details are in <ref>. Investigated PLMs We will experiment on two series of PLM backbones: BERT <cit.> and T5 <cit.> representing masked language models and sequence-to-sequence models, respectively. Since masked language models are typically applied on discriminative tasks, we only investigate SA, PI, and NLI categories of tasks on the BERT backbone. Differently, sequence-to-sequence models have no fixed-length output limitation; thus, we investigate all tasks on the T5 backbone. Training Details of Methods We choose four representative methods, Prompt <cit.>, BitFit <cit.>, Adapter <cit.>, and LoRA <cit.>, to conduct analysis experiments. To guarantee the performance of methods, we keep the same design of each method, including the module structure and the number of trainable parameters, as reported in the original paper. Besides, we train each method on 11 tasks with 3 different random seeds and report their average performance. Details for the training configurations are in <ref>. §.§ Preliminary Experiments To observe the impact of the model scaling on methods, we range PLMs in ascending order of the model scale and report the performance of methods on each PLM. Model scaling impact on methods Results are reported in fig:main_exp. First, we can observe that the methods have noticeable performance gaps between each other on the general scale of models (and in the sub-figure ; and in the sub-figure ). This phenomenon is intuitive and shows the critical impact of design differences (the module structure and the number of trainable parameters) on the performance of methods, as it has been consistently found in many prior works <cit.>. However, we find that (1) as the model scaling increases (from to in the sub-figure ; from to in the sub-figure ), the largest and smallest performance gaps among methods all become minor (from 6%~4% to 4%~0% on T5 ; from 11%~2% to 6%~1% on BERT ); (2) all methods can even achieve the full-parameter fine-tuning performance when the model scale grows up to tens of billions parameters, i.e., . 
These two findings imply that the model scaling can mitigate the impact of the design differences (module structure and number of trainable parameters) on the performance of all tuning methods. § MAIN EXPERIMENTS To further verify whether the model scaling will respectively remove the effects of the above differences on methods, we conduct two ablations: the model scaling impact on the (1) module structure and (2) number of trainable parameters. However, only investigating the above four respective methods is insufficient to cover enough variations of module structure for ablation study (1). Besides, their trainable modules are composed of the layer-level weights as shown in fig:a-pet_methods. This limitation makes us hard to preciously control the number of trainable parameters at the fine-grained (parameter level) in (2). Hence, we develop a more flexible method, Arbitrary Parameter-Efficient Tuning () method. Its trainable module can be arbitrary structure (<ref>) that facilitates us to explore various module structures in the ablation study (<ref>) and easier control the number of trainable parameters in the ablation study of trainable parameters (<ref>). §.§ Arbitrarily Few Parameter Tuning () Similar to methods, the method can be re-framed in the unified form as shown in table:unified_view. Its trainable module θ is also composed of L trainable weights 𝐖, which can be expressed as θ= {𝐖_1, 𝐖_2, ..., 𝐖_L}. Differently, each trainable weight 𝐖 of the method is generated by 𝐖 = 𝐖⊙ m, where 𝐖∈ℝ^i× j is a weight, m∈{0,1}^i× j is a given binary pruning mask, and ⊙ is a Hadamard product. The i and j are dimensions. The trainable weight 𝐖 will be added or plugged into the PLM to modify the original PLM layers and the corresponding transformations[More implementation details are left in <ref>]. Furthermore, 𝐖 can be at any position of the PLM, and we can control its parameter number and distribution. Based on the parameter distribution as shown in fig:distribute_structure, we have two methods: The trainable parameters in 𝐖 are adjacent in rows or columns. The trainable parameters in 𝐖 are discrete. In the training process, we also follow Equation (<ref>) only to optimize θ= {𝐖_1, 𝐖_2, ..., 𝐖_L} of the method while freezing the rest of the parameters in a PLM. It is worth noting that the trainable modules of methods can be any arbitrary structure that is composed of the corresponding number of trainable parameters. In this sense, we can see the previously introduced methods as special cases of . §.§ The Impact of Model Scaling on The Module Structure To conduct the ablation study of module structure, we first freeze the other substantial factors (the number of trainable parameters) that might affect the performance of tuning methods in the experiments. Since we are hard to control the number of trainable parameters of the four representative methods, we set the number of trainable parameters of methods equal to four methods’ and then compare their performance in the same group (bar: h) as shown in each sub-graph of the fig:structure_and_position. In the same group, although these tuning methods have disparate module structures, they have the same number of trainable parameters (). Performance Comparison As shown in fig:structure_and_position, there are four groups of comparisons in each sub-graph. We can observe that as a PLM size scales (T5: from to ; BERT: from to ), the performance gaps between all tuning methods (, ) in the same group shrink. 
Although this phenomenon is also in both series of PLMs (T5 and BERT), the performance gaps become much smaller on . We argue that this is because the larger model has more powerful effectiveness in mitigating the impact of the module structure on the performance. In addition, we also find that even though tuning methods have different numbers of trainable parameters in the four groups (bar: h), they all almost achieve the same performance as well on the large-scale model, i.e., , as shown in . We will provide more discussions about this finding and explain the reasons behind this phenomenon. Convergence Comparison Although two methods can achieve the same final performance on the larger models, we find that and methods still require different training steps to achieve convergence. This phenomenon is not only on the smaller models (and ) but also on the larger models as shown in fig:apt_learning_rate_and_training_steps. Hence, we can infer that (1) the model scaling can only mitigate the effects of the structure on the performance, but not on the convergence speed; (2) the module structure with adjacent trainable parameters might be a better choice. We will discuss these inferences in <ref>. §.§ The Impact of Model Scaling on The Number of Trainable Parameters In this section, given the same tuning methods under different numbers of trainable parameters, we observe their performance to study the ablation. From the reported results in fig:ratio_threshold, we can find that (1) on the smaller models, e.g., (- - - , - - -), (- - - , - - -), when the trainable parameters of tuning methods are fewer than a certain number, the performance will drop to randomly guess performance; (2) similarly, this phenomenon still holds on the larger models, (—– , —–), (—– , —–). Strictly speaking, these findings demonstrate that the model scaling cannot sufficiently remove the effect of the number of trainable parameters on the performance of tuning methods. Interestingly, we find two parameter thresholds of trainable parameters on all models and name them as low parameter threshold of necessary trained parameters and high parameter threshold of necessary trained parameters, respectively. For a tuning method, when its trainable parameters are more than low parameter threshold, the tuning method can exceed random performance (e.g., 0% on T5 and 1× 100/Number of label types% on BERT); when the trainable parameters are more than high parameter threshold, the tuning method can almost achieve the full-parameter fine-tuning (FT) performance. Furthermore, we also find that the model scaling affects the two parameter thresholds. Hence, we explore this phenomenon in the following paragraphs. High threshold of necessary trained parameters We observe the high threshold of each tuning method in the sub-graph (SST2) of fig:ratio_threshold. From the experimental results, we find that the high threshold of the larger model, i.e., is always lower than the high threshold of the smaller model, i.e., . Except in the sub-graph (SST2), we observe the same phenomenon over all tasks (, , ), and on two series of models (T5 and BERT) as shown in sub-graphs (, , , , ). Hence, we can conclude that the model scaling enables the tuning methods to train fewer necessary parameters to achieve full-parameter performance. This conclusion can intuitively disclose the reason why tuning methods (and methods) perform equally and can almost achieve the full-parameter performance on larger models () in fig:main_exp and fig:structure_and_position. 
That is because the numbers of trainable parameters of overall tuning methods in fig:main_exp and fig:structure_and_position exceed the high parameter thresholds on ; hence, they all can achieve the full-parameter fine-tuning performance. Low threshold of necessary trained parameters From the results, as shown in the sub-graph of fig:ratio_threshold, we find that tuning methods will exceed the random performance (0% on T5; 50% on BERT) and immediately reach the 80~90% full-parameter fine-tuning performance when the trainable parameters are more than low thresholds. However, the low thresholds are relatively higher on . Namely, tuning methods require more trained parameters to exceed the random performance. This phenomenon is consistent over all tasks on two series of models. Hence, we can infer that the model scaling cannot reduce the number of necessary trained parameters to drive PLMs to perform downstream tasks. Furthermore, it is worth noting that the low parameter thresholds of all tuning methods almost lie in the same range on the same models. Specifically, the range of low thresholds are in [, ] on , [, ] on , [, ] on , and [, ] on . We will explain this phenomenon from the optimization perspective in <ref>. § DISCUSSION & CONCLUSION The objectives of all tuning methods (, ) can be expressed as min_θℒ(ℳ_(Φ,θ)(X), Y) as introduced in Equation (<ref>), where θ is a trainable module. The training module θ of different tuning methods is composed of different structures and the numbers of trainable parameters. In this paper, we explore the impact of model scaling on structures and the trainable parameters of training module θ in different tuning methods. We find that the model scaling can (1) mitigate the effects of module structures on the performance (<ref>) and (2) make tuning methods optimize fewer trainable parameters to achieve full-parameter fine-tuning performance (<ref>). To further fathom the reasons for these phenomena, we will explain them from the optimization perspective. (3) Besides, we also observe that all tuning methods can optimize almost the same number of minimal trained parameters to exceed random guessing performance on the same models (<ref>). Although phenomenon (3) is not caused by model scaling, we can also explain it from the optimization perspective. Hence, we together discuss it and the above two findings (1) and (2) in the following paragraphs. Why can model scaling mitigate the effects of the module structure of tuning methods? From the optimal control perspective, a trainable module (θ) of a tuning method can be seen as a controller <cit.> to drive PLMs towards downstream tasks. As the model scale increases, the larger model has higher parameter redundancy <cit.>, allowing arbitrary selection of trainable parameters for tuning without greatly degrading performance <cit.>; thus, controllers (modules) might have higher degrees of freedom. This might explain why the differences among structures of the trainable module (θ) have less impact such that all tuning methods can achieve the same performance on the larger models. It is worth noting that even though tuning methods can achieve the same performance, we found the module structures will affect converge speeds. Thus, finding a better module structure to improve the converge speeds for tuning methods is a direction worthy of exploring. Why can model scaling leverage the fewer trainable parameters to achieve fine-tuning performance? 
Training θ to steer a PLM towards downstream NLP tasks can be seen as adaptations. From the perspective of representation space, the adaptations of the tuning methods (, , and FT methods) can be re-parameterized into a unified low dimensional subspace <cit.>. <cit.> further demonstrate that adaptation on a larger PLM can be re-parameterized into the lower dimensional space; this implicitly explains why tuning methods can optimize fewer parameters on larger-scale models, e.g., , to meet the full-parameter fine-tuning performance on tasks. Why can tuning methods optimize the near numbers of trainable parameters to exceed random guessing? As stated above, the adaptations of the tuning methods can be re-parameterized into a unified subspace. <cit.> shows that this low dimensional subspace is shared among all NLP tasks for the same tuning methods. <cit.> further suggests that this subspace is also shared among various tuning methods. This might implicitly explain why all tuning methods can train the near numbers of necessary trained parameters to exceed the random guessing performance on the same models, even for the different tasks (<ref>). We hope that these discussions disclose the mechanisms behind the model scaling impact on tuning methods and can inspire more research toward exploring the advantages of the model scaling. § LIMITATIONS This paper might have some possible limitations as follows: (1) we only explore the effects of the scaling law on performance. There might be other research points worth exploring, such as the power of model scale to convergence speed; (2) we study the power of model scale with comprehensive empirical experiments and explain the findings from the optimization perspective. There might be more theoretical proofs to explain these exciting findings. acl_natbib § TASK AND DATASET We use various NLP tasks to evaluate the methods, which can be divided into the following 5 categories: Sentiment Analysis (SA) SA tasks evaluate if a model can correctly predict the sentiment labels of an input sentence. In this paper, we choose SST-2 <cit.>, IMDB <cit.>, and Rotten Tomatoes <cit.>. Natural Language Inference (NLI) NLI tasks evaluate a model's ability to correctly classify if a hypothesis can be entailed or not given a premise. In this paper, we choose MNLI <cit.>, QNLI <cit.>, and RTE <cit.>. Paraphrase Identification (PI) PI tasks evaluate if a model can correctly identify paraphrases, which means two sentences are identical in semantic meaning. In this paper, we choose MRPC <cit.>, and QQP <cit.>. Question Answering (QA) QA tasks evaluate a model's ability to answer questions. Context may be present. In this paper, we choose NQ-Open <cit.>, an open-world QA dataset without context. Summarization (SUM) SUM tasks evaluate a model's ability to summarize a long paragraph into a shorter abstract without loosing the semantics of the original text. In this paper, we choose SamSUM <cit.>, and Multi-News <cit.> in our experiments. § PARAMETER-EFFICIENT TUNING () METHODS Here, we first recap the PLM (transformer) layer. Then, we describe the detail and training configurations of all the methods mentioned in Table <ref>. §.§ Transformer Architecture A PLM is generally a stack of multiple Transformer layers, each composed of a multi-headed attention and a feed-forward network. The multi-headed attention contains h attention heads working in parallel. 
Specifically, given an input 𝐗∈ℝ^n × d, the i-th attention head works as follows: 𝐡_i = softmax((𝐗 𝐖^i_q) (𝐗 𝐖^i_k)^T/√(d / h)) (𝐗 𝐖^i_v), where n is the sequence length, d is the hidden dimension, 𝐖^i_q ∈ℝ^d × d/h is the query projection, 𝐖^i_k ∈ℝ^d × d/h is the key projection, and 𝐖^i_v ∈ℝ^d × d/h is the value projection. The output from each attention head will be concatenated and further transformed by 𝐖_o ∈ℝ^d × d and be denoted as: 𝐡_ = (𝐡_1, 𝐡_2, ..., 𝐡_h) 𝐖_o, where 𝐡_∈ℝ^n × d is the output hidden state of the multi-headed attention layer. After that, 𝐡 will be fed into a two-layer feed-forward network 𝐡_ = σ(𝐡 𝐖_1 + 𝐛_1) 𝐖_2 + 𝐛_2, where 𝐖_1 ∈ℝ^d × d_m, 𝐖_2 ∈ℝ^d_m × d, 𝐛_1 ∈ℝ^d_m, 𝐛_2 ∈ℝ^d, and d_m > d is an integer. During the forward pass through each (transformer) block, the input hidden state is applied with the sequence of layers. For simplicity, we formalize the transformation of each layer as 𝐡^out=f(𝐡^in). Under the layer as the operator f, the input hidden state 𝐡^in∈ℝ^n × d is transformed into the output hidden state 𝐡^out∈ℝ^n × d, where n is the input length, and d is the dimension. §.§ Implementation Details of Methods We follow the unified view <cit.> to re-frame the transformations of all methods as the modifications Δ𝐡 of specific hidden states in the corresponding PLM's layers as: 𝐡^out = f(𝐡^in)+Δ𝐡. Prompt Prompt-tuning <cit.> prepends N_p trainable soft tokens, i.e., embeddings, to the input sentences and asks the model to predict the probability of the next word. During training, only the newly added embeddings are optimized and the backbone model is frozen. Given an input embedding 𝐗∈ℝ^n × d, prompt-tuning can be seen as performing the following operation: 𝐡^out = f([𝐗;𝐖_prompt]), where 𝐖_prompt∈ℝ^N_p × d, 𝐗∈ℝ^N × d, [;] means "concatenate", d is the dimension, and N, N_p are the sequence lengths. The N_p = 100 and N_p = 256. To re-frame Prompt into the form of Equation <ref>, we concatenate a zero weight 𝐖_∈{0}^N × d and 𝐖_prompt∈ℝ^N_p × d, which can be denoted as 𝐖_prompt^'∈ℝ^(N+N_p) × d. Hence, the Equation <ref> becomes 𝐡^out = f(𝐗+𝐖_prompt^') and we can further reform it as the unified form: 𝐡^out = f(𝐗) + f(𝐖_prompt^'). BitFit BitFit <cit.> is a method that only tunes all the bias terms 𝐖_b ∈ℝ^d in the PLM, which lie in the self-attention and layer norm layers. For a Transformer layer f, BitFit performs the same operation as the normal Transformer model with no modification: 𝐡^out = f(𝐡^in) + 𝐖_b. LoRA LoRA <cit.> is a method that adapts a PLM in a low-rank space. It down-projects the attention weights into a lower dimension and up-projects them back to the original dimension. Only these projection weights are optimized. For a Transformer layer f, LoRA computes the output hidden states as follows: 𝐡^out = f(𝐡^in) + α𝐡^in𝐖_down𝐖_up , where 𝐖_down∈ℝ^d × r denotes the down projection matrix, 𝐖_up∈ℝ^r × d denotes the up projection matrix, r denotes the rank and is set to 8, and α = 16. Adapter Adapter <cit.> is a method that only tunes the inserted adapter modules, which consist of down projection, non-linear transformation, up projection, and a skip-connection. For each existing Transformer layer in a PLM, the adapter modules are inserted at two locations: (1) after the first feed-forward layer, and (2) after the two consecutive feed-forward layers. During training, only the adapter modules are optimized and the rest of the PLM is frozen.
For a Transformer layer f, Adapter can be seen as performing the following operation: 𝐡^out = f(𝐡^in) + σ(f(𝐡^in) 𝐖_down) 𝐖_up , where σ is a non-linear activation function, 𝐖_down∈ℝ^d × r_ denotes the down projection matrix, 𝐖_down∈ℝ^d × r_ denotes the down projection matrix, r_ = 24 denotes the bottleneck dimension. §.§ Training Configurations of Methods The trainable module of a method θ is composed of L trainable weights 𝐖 (all trainable weights) of the specific method, which can be expressed as θ= {𝐖_1, 𝐖_2, ..., 𝐖_L}. We also follow Equation (<ref>) to train the method. During training, we only optimize θ while freezing the rest of the parameters in the PLM. We adopt a batch size of 32 and a learning rate of 3e-4 with no warm-up for most of the models and tasks. However, here are two special cases soft-prompts and RTE datasets. Since soft-prompts are hard to optimize, we adopt a learning rate of 3e-2. Besides, due to the small size of RTE dataset, we set the learning rate for all methods as 5e-5 on RTE dataset. The maximum input length is 128 for single sentence tasks (SA) and 256 for multi-sentence tasks (NLI, PI, QA, SUM). The maximum generation length is 1 for classification tasks (SA, NLI, PI), 64 for Multi-News, and 128 for SAMSum. § ARBITRARY FEW PARAMETER () METHODS We introduce a more flexible method, Arbitrary Parameter-Efficient Tuning () method. Its trainable module can be arbitrary structure that facilitates us to explore various module structures and easier control the number of trainable parameters. §.§ Implementation Details of Methods As we previously introduced in <ref>, the trainable module of the method is composed of trainable weights. Each trainable weight 𝐖 of the method is generated by 𝐖 = 𝐖⊙ m, where 𝐖∈ℝ^i× j is a weight, m∈{0,1}^i× j is a given binary pruning mask, and ⊙ is a Hadamard product. Here, we have three operations to insert the trainable weight 𝐖 into the PLM to modify the specific layers and their corresponding transformations as follows: Add We will add the trainable weight 𝐖 = 𝐖⊙ m into the PLM layer. The corresponding transformation can be denoted as 𝐡^out: f(𝐡^in) + 𝐖_⊙ m_. This form is similar to the operation of Bitfit (Equation <ref>). Concatenate We will concatenate the trainable weight 𝐖 = 𝐖⊙ m and the hidden state or the layer in the PLM. The corresponding transformation can be denoted as 𝐡^out: f(𝐡^in) + { f(𝐖_⊙ m_) α𝐡^in𝐖_⊙ m_𝐖_⊙ m_. This form is similar to the operation of Prompt (Equation <ref>) and LoRA (Equation <ref>). Plug in We will plug the trainable weight 𝐖 = 𝐖⊙ m between PLM layers. The corresponding transformation can be denoted as 𝐡^out: f(𝐡^in) + σ(f(𝐡^in) 𝐖_⊙ m_) 𝐖_⊙ m_. This form is similar to Adapter (Equation <ref>). According to these operations and the corresponding transformations, we can express the methods as 𝐡^out: f(𝐡^in) + {𝐖_⊙ m_ f(𝐖_⊙ m_) α𝐡^in𝐖_⊙ m_𝐖_⊙ m_ σ(f(𝐡^in) 𝐖_⊙ m_) 𝐖_⊙ m_ ⋮. By comparing the Equation <ref> with the equations of the previously introduced methods, we can clearly find that the methods are special cases of methods. Besides, based on the distribution of the masked parameters in the 𝐖, we have two methods as shown in Figure <ref>: and . §.§ Training Configurations of methods The trainable module of a method θ is composed of L trainable weights 𝐖, which can be expressed as θ= {𝐖_1, 𝐖_2, ..., 𝐖_L}. We also follow Equation (<ref>) to train the method. During training, we only optimize θ while freezing the rest of the parameters in the PLM. 
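As an illustration of the masked-weight parameterization above, here is a minimal sketch of the Add operation; the module and variable names are our own, and the Concatenate and Plug-in operations follow the same pattern with the masked weight used in the LoRA-style and Adapter-style transformations given earlier.

```python
import torch
import torch.nn as nn

class MaskedAddWeight(nn.Module):
    """Trainable weight W = W_full ⊙ m added to a frozen layer's output (the "Add" operation).

    Because of the elementwise mask, only the entries selected by m receive gradients that
    change the effective weight; all parameters of the PLM itself stay frozen.
    """

    def __init__(self, mask: torch.Tensor):
        super().__init__()
        self.register_buffer("mask", mask.float())               # given binary pruning mask m
        self.w_full = nn.Parameter(torch.zeros_like(self.mask))  # underlying weight

    def forward(self, frozen_layer_output: torch.Tensor) -> torch.Tensor:
        w = self.w_full * self.mask       # W = W ⊙ m
        return frozen_layer_output + w    # h_out = f(h_in) + W ⊙ m
```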
Besides, we adopt a batch size of 32 and a learning rate of 3e-4 with no warm-up for most of the models and tasks. In addition, the maximum input length is 128 for single sentence tasks (SA) and 256 for multi-sentence tasks (NLI, PI, QA, SUM). The maximum generation length is 1 for classification tasks (SA, NLI, PI), 64 for Multi-News, and 128 for SAMSum. § MAIN EXPERIMENT RESULTS Due to space limitations, we only report the average performance in fig:a-pet_methods. Here, we show the full results of AFP methods on all investigated tasks. § POWER OF MODEL SCALE TO TRANSFERABILITY Furthermore, to explore whether the power of model scale can also facilitate the generalization ability of tuning methods, we explore the transferability between NLP tasks in the zero-shot setting <cit.>. In the experiments, we first train the parameters of AFP methods on the source tasks and directly reuse them on the target tasks in the zero-shot setting. We investigate two series of PLMs, T5 and BERT, each at a smaller and a larger scale, and report the relative performance. Note that different types of tasks are expected to have different label sets (e.g., for tasks like SA, the labels are usually positive/negative, whereas for tasks like NLI, the labels are usually entailment/not entailment). Reusing the parameters trained on the source task to test on the target task would naturally fail, since the model is not able to generate labels it has never seen in the training stage. To this end, we map the original label sets to a unified label set (e.g., negative/not entailment/false –> 0, positive/entailment/true –> 1). Utilizing a unified label set makes it feasible to evaluate the transferability of the AFP method among different types of tasks regardless of the divergence of the original labels. The results are shown in fig:task_transfer, from which we can find that the AFP methods can transfer to the same type of tasks, as demonstrated by the darker colors along the diagonal of the matrix, and generally perform well both on small-scale PLMs (fig:task_transfer (a)) and on large-scale PLMs (fig:task_transfer (b)). However, the lighter colors indicate that AFP methods overall have difficulty transferring across different types of tasks, and both small-scale and large-scale PLMs share this phenomenon. This finding indicates that the power of scale does not necessarily facilitate the generalization ability of AFP methods, which is in line with the prevalent assumption that fewer parameters often cause underfitting, whereas more parameters tend to cause overfitting. Nevertheless, the mechanism behind this phenomenon deserves a deeper investigation, and we will systematically analyze it in our future work.
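For reference, the unified label mapping used in the zero-shot transfer evaluation above can be sketched as follows; the exact per-task label strings are an assumption on our part.

```python
# Map heterogeneous task labels onto the shared binary label space (0 / 1).
UNIFIED_LABELS = {
    "negative": 0, "not entailment": 0, "false": 0,
    "positive": 1, "entailment": 1, "true": 1,
}

def to_unified(label: str) -> int:
    return UNIFIED_LABELS[label.strip().lower()]

# e.g. an SA prediction and an NLI gold label become directly comparable:
assert to_unified("Positive") == to_unified("entailment") == 1
```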
http://arxiv.org/abs/2306.10679v1
20230619030103
MB-HGCN: A Hierarchical Graph Convolutional Network for Multi-behavior Recommendation
[ "Mingshi Yan", "Zhiyong Cheng", "Jing Sun", "Fuming Sun", "Yuxin Peng" ]
cs.IR
[ "cs.IR" ]
MB-HGCN: A Hierarchical Graph Convolutional Network for Multi-behavior Recommendation Mingshi Yan, Zhiyong Cheng, Jing Sun, Fuming Sun^†, and Yuxin Peng, Senior Member, IEEE ^† Corresponding Author. Collaborative filtering-based recommender systems that rely on a single type of behavior often encounter serious sparsity issues in real-world applications, leading to unsatisfactory performance. Multi-behavior Recommendation (MBR) is a method that seeks to learn user preferences, represented as vector embeddings, from auxiliary information. By leveraging these preferences for target behavior recommendations, MBR addresses the sparsity problem and improves the accuracy of recommendations. In this paper, we propose MB-HGCN, a novel multi-behavior recommendation model that uses a hierarchical graph convolutional network to learn user and item embeddings from the coarse-grained global level to the fine-grained behavior-specific level. Our model learns global embeddings from a unified homogeneous graph constructed by the interactions of all behaviors, which are then used as initialized embeddings for behavior-specific embedding learning in each behavior graph. We also emphasize the distinctiveness of the user and item behavior-specific embeddings and design two simple-yet-effective strategies to aggregate the behavior-specific embeddings for users and items, respectively. Finally, we adopt multi-task learning for optimization. Extensive experimental results on three real-world datasets demonstrate that our model significantly outperforms the baselines, achieving a relative improvement of 73.93% and 74.21% for HR@10 and NDCG@10, respectively, on the Tmall dataset. Collaborative Filtering, Multi-behavior Recommendation, Graph Convolutional Network, Multi-task Learning. § INTRODUCTION Personalized recommendation is one of the most effective techniques for addressing the problem of information overload, and it has been widely deployed in various information systems <cit.>. Due to its simplicity and effectiveness, Collaborative Filtering (CF) <cit.> has become the mainstream approach in contemporary recommender systems. Over the past few decades, many CF-based models have been developed <cit.>, ranging from the early matrix factorization (MF) based methods <cit.>, to deep neural network (DNN) based methods <cit.>, and more recently, graph neural network (GNN) based methods <cit.>. The rapid advancement of recommendation techniques has greatly enhanced recommendation performance. Since CF-based models mainly rely on the interactions between users and items to learn user preference and make recommendations, an inherent limitation of these models is that their performance degrades sharply when the available interactions are sparse. Most existing CF-based methods consider only a single behavior for modeling, which is usually the target behavior on the platform (e.g., buy on e-commerce platforms). However, such behaviors are often very sparse in real-world systems, leading to a serious sparsity problem in these models. In reality, users engage in various types of behaviors (e.g., view and collect) to interact with items and gather information before making a final decision (i.e., engaging in the target behavior).
Those behaviors also contain valuable user preference information, and their interactions are typically richer than those of the target behavior. Therefore, they can be leveraged to learn user preference and alleviate the sparsity problem. The utilization of other behaviors (also called auxiliary behaviors) to facilitate the recommendation of target behavior is called multi-behavior recommendation (MBR), which has gained increasing attention in recent years <cit.>. The key to MBR is how to utilize the auxiliary information to assist user and item embedding learning. Earlier approaches are straightforward to extend the traditional matrix factorization techniques operating on single matrix to multiple matrices <cit.> or enriched the training data with auxiliary behavior data using different sampling strategies <cit.>. With increasing evidence that MBR is effective, it has attracted more attention and recent advanced techniques have aslo been progressively introduced to this task. For instance, NMTR <cit.> combines DNNs to model the sequence of behaviors, and MATN <cit.> adopts the multi-head attention mechanism to model multiple behaviors. Furthermore, GCN-based methods employ various strategies on the unified graph constructed by all behaviors to learn user preferences <cit.>. MBGCN <cit.> constructs a heterogeneous graph that distinguishes different types of behaviors with different edges to model each behavior separately, and then aggregates user embeddings by its importance for prediction. The basic assumption behind MBR models is that the interaction information of different behaviors contains user preferences from different perspectives or to different extents. A common paradigm of existing DNN- or GNN-based MBR models is to first learn user and item embeddings from each behavior via a designed network and then aggregate the learned embeddings with different strategies for target behavior prediction (e.g., with or without attention mechanism) <cit.>. The difference lies in how to design the network structure to learn better embeddings from different behaviors and how to distill valuable information from each behavior to contribute to the target behavior prediction. In this work, we propose a novel MBR model with a hierarchical graph convolutional network (MB-HGCN) to utilize auxiliary behaviors for user and item embedding learning. Unlike previous GCN-based methods that directly learn user and item embeddings from the unified heterogeneous graph, our model adopts a different paradigm to learn user and item embeddings via a hierarchical network structure. Specifically, we first learn a global embedding for each user and item in a unified homogeneous graph constructed based on the interactions of all behaviors without differentiating the behavior types. We then take the global embeddings as initialized embeddings to each behavior-specific graph, which is constructed based on the interactions of each behavior, for subsequent behavior-specific embedding learning. By performing graph convolutional operations on the unified homogeneous graph, which contains all the interaction information of different behaviors, we can fully exploit all the interaction information to learn the global embeddings of the users and items. Although the global embeddings might be of coarse-grained as they have not differentiated different behavior types, they could be a good initial embeddings for the following behavior-specific embedding learning on each behavior graph. 
This can also alleviate the sparsity issue in each behavior, as a good embedding initialization is crucial for the representation learning in deep models. The behavior-specific embedding learning on each individual behavior graph aims to capture behavior-specific features for better preference learning. After the two-stage of embedding learning, we aggregate the behavior-specific embeddings with two different strategies for user and item embedding aggregations, respectively. For user embedding, to distill effective information from different behaviors for the target behavior prediction, we assign the weight to a behavior-specific embedding based on its similarity to the target behavior-specific embedding, with the intuition that the more similar the two embeddings are, the more they contribute to the target behavior. For item embeddings, we adopt a weighting scheme based on the interaction numbers of different behaviors <cit.>. The rationale behind this design is that item features should be consistent across different behaviors, and the difference between item embeddings learned from different behaviors is caused by the interactions of different users. Finally, we combine the global embedding and the aggregated behavior-specific embeddings for more comprehensive representations. Multi-task learning is adopted to treat each behavior as an independent task in optimization. To evaluate the effectiveness of our model, we perform extensive empirical studies on three large-scale real-world datasets. The experimental results demonstrate that our model outperforms the state-of-the-art MBR models by a large margin. For example, on the Tmall dataset, our model achieve an impressive improvement of 73.93% and 74.21% over the second-best baseline in terms of HR@10 and NDCG@10, respectively. We also conduct comprehensive ablation studies to carefully examine the utility of different designs in our model. In summary, the main contributions of this work are as follows: * We propose a hierarchical convolutional graph network for multi-behavior recommendation that learns user and item embeddings from the coarse-grained and global level with a unified graph to the fine-grained and behavior-specific level in each behavior graph. We deem this learning paradigm can better utilize the multi-behavior information to learn good user and item embeddings. * We emphasize the distinctiveness of the user and item behavior-specific embeddings, for which we design two simple-yet-effective aggregation strategies to aggregate the behavior-specific embeddings for users and items, respectively. This is quite different from mainstream aggregation methods that use the same mechanism for aggregation. * We conduct extensive experiments on three real-world datasets to evaluate the effectiveness of our MB-HGCN model and examine the validity of each component in MB-HGCN. Experimental results show that MB-HGCN achieves a remarkable improvement over the state-of-the-art models in terms of recommendation accuracy. Additionally, we release the codes and involve parameters to benefit other researchers [https://github.com/MingshiYan/MB-HGCN.]. The rest of this paper is structured as follows. Section <ref> reviews the related work, and Section <ref> describes our MB-HGCN model in detail. Next, Section <ref> introduces the experimental setup and reports the experimental results. Finally, Section <ref> concludes this paper. 
§ RELATED WORK Multi-behavior recommendation refers to leveraging user-item interaction data of multi-type behaviors for recommendation <cit.>. Its advantage is to alleviate the data sparsity problem existing in single-behavior-based recommendation methods. Due to its excellent performance, it has attracted increasing attention in recent years <cit.>. Early multi-behavior recommendation methods are extended on traditional CF-based methods <cit.>, and the most direct approach is to apply the matrix factorization approach in single-behavior data into multi-behavior data. For example, Ajit et al. <cit.> proposed a collective matrix factorization model (CMF), in which entity parameters are shared among multiple matrix factorizations. This method is extended by Zhao et al. <cit.> to perform matrix factorization for different behaviors by sharing item embeddings. In addition, some researchers designed different sampling strategies to utilize the data from multiple behaviors. For example, Loni et al. <cit.> proposed a negative sampling strategy suitable for multiple behaviors to sample user-item interaction data of different behaviors. Ding et al. <cit.> put forward an improved negative sampling strategy to achieve better utilization of data, further extending this idea. Guo et al. <cit.> introduced a strategy of sampling based on similarity, which generates positive and negative samples from multiple auxiliary behaviors to help model training. Qiu et al. <cit.> proposed an adaptive sampling strategy according to the uncorrelated balance characteristics of samples between different behaviors. These methods supplement the training on the target behavior by exploiting the interaction data from the auxiliary behaviors. With the development of deep learning, multi-behavior recommendation methods based on deep neural networks (DNN) have been developed <cit.>. The main idea of such models is to design deep neural networks to learn the embedding of users and items separately from each behavior, and then aggregate them for recommendation. The difference between these methods is mainly reflected in the design of DNN and the aggregation strategies. For example, Xia et al. <cit.> designed a network consisting of transformers and multi-head attention mechanisms to learn embeddings in each behavior, and then aggregated them by adopting a fully connected network. Guo et al. <cit.> proposed a hierarchical attention mechanism to aggregate user preferences learned from different behaviors. Unlike other methods that aggregate information learned from different behaviors for prediction, Gao et al. <cit.> adopted a sequential modeling approach to explore the dependencies of different behaviors by passing the current behavior prediction score forward. The advantage of deep networks in representation learning makes the DNN-based MBR models achieve great progress on recommendation performance. Recently, with the success of graph convolutional networks in recommendation, many GCN-based MBR methods have also been proposed <cit.>. Similar to DNN-based models, the general paradigm for such methods is to model each behavior separately with GCN to learn the embeddings of users and items, and then aggregate them with different strategies. For example, Xia et al. <cit.> proposed a multi-behavior pattern encoding framework and a graph element network to explore complex dependencies between different types of user-item interactions. Jin et al. 
<cit.> performed user-item propagation and item-item propagation in different behaviors on a multi-behavior heterogeneous graph to learn the influence strength and semantics of different behaviors. Chang et al. <cit.> proposed a multi-interest learning framework, containing an interest-extracting module and a behavioral correlation module, to better model the complex dependencies among multiple behaviors. Gu et al. <cit.> designed different strategies to aggregate the embeddings of multi-behavior users and items separately, and adopted a star-shaped contrastive learning to capture the commonality between target behaviors and auxiliary behaviors. Different from those GCN-based methods, Yan et al. <cit.> proposed a cascaded residual network to explore the connection between different behaviors from the perspective of embedding propagation. In this work, we propose a novel hierarchical GCN structure to exploit the multi-behavior data for user and item embedding learning with a different paradigm. Our model first learn the global embeddings from a unified homogeneous graph constructed on all the behavior data, and then take them as initial embeddings for subsequent behavior-specific embedding learning. This learning strategy can well utilize the multi-behavior information and promise a good embedding initialization for the embedding learning in each behavior graph. Besides, we adopt two different strategies to aggregate the behavior-specific embeddings for users and items. The comparisons with existing advanced MBR models in empirical study demonstrates the effectiveness of our model. § METHODOLOGY §.§ Preliminaries Multi-behavior recommendation (MBR) is to utilize auxiliary behaviors (e.g., view and cart) when interacting with the platforms to help learn user preferences. These behaviors also reflect users' interests in items and thus contains rich user preference information, which can be leveraged to effectively alleviate the data sparsity problem. In this work, we aim to learn better user and item embeddings by exploiting auxiliary behaviors to improve the recommendation performance. Let 𝒰 and ℐ be the set of users and items, and the total number of users and items are M and N, respectively. K is the number of behavior types. We use k (1 ≤ k ≤ K) to represent the k-th behavior, and the K-th behavior is the target behavior. Let ℛ_k be the interaction matrix for the k-th behavior, which is a binary matrix. For r_ui∈ℛ_k, r_ui=1 if there was an interaction of the k-th behavior happened between user u and item i; otherwise r_ui=0. The studied problem is formulated as follows: Input: user set 𝒰, item set ℐ, and user-item interaction matrices {ℛ_1, ⋯, ℛ_K} for different types of behaviors. Output: a similarity score, which indicates the possibility that a user u will interact with an item i in the target behavior. Before describing our model, we would like to first introduce two types of graphs used in our model: * Behavior-specific graph, denoted by 𝒢_k=(V_k, E_k), which is a bipartite graph constructed based on the interactions of the k-th behavior type according to the interaction matrix ℛ_k. V_k consists of the user node u ∈𝒰 and the item node i ∈ℐ, and E_k denotes the user-item interaction edges in graph 𝒢_k. There is an edge between a user node and an item node if r_ui=1 for r_ui∈ℛ_k. * Unified graph, denoted by 𝒢=(V, E), which is constructed based on the interactions of all types of behaviors. 
It is a homogeneous graph, which means we do not differentiate the different types of interactions in this graph. For interactions of different types or multiple interactions of different behaviors between a user u and an item i, the edge is the same, namely, E=E_1 ∪ E_2 ∪⋯∪ E_K. §.§ Model Description Overview. Users' interaction behaviors with items reflect their interests. In multi-behavior recommendation, it is well-recognized that different types of behaviors disclose user's preference from different perspectives or to different extents <cit.>. Based on this common assumption, many MBR approaches have been proposed to extract valuable information from multiple behaviors to learn user preferences. Most previous MBR models first learn embeddings from different behaviors separately and then aggregate them with different strategies. The ultimate goal is to exploit the auxiliary behaviors to learn better user and item embeddings, thereby enhancing the recommendation accuracy of the target behavior. In this work, we propose a hierarchical graph convolutional network to exploit the multi-behavior information to learn the user and item embeddings. In particular, we first learn a global embedding by adopting the unified graph constructed based on interaction information of all behaviors. The global embedding is then used as the initialized embedding and fed into the behavior-specific graph to learn behavior-specific embeddings for each type of behavior. The intuition is that there is a general interest of users across different behaviors and each behavior contains some distinct features of user preference. The global embedding learned from the unified graph represents the general interests or coarse-grained preferences, and the behavior-specific embedding learned from each behavior-specific graph represents the refined or fine-grained user preferences for this particular behavior. In the next, two different strategies will be used to respectively obtain the final user and item embeddings for prediction. Multi-task learning is used for optimization. Fig. <ref> shows the overall structure of our MB-HGCN model, which mainly consists of three modules: 1) Embedding learning, which is designed to learn the embeddings of users and items via a hierarchical graph network structure; 2) Embedding aggregation, which adopts two different strategies to aggregate the embeddings of users and items. More specific, a novel weighting scheme is designed to adaptive distill valuable information from different behaviors for user embedding aggregation; and a linear aggregation approach is used for item embedding aggregation; 3) Multi-task learning is adopted to employ the interaction information of each behavior as supervision signals for user and item embeddings. In the following subsections, we will first brief the embedding initialization and then describe the three modules detailedly in sequence. §.§.§ Embedding initialization Following previous works <cit.>, we initialize the ID of user u ∈𝒰 and item i ∈ℐ as d-dimensional embedding vector e_u^0 and e_i^0, respectively. Let P∈ℝ^M × d and Q∈ℝ^N × d be the embedding matrices for the user and item embedding initialization, where M and N represent the number of users and items, respectively. Each user and item ID is represented as a unique embedding. 
Given the one-hot embedding matrix ID^𝒰 and ID^ℐ for all users and items, the embeddings of user u and item i are initialized as: e_u^0 = P·ID_u^𝒰, e_i^0 = Q·ID_i^ℐ, where ID_u^𝒰 and ID_i^ℐ represent the user u's and the item i's one-hot vector, respectively. §.§.§ Embedding Learning As mentioned, our model adopts a hierarchical GCN structure to exploit the multi-behavior for embedding learning. The interaction information of all behaviors are integrated into the unified graph 𝒢 to learn the general user preferences and item features, denoted as the global embedding e_u^g and e_i^g for user u and item i, respectively. The learned global embedding is then fed into each behavior-specific graph 𝒢_k to learn the behavior-specific embedding e_u^k and e_i^k for user u and item i in 𝒢_k, respectively. For the embedding learning in each graph, we employ the LightGCN <cit.> model, which is a lightweight CF-based single-behavior recommendation model. It simplifies the standard GCN and only retains the core neighborhood aggregation component, which has proven to be effective and superior in performance. It is worth mentioning that other GCN models can also be adopted, such as Ultra-GCN <cit.> and SVD-GCN <cit.>. The core of LightGCN is to recursively aggregate information from neighboring nodes for embedding update of the target node. The graph convolution operation in LightGCN is: e_u^(l+1) = ∑_i ∈ N_u1/√(| N_u|)√(| N_i|) e_i^(l), e_i^(l+1) = ∑_u ∈ N_i1/√(| N_i|)√(| N_u|) e_u^(l), where 1/√(| N_u|)√(| N_i|) denotes the normalization coefficient, N_u represents the set of items that are interacted with the user u, and N_i is the same. After l-layer propagation, LightGCN combines the embeddings obtained at each layer as the final user and item representation. Given the total number of layers as L, the representation of user u and item i after the LightGCN process are as follows: e'_u = ∑_l=0^Lα_l e_u^(l), e'_i = ∑_l=0^Lα_l e_i^(l), where α_l is a hyperparameter represents the importance of the l-th layer embedding, and e_u^(0) (e_i^(0)) is the initial embedding of user u (item i). As shown in Fig. <ref>, LightGCN with the same settings is used in both the unified graph 𝒢 and each behavior-specific graph 𝒢_k to learn the embeddings of users and items. Global embedding. Following the Eq. <ref>, we respectively obtain L embeddings to describe a user {e_u^(1), e_u^(2), ⋯, e_u^(L)} and an item {e_i^(1), e_i^(2), ⋯, e_i^(L)} after L-layer propagation. Before combining these embeddings, normalization is adopted to alleviate the impact of the embedding scale <cit.>. In our model, we apply L_2 normalization for simplicity: e^g_u = e_u^(0) + ∑_l=1^Lα_l e_u^(l)/‖e_u^(l)‖_2, e^g_i = e_i^(0) + ∑_l=1^Lα_l e_i^(l)/‖e_i^(l)‖_2, where e^g_u and e^g_i are the learned global embedding from 𝒢, they are also the sharing input of the following behavior-specific graph 𝒢_k. e_u^(0) and e_i^(0) are the input of graph 𝒢 (i.e., the initialized embedding e_u^0 and e_i^0). Intuitively, the farther neighbors have less important, thus, we set the α_l as 1/(l + 1). Behavior-specific embedding. Similarly, taking e^g_u and e^g_i as the initial embeddings for the embedding learning in each behavior-specific graph 𝒢_k, we can obtain K behavior-specific embeddings for each user u and item i ( i.e., {e_u^1, e_u^2, ⋯, e_u^K} and {e_i^1, e_i^2, ⋯, e_i^K}). 
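The two-level embedding learning described above can be summarized in a minimal sketch. The dense adjacency matrices and the function names below are used purely for illustration (an actual implementation would presumably rely on sparse operations); this is not the released code.

```python
import torch

def symmetric_norm(adj):
    """D^{-1/2} A D^{-1/2} over a symmetric (user+item) bipartite adjacency matrix."""
    deg = adj.sum(dim=1).clamp(min=1.0)
    d_inv_sqrt = deg.pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

def lightgcn_embed(adj_norm, emb0, n_layers=2):
    """e^(l+1) = Â e^(l); combine as e^(0) + sum_l alpha_l * e^(l)/||e^(l)||_2 with alpha_l = 1/(l+1)."""
    out, emb = emb0, emb0
    for l in range(1, n_layers + 1):
        emb = adj_norm @ emb
        out = out + (1.0 / (l + 1)) * emb / emb.norm(dim=1, keepdim=True).clamp(min=1e-12)
    return out

def hierarchical_embeddings(unified_adj, behavior_adjs, emb0, n_layers=2):
    """Global embeddings on the unified graph G, then behavior-specific embeddings on each
    behavior graph G_k initialized from the global ones."""
    e_global = lightgcn_embed(symmetric_norm(unified_adj), emb0, n_layers)
    e_behavior = [lightgcn_embed(symmetric_norm(adj_k), e_global, n_layers)
                  for adj_k in behavior_adjs]
    return e_global, e_behavior

# toy example: 3 users, 4 items, two behaviors whose union forms the unified graph
M, N, d = 3, 4, 8
R1 = (torch.rand(M, N) < 0.5).float()            # auxiliary behavior interactions
R2 = (torch.rand(M, N) < 0.3).float()            # target behavior interactions

def bipartite(R):                                # (M+N) x (M+N) symmetric adjacency
    A = torch.zeros(M + N, M + N)
    A[:M, M:], A[M:, :M] = R, R.T
    return A

emb0 = torch.randn(M + N, d)
e_g, (e_b1, e_b2) = hierarchical_embeddings(bipartite(((R1 + R2) > 0).float()),
                                            [bipartite(R1), bipartite(R2)], emb0)
```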
§.§.§ Embedding Aggregation Through the embedding learning process, for each user u and item i, we obtain their global embeddings e^g_u and e^g_i, as well as behavior-specific embedding set {e_u^1, e_u^2, ⋯, e_u^K} and {e_i^1, e_i^2, ⋯, e_i^K}. In the next, we would like to aggregate the above user and item embeddings respectively by adopting different strategies to obtain the final embedding for recommendation. User embedding aggregation. Considering different behaviors may convey some distinct information of user preference, to distill valuable information from different behavior-specific embeddings for the target behavior prediction, we design a novel weighting scheme for user embedding aggregation. Taking the user u as example, the aggregation is formulated as: U = e_u^1 || e_u^2 || ⋯ || e_u^K, ẽ_u^k = Uδ^⊤, where || denotes the operation of stacking vectors to form a matrix. U∈ℝ^d × K is the matrix by stacking user embeddings learned from each behavior, in which d represents the embedding size and K is the number of behaviors. δ is the weight vector calculated based on the similarity between the embedding of each behavior and that of the target behavior. Formally, it is computed as δ = softmax(e_u^k^⊤U/√(d)). In this equation, e_u^k^⊤U calculates the embedding similarity between the k-th behavior and other behaviors. The denominator √(d) is used to prevent the vanishing gradient problem, and softmax(·) is adopted for normalization. The underlying rationality of the aggregation strategy is that the behavior with more similar embeddings contain more relevant preference information with the target behavior, and thus contribute more to the target behavior in the aggregation. With this aggregation strategy, our model can adaptively extract valuable information from other behaviors for the target behavior prediction. Moreover, together with the multi-task learning, it also avoids the behavior-specific embeddings to be optimized towards the target behavior in the learning process. Through the above operation, we can obtain the embedding set {ẽ_u^1, ẽ_u^2, ⋯, ẽ_u^K}. Item embedding aggregation. Since item features are consistent across different behaviors, we simply apply a linear combination to aggregate the embeddings learned from different behaviors. In different types of behaviors, the users who interacted with the items are different and the total number of interactions (i.e., the number of users who interacted with the items) are also different. Intuitively, with more interactions, the learned features should be more comprehensive. Accordingly, the weight assigned to the k-th behavior-specific embedding for an item i is defined as: γ_ik = w_k · n_ik/∑_m=1^Kw_m · n_im, where w_k is a learnable parameter for the k-th behavior; n_ik denotes the number of users that interacted with item i in the k-th behavior. The behavior-specific embeddings of item i are aggregated as: ẽ_i = ∑_k=1^Kγ_ik·e_i^k. Notice that both user embedding ẽ_u^k and item embedding ẽ_i are obtained from the behavior-specific information. In order to obtain more comprehensive representation, we combine them with the global embeddings to obtain the final embeddings: ê_u^k = ẽ_u^k⊕e_u^g, ê_i = ẽ_i ⊕e_i^g, where ⊕ denotes the element-wise sum. §.§.§ Multi-task Learning (MTL) MTL <cit.> is a learning strategy for jointly optimizing different-yet-related tasks. To better exploit the multiple-behavior information in user and item embedding learning, we treat each behavior as an independent training task. 
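Before turning to the objective, the two aggregation schemes above can be sketched for a single user and a single item as follows; the shapes, function names, and the toy call are our illustration rather than the released code.

```python
import torch
import torch.nn.functional as F

def aggregate_user(user_behavior_embs, target_idx):
    """delta = softmax(e_u^target · U / sqrt(d)); aggregated user embedding = U delta."""
    U = torch.stack(user_behavior_embs, dim=1)            # (d, K)
    d = U.shape[0]
    scores = U[:, target_idx] @ U / (d ** 0.5)            # similarity to the target behavior
    delta = F.softmax(scores, dim=0)                      # (K,)
    return U @ delta                                      # (d,)

def aggregate_item(item_behavior_embs, interaction_counts, w):
    """gamma_k = w_k * n_ik / sum_m w_m * n_im; aggregated item embedding = sum_k gamma_k e_i^k."""
    E = torch.stack(item_behavior_embs, dim=0)            # (K, d)
    weights = w * interaction_counts
    gamma = weights / weights.sum().clamp(min=1e-12)      # (K,)
    return gamma @ E                                      # (d,)

def final_embedding(aggregated, global_emb):
    """Element-wise sum with the global embedding (⊕ in the paper)."""
    return aggregated + global_emb

# toy call: K = 3 behaviors, d = 8, with the last behavior (buy) as the target
embs_u = [torch.randn(8) for _ in range(3)]
e_u_tilde = aggregate_user(embs_u, target_idx=2)
e_i_tilde = aggregate_item([torch.randn(8) for _ in range(3)],
                           interaction_counts=torch.tensor([20.0, 5.0, 2.0]),
                           w=torch.ones(3))
```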
The inner product of user and item embeddings is adopted to estimate the prediction score. Take the k-th behavior as an example: y_ui^k = ê_u^k^⊤ê_i. The pairwise Bayesian Personalized Ranking (BPR) <cit.> loss is adopted in optimization for each task: ℒ_k = ∑_(u,i,j) ∈𝒪 -ln σ(y_ui^k-y_uj^k), where 𝒪={(u,i,j)|(u,i) ∈ℛ^+, (u,j) ∈ℛ^-} is defined as positive and negative sample pairs, ℛ^+ (ℛ^-) denotes the observed (unobserved) samples in the k-th behavior, and σ(·) is the sigmoid function. Following the Eq. <ref>, we obtain the loss function for all the K tasks, i.e., {ℒ_1, ℒ_2, ⋯, ℒ_K}, then the K loss functions are summed for joint optimization. Intuitively, the contribution of different tasks should be different. Assigning different weights to different losses may enhance the final performance, however, this is not our main focus in this study. Here we simply treat them equally to focus on studying the effectiveness of our embedding learning strategy and leave the study of different weights in the loss function as a future work. The final loss function is formulated as: ℒ = ∑_k=1^Kℒ_k + β·‖Θ‖_2, where Θ represents all trainable parameters in our model and β is the coefficient that controls the strength of the L_2 normalization to prevent over-fitting. To improve the generalization ability, two widely used dropout strategies <cit.> are also adopted in training: node dropout and message dropout, which are used to randomly drop out nodes in the graph and information in the embedding, respectively. § EXPERIMENT In this section, we conduct extensive experiments on three real-word datasets to evaluate the effectiveness of our model. In particular, we aim to answer the following research questions: * RQ1: How does our MB-HGCN model perform as compared with the state-of-the-art recommendation models that are learned from single- and multi-behavior data? * RQ2: How does the key designs in our MB-HGCN model affect the recommendation performance? * RQ3: How does the layer numbers of GCN setting in the LightGCN affect the performance of our model? * RQ4: Can MB-HGCN alleviate the cold start problem? * RQ5: How does the user embedding learned in the MB-HGCN model? §.§ Experiment Settings §.§.§ Dataset Three real-world datasets are adopted for experiments: * Tmall. This dataset is collected from Tmall[https://www.tmall.com/], which is one of the largest e-commerce platforms in China. It contains 41,738 users and 11,953 items with 4 types of behaviors, i.e., view, collect, cart, and buy. * Beibei. This dataset is collected from Beibei[https://www.beibei.com/], which is the largest infant product retail e-commerce platform in China. This dataset contains 21,716 users and 7,977 items with three types of behaviors, i.e., view, cart, and buy. * Jdata. This dataset is collected from JD[https://www.jd.com/], which is one of the most popular and influential e-commerce websites in the Chinese e-commerce field. This dataset contains 93,334 users and 24,624 items with 4 types of behaviors, i.e., view, collect, cart, and buy. For the above datasets, we follow the previous work to remove the duplicated records by keeping the earliest one <cit.>. The statistical information of the three datasets is summarized in Table <ref>. §.§.§ Evaluation Protocols We adopt the widely used leave-one-out strategy for model evaluation <cit.>. In training stage, the last postive item for each user is selected to construct the validation set for hyper-parameter tuning. 
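A minimal sketch of the joint objective over the K behavior-specific BPR tasks is given below; the squared L2 regularizer and the batch format are our assumptions rather than the exact released implementation.

```python
import torch
import torch.nn.functional as F

def bpr_loss(user_emb, item_emb, users, pos_items, neg_items):
    """L_k = sum_(u,i,j) -ln sigmoid(y_ui - y_uj), with y_ui = e_u^T e_i."""
    y_pos = (user_emb[users] * item_emb[pos_items]).sum(dim=1)
    y_neg = (user_emb[users] * item_emb[neg_items]).sum(dim=1)
    return -F.logsigmoid(y_pos - y_neg).sum()

def multi_task_loss(user_embs_per_behavior, item_emb, batches, params, beta=1e-3):
    """L = sum_k L_k + beta * ||Theta||^2, treating all K behaviors as equally weighted tasks."""
    loss = sum(bpr_loss(user_embs_per_behavior[k], item_emb, u, i, j)
               for k, (u, i, j) in enumerate(batches))
    reg = sum(p.pow(2).sum() for p in params)
    return loss + beta * reg

# toy call: K = 2 behaviors, 5 users, 6 items, embedding size 8
U1, U2, I = torch.randn(5, 8), torch.randn(5, 8), torch.randn(6, 8)
batches = [(torch.tensor([0, 1]), torch.tensor([2, 3]), torch.tensor([4, 5]))] * 2
loss = multi_task_loss([U1, U2], I, batches, params=[U1, U2, I])
```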
In the evaluation stage, all the items in the test set are ranked according to the predicted scores by recommendation models. Meanwhile, two representative evaluation metrics in recommendation: Hit Ratio (HR@K) <cit.> and Normalized Discounted Cumulative Gain (NDCG@K) <cit.> are adopted to evaluate the performance: * HR@K: a performance metric used to evaluate the accuracy of a recommender system by measuring the proportion of test items for which the correct recommendation appears within the top K positions of the ranked list. * NDCG@K: a metric that measures the quality of the recommended items by considering both their relevance and their position in the ranked list. §.§.§ Baselines To demonstrate the performance of our model, we compare our MB-HGCN with several representative recommendation models, including three single-behavior models and six multi-behavior models. Single-behavior model: * MF-BPR <cit.>. BPR is a widely used optimization strategy, which assumes that the predicted scores of positive samples are higher than that of negative ones. MF-BPR has been widely used as a baseline to evaluate the performance of newly proposed models. * NCF <cit.>. It is a representative model combining neural network and CF, which combines shallow generalized matrix factorization model and deep multi-layer perceptron model to learn the interaction between users and items. * LightGCN <cit.>. It removes the feature transformation and nonlinear activation components in the standard GCN model, and only keeps the core neighborhood aggregation component, which simplifies the model structure and achieves a significant performance improvement over its counterpart. Multi-behavior model: * R-GCN <cit.>. R-GCN differentiates the relations between nodes via edge types in the graph and designs different propagation layers for different types of edges to model the relation information. This model can adapt to the multi-behavior recommendation. * NMTR <cit.>. It is a deep learning model for multi-behavior recommendation, which designs a neural network for each behavior. It sequentially passes the interaction score among behaviors and also adopts multi-task learning for joint optimization. * MBGCN <cit.>. This model constructs a heterogeneous graph to learn user preferences through user-item propagation and adopts a linear aggregation for feature fusion. In addition, item-item propagation is exploited to enhance item embedding learning. * GNMR <cit.>. This model designs a relation aggregation network to model interaction heterogeneity and attempts to explore the dependencies among different types of behaviors via recursive embedding propagation over the heterogeneous graph. * S-MBRec <cit.>. This model consists of a supervised and a self-supervised learning task, which separately learns the user and item embeddings from each behavior and adopts a star-style contrastive learning strategy to construct a contrastive view pair for the target and each auxiliary behavior. * CRGCN <cit.>. This model designs a cascaded residual network to explore the connection between different behaviors from the perspective of embedding propagation. The multi-task learning is also adopted for joint optimization. §.§.§ Parameter Settings Our model is implemented by Pytorch[https://pytorch.org/]. In the implementation of all methods, the mini-batch size and embedding size are set to 1024 and 64, respectively <cit.>. Adam <cit.> optimizer is adopted for the optimization. 
In addition, we employ grid search to tune the learning rate and regularization weights (i.e., β) in the [1e^-2, 3e^-3, 1e^-3, 1e^-4] and [1e^-2, 1e^-3, 3e^-4, 1e^-4] ranges, respectively. Meanwhile, we carefully tune the hyperparameters in the baselines according to their original papers, and an early stop strategy is adopted in the training stage. §.§ Overall Performance (RQ1) In this section, we report the performance comparisons between our MB-HGCN model and all the baselines. The results on the three datasets are shown in Table <ref>. Overall, the performance of multi-behavior methods outperforms that of single-behavior methods, which demonstrates the effectiveness of exploiting multiple behaviors. Among the multi-behavior methods, our MB-HGCN significantly outperforms other multi-behavior methods. Comparing with the best baseline, the average improvement of HR@K and NDCG@K across top-K for (K= 10, 20, 50) are 66.41% and 69.85% on Tmall dataset, 11.12% and 11.97% on Beibei dataset, 3.92% and 9.43% on Jdata dataset, respectively. This is a remarkable improvement in the recommendation accuracy, demonstrating the superiority of our model. Among the single-behavior methods, NCF generally outperforms MF-BPR due to its ability to model the complex and nonlinear relationships between user-item interactions using a neural network architecture. However, LightGCN exhibits the best performance among single-behavior methods. This result confirms the effectiveness of GCN-based approaches in capturing the user-item interactions information, as LightGCN uses a simplified GCN model that emphasizes the importance of neighborhood aggregation. The superior performance of LightGCN highlights the importance of leveraging graph-based modeling techniques for recommendation tasks. Among the multi-behavior methods, R-GCN, which directly combines embeddings learned separately from each behavior with a simple summation, exhibits poor performance, and in some cases, even performs worse than the single-behavior method LightGCN. This suggests that the straightforward aggregation of auxiliary behavior embeddings may have detrimental effects on recommendation accuracy. In contrast, MBGCN and GNMR adopt alternative strategies for embedding aggregation, and both achieve superior performance compared to R-GCN, which validates that different behaviors contribute differently to the target behavior. Moreover, NMTR and CRGCN consider the relationships among multi-behaviors through cascading modeling and both yield better performance than the aforementioned methods. NMTR models the cascading effects indirectly through interaction scores of different behaviors. In contrast, CRGCN directly incorporates cascading effects into the embedding learning process, leading to superior performance over NMTR. CRGCN is also the best-performing baseline in our experiments, leveraging multi-behavior relationships in embedding learning. However, MB-HGCN can outperform CRGCN by a large margin, mainly due to its hierarchical learning strategy and aggregation strategies. Our ablation studies provide further insights into the effectiveness of different components in MB-HGCN. It is worth noting that the improvement achieved on Tmall far exceeded that of the other two datasets. The primary reason for this substantial gap can be attributed to the greater variety of behavioral interactions among the different datasets. 
In comparison to Jdata, Tmall's collect behavior yielded a comparable amount of data as the buy behavior, which provided rich information. By comparison, the Beibei platform requires users to follow a strict sequence of behavior for making purchases, i.e., view→cart→buy. Consequently, the global embedding learned in our model reflected the view behavior, which limits the performance of our model. §.§ Ablation Study (RQ2) In this section, we conduct extensive ablation studies to examine the validity of different components in our model. §.§.§ Effect of the embedding learning in graph 𝒢 We design a hierarchical graph convolutional network for embedding learning, where a unified graph 𝒢 is utilized to learn a coarse-grained global embedding, which is then used as a shared initialization for refining embeddings in behavior-specific graphs. To validate the effectiveness of learning coarse-grained global embeddings, we conduct an experiment where we remove the unified graph component and compare the results to the original model that retain the unified graph. Specifically, we train the model without the unified graph 𝒢 and leverage the initialized embedding (i.e., e_u^0 and e_i^0) as the initialization of the behavior-specific graph 𝒢_k (k ∈ [1, K]). Experimental results are reported in Table <ref>. The experimental results suggest that removing the unified graph component leads to a significant decrease in performance. This result is attributed to the coarse-grained global embeddings learned in the unified graph component can provide better initialization for refining embeddings in behavior-specific graphs, which allows for more accurate learning in those graphs. Moreover, it is observed that there is a tremendous performance difference between the two models with and without the unified graph. In fact, without the unified graph, the model degenerates to a variant of R-GCN, where the difference is the embedding aggregation strategy. Compared with the results in Table <ref>, the model w/o. 𝒢 significantly outperformed R-GCN. The results provide evidence for the effectiveness of the proposed embedding aggregation strategies for users and items, and further verifies the validity of the unified graph design. §.§.§ Effect of the user embedding aggregation strategy Our intuition for designing user embedding aggregation strategies is that user interests vary across behaviors, and thus, not all user preferences contribute to the prediction of the target behavior. Therefore, we design a simple adaptive embedding aggregation strategy for user embeddings. To verify the effectiveness of the design for adaptive user embedding aggregation strategy, we conduct three experiments: 1) sum agg., we remove our adaptive user embedding aggregation module and directly sum different behavior-specific embeddings for information aggregation. 2) linear agg., we replace our adaptive user embedding aggregation module with linear aggregation, which assigns different weights based on the number of interactions for each behavior (the same to the item aggregation strategy). 3) adaptive agg., our proposed adaptive embedding aggregation strategy. Experimental results are reported in Table <ref>. The experimental results demonstrate that adopting the adaptive aggregation strategy achieves the best performance. The sum agg. method yields poor performance due to the varying interests that users exhibit in different behaviors, and the aggregation strategy lacks consideration of the importance of each behavior. 
Although the linear agg. method considers the importance of different behaviors, behaviors with more interactions may not necessarily reflect more accurate user preferences. In contrast, our adaptive aggregation strategy aggregates relevant information at the feature level based on the similarity between different behaviors, resulting in better aggregation of relevant information. It is worth mentioning that our aggregation scheme does not introduce any additional parameters into the model. This avoids the potential risks of negative impact on the embedding learning process from additional parameters introduced by the aggregation scheme. To verify this point, we perform an additional experiments. We pre-train our model to keep the optimal embeddings that learned from each behavior and remove the aggregation of global embeddings (i.e., the operation of Eq. <ref>) to eliminate the effects of global embeddings. The goal is to only retain the training of target behaviors to avoid the effects of multi-task learning. On this basis, we compare the performance of the following three variants: 1) M_unfix, which employs linear aggregation strategy for user embedding aggregation. 2) M_fix, which fixes the parameters in the embedding learning process based on the first experiment. 3) M_adap, which adopts our adaptive aggregation strategy for user embedding aggregation. The experimental results are reported in Fig. <ref>. According to Fig. <ref>, we can observe that for the two linear aggregation methods, the one with fixed parameters significantly outperforms the one with unfixed parameters. This is because the supervision signal cannot be transferred to the embedding learning process in the method of fixed parameters, in which only the parameters of linear aggregation process are optimized. This suggests that optimization of aggregation parameters may lead to locally optimal solutions for embedding learning. Furthermore, the method that adopts the adaptive aggregation strategy is better than the two methods which employ linear aggregation. The reason is that our adaptive aggregation strategy does not introduce any parameters, facilitating the embedded learning to be optimized on the right direction. §.§.§ Effect of the item embedding aggregation strategy In this experiment, we evaluate the effectiveness of a linear aggregation strategy for item embeddings aggregation. Considering that behaviors with more interactions may reflect more comprehensive features of items, we assign weights to each behavior based on its number of interactions (as shown in Eq. <ref>). To verify the effectiveness of this design, we conduct the following experiments: 1) fix γ_ik, we assign the same weight (i.e., set γ_ik=1) to each behavior for embedding aggregation. 2) w/o. w_k, we remove the learnable parameter w_k and strictly assign weights based on the number of interactions for each behavior. 3) w. w_k, which keeps the learnable parameter w_k allows for fine-tuning the importance of different behaviors (our approach). The experimental results are reported in Table <ref>. The results in Table <ref> indicate a significant improvement for the weight allocation method (w/o. w_k) over the non-weight allocation method (fix γ_ik), which supports our viewpoint that behaviors with more interactions reflect more comprehensive item features. In the two weight allocation methods, w/o. w_k and w. 
w_k, the method that fine-tuned the weights using the learnable parameter achieved better performance, indicating that the contribution of different behaviors varies for different items. Therefore, fine-tuning the weights via the learnable parameter can better aggregate the representation of items, further validating the effectiveness of our proposed strategy. §.§.§ Effect of the global embedding aggregation Aggregate global embeddings into the final embedding, as shown in Eq. <ref>, is to obtain more comprehensive representation. We conduct an ablation study to verify this point by comparing it with the variant without considering the global embeddings. The experimental results are reported in Table <ref>. It is shown that the method with global embedding aggregation, our model can gain a relative improvement of 16.23% and 15.79% on Tmall, 6.91% and 9.59% on Beibei, 15.89% and 18.22% on Jdata for HR@10 and NDCG@10, respectively. This demonstrates that aggregate global embedding can indeed improve performance. Global embeddings reflect coarse-grained user preferences, while behavior-specific embeddings reflect fine-grained user preferences. Combining these two types of embeddings can provide a comprehensive and hierarchical representation of user preferences, which can further improve recommendation performance. It again justifies the effectiveness of our hierarchical design by learning embedding from both global and behavior-specific levels. §.§.§ Effect of multi-task learning We adopt a multi-task learning (MTL) framework for joint optimization. To verify its effectiveness, we compare the method with and without MTL, in which the method without multi-task learning train the target behavior in a single-task. The experimental results are reported in Table <ref>. The experimental results, reported in Table <ref>, demonstrate that the MTL method outperforms the single-task method across all three datasets, indicating the effectiveness of MTL. The underlying reason for its effectiveness lies in the fact that our model treats each behavior as an independent task during training, and the better the behavior-specific embedding fits the user preferences exhibited in current behavior during the training process, the more accurately relevant information can be aggregated in the adaptive aggregation stage, leading to more precise predictions. It is worth noting that the scale of interaction data for different behaviors may affect the performance of multi-task learning. Therefore, considering the importance of different behaviors is necessary, but it is not the focus of our current research. We plan to research it in future work. §.§ GCN layer Study (RQ3) Our model adopts LightGCN as the backbone to perform convolution operations in each graph. From the perspective of the overall structure, the convolution operations on graph 𝒢 and graph 𝒢_k successively are similar to simply increasing the number of GCN layers. To this end, we compare the effects of GCN layers with different number settings. The results are reported in Fig. <ref> From Fig. <ref>, it can be seen that with the number of GCN layers increases, the performance will first increase with the increasing number of GCN layers and then drops when stacking more layers, which is consistent with the results observed in single-behavior methods LightGCN <cit.> and NGCF <cit.>. The best performance is obtained when the number of GCN layers is 2 in our experiments. 
§.§ Cold-start Problem (RQ4) The cold-start problem in recommender systems refers to the situation where a new user or item is added to the system, and there is insufficient historical data available to provide personalized recommendations. Multi-behavior recommendation is one approach to alleviate the cold-start problem by considering multiple behaviors data. Such behavior data may include rich information that can help better understand user preferences. In this section, we will verify the capabilities of our model to tackle this problem. We compare our MB-HGCN with two models, CRGCN and MBGCN, where CRGCN is the best baseline, and MBGCN designs a item-based scoring module to alleviate the cold start problem. To perform the study, we follow previous work <cit.> to randomly select 1,000 users from the test set as cold-start users and remove all of their buy behavior records from the training set. In addition, for other behaviors in the training set, we also remove the user-item pairs involved in buy behavior. These 1,000 users are simulated as the hard cold-start users with no buy behavior records. This process ensures that these 1000 users do not have any prior preference information about items, thus simulating them as hardcore cold-start users with no buy behavior records. We then train the model with the remaining records using the settings described in  <ref>. Finally, use the trained model to provide personalized recommendations for these 1,000 cold-start users. The experimental results are shown in Fig. <ref>. It can be observed that out MB-HGCN consistently outperforms CRGCN and MBGCN across all three datasets. Compared with CRGCN, the average improvement of our model are 19.94% and 35.30% on Tmall dataset, 18.62% and 20.75% on Beibei dataset and 11.28% and 22.08% on Jdata dataset in terms of HR@K and NDCG@K. This experimental result indicates that our model is able to better utilize multi-behavioral data to learn user preferences for the target behavior recommendation. This should be attributed to the design of hierarchical graph convolutional network, we learn user preferences from a coarse-grained global level to a fine-grained behavior-specific level in this design. Therefore, even if users do not have buy behavior, our model can still learn coarse-grained user preferences for the target behavior recommendation. In contrast, the sequential modeling of CRGCN fails to effectively learn random behaviors such as collect behavior that are uncertain to occur, resulting in suboptimal results. In addition, CRGCN shows a significant improvement compared to MBGCN due to its cascading design, which can effectively utilize the effect of cascading behaviors to refine user preferences, while the weighted aggregation strategy adopted by MBGCN may not be able to capture the complex interrelationships between behaviors. §.§ Embedding Learning Analysis (RQ5) In recommender systems, embeddings are commonly used to represent users. Each position in the embedding can be viewed as a potential interest feature for the user <cit.>, and these interest features collectively form the user's preferences. In our model, we first learn user preferences from a coarse-grained global level to a fine-grained behavior-specific level, and then adaptively aggregate relevant information from auxiliary behaviors based on their similarities. To explore the changes in user interests during this process, we visualize the user embeddings during the process. 
In this visualization, the darkness of color represents the importance of the feature, with darker colors indicating greater importance. The sum of all feature values in the embedding equals 1. Specifically, we randomly select one user from each of the Tmall, Beibei, and Jdata datasets, and display the first 8 positions of their global embeddings, behavior-specific embeddings and the final embeddings which used for the target behavior recommendation. The experimental results are shown in Fig. <ref>. Overall, the interest distribution exhibited by the global embeddings is relatively evenly distributed across all three datasets. Compare with the global embedding, the feature values of behavior-specific embeddings exhibit different changes. Taking the view behavior of Tmall dataset as an example, compared to the global embedding, the 0th, 1st, 3rd and 7th features in the view behavior-specific embedding have a darker color, indicating that users pay more attention to these features during browsing behavior. On the other hand, the 2nd, 4th, 5th and 6th features have a lighter color, suggesting that these features contribute less to user browsing behavior. This demonstrates that behavior-specific embedding refine and enhance the global embedding. Another interesting observation is that behavior-specific embeddings augment the degree of interest/disinterest (with darker colors becoming even darker and lighter colors becoming even lighter), without changing the properties of the feature (interested in becoming disinterested). This indicates that the global embedding can indeed represent users' coarse-grained preferences and further confirms that the behavior-specific embedding locally refines the global embedding. In addition, we observe that some behavioral features are consistent with global features (cart in Tmall dataset and collect and cart in Jdata dataset), which is due to the lack of user-item interaction records in the corresponding behaviors. It also confirms that MB-HGCN can address the cold-start problem to some extent, i.e., when there is no buy behavior, the model would retain the global embedding for recommendation. Finally, we adopt an adaptive aggregation strategy to obtain the final embedding for the target behavioral recommendation, which is obtained by aggregating features based on buy behavior. Taking the Tmall dataset as an example, the 0th and 3rd features in the final embedding are enhanced, while the 7th feature is slightly weakened, and other features with lower levels of interest (such as the 2nd, 5th, and 6th features) also adjusted to some extent. This result also validates the effectiveness of the adaptive aggregation strategy we proposed. § CONCLUSION In this work, we present a novel multi-behavior recommendation model named MB-HGCN, which can effectively exploit the multi-behavior information to learn user and item embeddings. In particular, a hierarchical graph network is designed to learn user preference from global to behavior-specific level. Moreover, two different aggregation strategies are applied to aggregate user and item embeddings learned from different behaviors. Extensive experimental results on three real-world benchmark datasets demonstrate the superiority of our model over the state-of-the-art MBR models. Further ablation studies verify the effectiveness of different components in our model. 
In the future, we plan to explore the relations among multi-behavior interactions in the embedding learning process, and conduct experiments on online systems with A/B testing to evaluate the performance of our proposed model.
http://arxiv.org/abs/2306.01484v1
20230602121737
Search for the Galactic accelerators of Cosmic-Rays up to the Knee with the Pevatron Test Statistic
[ "E. O. Angüner", "G. Spengler", "E. Amato", "S. Casanova" ]
astro-ph.HE
[ "astro-ph.HE" ]
The Pevatron Test Statistic (PTS) is applied to data from γ-ray observatories to test for the origin of Cosmic Rays (CRs) at energies around the knee of the CR spectrum. Several sources are analyzed within hadronic emission models. Previously derived results for RX J1713.7-3946, Vela Jr., and HESS J1745-290 are confirmed to demonstrate the concept, reliability, and advantages of the PTS. It is excluded with a significance of more than 5σ that the sources RX J1713.7-3946 and Vela Jr. are Pevatrons, while strong indications exceeding 4σ are found for excluding HESS J1745-290 as a Pevatron. The importance of resolving source confusion with high angular resolution observations in Pevatron searches is demonstrated using the PTS for the region containing the SNR G106.3+2.7 and the Boomerang nebula. No statistically significant conclusion with respect to Pevatron associations could be drawn for this region, for the diffuse γ-ray emission around the Galactic Center, or for the unidentified γ-ray sources LHAASO J2108+5157, HESS J1702-420A and MGRO J1908+06. Assuming the entire γ-ray emission from MGRO J1908+06 and the tail region of SNR G106.3+2.7 is hadronic, a statistical indication exceeding 3σ is found for the underlying proton spectrum to extend beyond 350–400 TeV as a power-law. This result can indicate that these sources are proton and helium Pevatrons, in which the accelerated particles contribute to the knee of the proton and helium spectra observed at Earth. Acceleration of particles — (ISM:) cosmic rays — gamma-rays: general — Methods: statistical § INTRODUCTION The Cosmic Rays (CRs) that enter the atmosphere of the Earth have now been investigated for more than a century after their first detection <cit.>, for which the 1936 Nobel Prize was awarded. As, for example, reviewed in <cit.>, the flux of CRs detected on Earth is dominated by protons, with helium being the second most abundant nucleus. The energy spectrum above ∼ 30 GeV up to the so-called "knee" is very well approximated by a power-law with spectral index -2.7, although significant deviations from this simple model have recently been detected. The "knee" is a prominent feature seen in the CR energy spectrum at ∼3 PeV energies, where the spectral index steepens significantly to ∼-3.0. Although some recent evidence exists that the knee might be below 1 PeV when only the combination of protons and helium nuclei is considered <cit.>, it is clear that, at least for heavier elements, the spectral steepening occurs at energies well above 1 PeV <cit.>. The origin of the knee has been debated ever since its first discovery <cit.>, with two interpretations being particularly popular. As reviewed in <cit.>, the first model identifies the knee energy with the maximum achievable energy of Galactic particle accelerators, while the second model proposes a connection between the knee and the maximum energy for which electrically charged particles are magnetically confined within the Galaxy. In addition to the origin of the knee, it remains to this date an open question whether the sites where CRs are accelerated up to or beyond the energy of the knee are within the Galaxy.
A Pevatron is in the following defined to be a source of CRs at energies around the knee of the CR spectrum. The localization of Pevatrons within the Galaxy would therefore positively decide the question of whether CRs are accelerated within the Galaxy up to the knee of the CR spectrum. From a theoretical side, multiple plausible astrophysical objects, with young remnants of Supernovae <cit.> above all, were proposed, as reviewed for example in <cit.>. However, no Galactic source showing firm evidence of hadronic acceleration to PeV energies and beyond has been identified to this date. The Pevatron Test Statistic (PTS), which offers a new approach to detect spectral signatures of Pevatrons, was recently introduced in <cit.> to estimate the sensitivity of the planned Cherenkov Telescope Array (CTA) to Pevatron sources. In this paper, the PTS <cit.> is applied for the first time to publicly available spectral data from different γ-ray observatories. The aim is to test whether the sources of the respective γ-rays are Pevatrons. The paper is structured as follows. Motivation for the stated definition of a Pevatron is briefly discussed in Sec. <ref>. The principle for the identification of Pevatrons by means of γ-ray spectra is discussed in Sec. <ref>, together with a brief assessment of the advantages of the PTS compared to other currently employed methods for the detection of Pevatrons. The calculation and interpretation of the PTS for public data from a selection of γ-ray sources is discussed in Sec. <ref>. The PTS profiles of Pevatron candidate sources are provided and discussed in Sect. <ref>. Finally, the conclusions are summarized in Sec. <ref>. § WHAT IS A PEVATRON? Two different definitions for a Pevatron are currently used in the literature and discussed in <cit.>. A Pevatron is defined in both cases as an astrophysical source in which individual particles are accelerated to energies beyond 1 PeV. However, in one case the name is reserved for hadronic accelerators while, in the other case, it is additionally used to denote leptonic accelerators. In the following, a Pevatron is defined to be a source of CRs at energies around the knee of the CR spectrum. This definition is briefly motivated and discussed in the following. The Tevatron, built at Fermilab <cit.>, was able to accelerate particles to TeV energies. This was indicated in the name 'Tevatron', which is a contraction of the metric prefix for the maximum achievable energy, and the Greek word 'tron' for 'tool'. Following this scheme, a Pevatron is literally a tool to accelerate particles to at least an energy of 1 PeV. The application of the term in astrophysics faces the problem that the astrophysical accelerators are not purposely used tools, but they are themselves the objects of study whose physical principles are under investigation. Instead, in recent astrophysical practice regarding Pevatrons, the maximum achievable particle energy of the accelerator is often considered to be eponymous. In this approach, an astrophysical Pevatron is an accelerator with a maximum energy of at least 1 PeV. This definition applies to accelerators of hadrons as well as electron accelerators such as the Crab nebula, which has been known for at least a decade to host PeV leptons (see e.g. <cit.> for a review) and from which photons with energies above 1 PeV were recently detected <cit.>. From a historical perspective, however, the term Pevatron is introduced in astrophysics to denote the putative sources of CRs at the knee of the CR spectrum. 
The focus here is not primarily on the maximum energy of the accelerator, but the introduction of the term Pevatron is justified by the presence of the knee in the CR spectrum which suggests a new physical effect on the scale of the Galaxy, as discussed in Sec. <ref>. As a consequence, the maximum energy of 1 PeV is not considered as the primary property of a Pevatron. Instead, a Pevatron is in the following defined to be a source of CRs with energies around the knee of the CR spectrum. The search for Pevatrons is then connected with the broader quest for the origin of CRs. In general, features in the CR spectrum might be related either to their acceleration or propagation <cit.>. As mentioned before, features exist in the CR spectrum detected on Earth also at energies lower than the knee, most notably a hardening observed in all nuclear species at around 300 GeV <cit.>. However, as testified by the differences between the spectra of primary and secondary nuclei <cit.>, these must be related to the physics of propagation in the Galaxy (see e.g. <cit.> for a detailed discussion). The knee is then the lowest energy feature that might be directly related to the properties of CR accelerators. For a long time, the general consensus has been in favour of the identification of this feature with the maximum energy achievable by CR protons in Galactic sources. The steepening observed at around 1 PeV would result from the superposition of the cutoffs of different CR elements, with heavier, less abundant elements reaching higher maximum energies thanks to the rigidity dependence of the acceleration mechanism. The above mentioned recent evidence for a knee at slightly lower energy than 1 PeV, when only protons and He nuclei are considered, does not change the picture much: the best estimate for the p+He knee is E_ knee(p+He)= 700 TeV <cit.>, less than a factor two different and still compatible with 1 PeV within the uncertainties. The conclusion is that, if the knee really is a signature of the maximum rigidity that Galactic accelerators can provide, the primary CR sources in the Galaxy must be able to accelerate particles at least up to 1 PeV. This definition has three important implications: 1: A Pevatron must accelerate hadrons. 2: Because the energy of the knee is, at least for heavy elements, well above 1 PeV, the maximum energy of a Pevatron must be much larger than 1 PeV. 3: It must be possible to explain the steepening of the CR spectrum at the knee by a combination of intrinsic properties of the Pevatron and propagation effects. § SEARCH FOR SPECTRAL SIGNATURES OF PEVATRONS WITH GAMMA-RAY OBSERVATORIES Deflection of charged particles by Galactic magnetic fields prevents direct localization of Pevatrons through CR measurements on Earth. Instead, indirect fingerprints of the presence of Pevatron activities must be searched for. Such fingerprints emerge from pp-interactions, namely the interactions of hadrons accelerated in a Pevatron with target material. The latter can easily be traced and determined from infra-red, sub-millimeter and radio observations <cit.> and it is an astronomical multi-messenger problem to detect the electrically neutral secondary particles, more concretely neutrinos and γ-rays, which are created in the pp-interactions. In the following, signatures of Pevatrons are searched for with a statistical test based on a hadronic model which reproduces the observed γ-ray emission, as discussed below in Sec. 
<ref>, and spectral data acquired from different γ-ray observations of various sources. The advantages of the method over other currently used search methods are discussed in Sec. <ref>. §.§ Spectral gamma-ray signatures of Pevatrons The differential energy distribution of accelerated hadrons, n(E_p), is in the following assumed to follow a simple power-law with spectral index Γ_P and an exponential cutoff at an energy E_cut, p, with sharpness described by the parameter β: n(E_p) ∼ E_p^-Γ_P exp(-(E_p/E_cut, p)^β) . The exact shape of the cutoff, namely the value of β, depends in principle on what limits the acceleration. Assuming that the main mechanism responsible for CR acceleration is Diffusive Shock Acceleration (DSA), as is the case for the most commonly invoked potential sources, such as SNRs <cit.> or Young Massive Star Clusters <cit.>, the most stringent limitation is usually provided by the size of the accelerator compared to the diffusion distance of the highest energy particles. This translates into the condition D(E_ max)=v_s L, where D is the diffusion coefficient, v_s is the shock velocity and L is the size of the accelerator (i.e. the radius of the SNR or of the wind termination shock in the case of a star cluster). Writing the diffusion coefficient as D(E)=D_0 E^δ, it is possible to show that the particle spectrum turns out to be the one in Eq. <ref> with β=δ <cit.>. In particular, an exponential cutoff is found for Bohm diffusion (δ=1), while sub-exponential cutoffs result from other diffusion models commonly adopted in astrophysics, such as Kolmogorov's (δ=1/3) or Kraichnan's (δ=1/2). Equation <ref> still provides a good description of the particle spectrum in scenarios that connect the maximum particle energy to magnetic field growth (see e.g.<cit.>). Current theories of efficient acceleration at shocks assume that the magnetic turbulence responsible for particle diffusion is self-generated by the particles being accelerated. As far as SNRs are concerned, in particular, the most common view is that achieving energies close to the knee is only made possible by the so-called non-resonant streaming instability <cit.>, induced by the particles at the instantaneous maximum energy leaving the accelerator. In these scenarios, the maximum energy is connected to the magnetic field growth, rather than limited by the system size (see e.g.<cit.>) and the instantaneous spectrum at the shock is usually assumed to be cut very sharply at E_ max, which would reflect the case of super-exponential cutoffs (β>1) in Eq. <ref>. The γ-ray emission Φ_γ(E) created in interactions of accelerated hadrons with ambient gas is calculated with naima package <cit.>, assuming the pp-cross section derived in <cit.>. In practice, a normalization of Eq. <ref> is calculated for a predicted γ-ray spectrum given a spectral index Γ_P and an energy cutoff E_cut, p. Instead of a direct normalization of the proton spectrum, the predicted γ-ray flux Φ_0 at an energy of E_γ=1 TeV is used as a normalization parameter for the hadron spectrum n(E_p)=n(E_p|E_cut, p, θ) where θ=(Γ_P,Φ_0). This convention simplifies the interpretation of the predicted flux in the context of γ-ray detectors. As an example, Fig. 
<ref> shows the predicted γ-ray spectrum resulting from a proton spectrum with spectral index of Γ_P=1.7, an energy cutoff of E_cut, p=300 TeV and differential flux of Φ_0=100 mCrab[Throughout the paper, Crab unit is assumed as the differential Crab flux at 1 TeV of 3.84 × 10^-11 cm^-2 s^-1 TeV^-1, taken from Table 6 of <cit.>] at 1 TeV. As discussed in more detail in <cit.> and <cit.>, the resulting γ-ray spectrum is itself well described by a power-law with index Γ_γ = ∼Γ_P-0.15 and sub-exponential cutoff. §.§ The PTS and other criteria for the Pevatron detection Given a set of observational data D, the best fit parameters E^*_cut, p and θ^* for the hadronic emission model discussed above in Sec. <ref> can be determined through the maximization of a likelihood function L(E_cut, p, θ|D). In the following, only flux data Φ(E_i) with errors σ(E_i) in energy bins E_i are analyzed, and the likelihood function is given by L(E_cut, p, θ|D)=-2∑_i (Φ_γ(E_i|E_cut, p, θ)-Φ(E_i)/σ(E_i))^2 . The PTS PTS=-2lnL̂(E_cut, p=1 PeV,θ|D)/L̂(E_cut, p, θ|D) , is introduced in <cit.> as a likelihood ratio test for the deviation of the energy cutoff E_cut, p in Eq. <ref> from 1 PeV. L̂(E_cut, p,θ|D) is the maximum of the likelihood over all values for E_cut, p and θ, including negative values for E_cut, p, and L̂(E_cut, p=1 PeV,θ|D) is the maximum likelihood when the cutoff energy is fixed to the Pevatron threshold of 1 PeV. The statistical significance of the PTS is calculated as S_PTS=sign(E_cut, p^*-1 PeV)√(PTS) . For S_PTS<-5, the association of a γ-ray source with a Pevatron can be excluded with a CL corresponding to at least 5σ. If, on the other hand, S_PTS≥ 5, a Pevatron detection can be claimed with a CL corresponding to at least 5σ under the assumption that the detected γ-ray emission is generated in interactions of hadrons with target nuclei. In other words, S_PTS>5 ensures with a CL corresponding to at least 5σ that the underlying hadron spectrum goes well beyond 1 PeV as power-law without showing any signs of a spectral cutoff, and consequently, such a source contributes to the CR spectrum at energies above 1 PeV. For |S_PTS|<5, the data are insufficient to decide whether or not the γ-ray source is associated with a Pevatron, and typically more data must then be acquired to make a decision based on the PTS possible. In the hypothetical case where the true cutoff energy E_cut, p is equal to the threshold energy of 1 PeV, the PTS is by definition insensitive given finite data. In practice, the PTS can only detect a Pevatron when the true cutoff energy is much larger than 1 PeV. This reflects the Pevatron definition discussed in Sec. <ref> according to which a Pevatron must accelerate hadrons to energies well above 1 PeV. More information on the interpretation of S_PTS and the connection between S_PTS and the PTS can be found in <cit.>. Two alternative methods, the detection significance of the γ-ray emission above 100 TeV and the 95% CL lower limit of the hadronic energy cutoff, are currently used in the literature to claim evidence for a Pevatron detection. The claim for the presence of a Pevatron based on a lower limit on the energy cutoff inferred to be larger than 1 PeV faces the problem that the confidence level, typically 95% or less[The z-score of 95% C.L. is ∼1.96.], is much smaller than the confidence level corresponding to 5σ, which is typically requested for a detection. On the other hand, detection of a significant (i.e. 5σ) cutoff in the hadronic energy spectrum well below 1 PeV (i.e. 
E_cut, p≪1 PeV) can serve as strong evidence against a potential association between a γ-ray source and a Pevatron. This asymmetry between the confidence level used for exclusion and confirmation of an association between a γ-ray source and a Pevatron is unsatisfactory, in particular when one deals with such an important claim as the detection of the sources of the highest energy CRs in the Galaxy, which certainly deserves to be made with a high confidence level. Similarly, the association between a γ-ray source with significant (> 5σ) emission at energies greater than 100 TeV and a Pevatron is problematic. Figure <ref> shows a γ-ray spectrum predicted for pp-interactions given a true hadronic cutoff energy of E_cut, p=300 TeV, i.e. for a hadronic accelerator which is not a Pevatron, together with the sensitivities of the Large High Altitude Air Shower Observatory (LHAASO) and the planned SWGO. It is obvious that both, SWGO and LHAASO, would be able to detect significant γ-ray emission from this simulated source above energies of 100 TeV, although this source is not associated with a Pevatron. The problem with this method is that a Pevatron is identified with cumulative excess events above 100 TeV and independent of the spectral shape, which does not guarantee that the cut-off energy is well above 1 PeV. The PTS method avoids both problems: confirmation and rebuttal of the association between a γ-ray source and a Pevatron are assessed with the same confidence level and the spectral shape is employed to ensure that the hadron energy cutoff is well above 1 PeV when detection is claimed. Figure 8 of <cit.> shows the relation between PTS and 95% CL lower limit on the proton spectral cutoff, together with the significance of E>100 TeV detection obtained from simulations of synthetic Pevatron sources. It was shown that these properties are strongly correlated and requirements for both of these alternative methods are well satisfied when the condition S_PTS≥ 5 is satisfied. § APPLICATION TO DATA The PTS is in the following calculated and interpreted for selected γ-ray sources based on public spectral data. As a first test, the PTS is calculated for three sources that are not considered to be Pevatrons, and results already established are confirmed with the new PTS concept. The discussion starts with the two shell-type SNRs, Vela Jr. and RX J1713.7-3946. Afterwards, HESS J1745-290, which is spatially coincident with the compact radio source Sgr A* at the center of the Galaxy, is discussed. For these three sources, results that were derived previously by other means and proving the non-Pevatron nature, are confirmed with the PTS. In a second step, it is shown that the PTS cannot decide whether the diffuse γ-ray emission from the vicinity of the Galactic Center (GC) is emitted by interactions of hadrons which are accelerated in a Pevatron. Both Pevatron <cit.> and non-Pevatron <cit.> conclusions were previously drawn for the diffuse γ-ray emission based on the derived lower limit on the hadronic cutoff energy. Together with the previous examples, this discussion shows the ability of the PTS to either decide whether a γ-ray source is a Pevatron at a given significance level or to quantify that a decision is impossible based on the available data, with the same unified criterion. The PTS is applied to the recently detected ultra-high-energy (UHE, E>100 TeV) Pevatron candidate γ-ray sources of LHAASO J2226+6057, MGRO J1908+06, LHAASO J2108+5157, and HESS J1702-420A. 
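Before turning to the individual regions, the fitting and PTS machinery described above can be condensed into a short numerical sketch. The code below is illustrative only and is not the ecpli/gammapy implementation used for the results in this paper: the function gamma_ray_flux is a crude stand-in for the naima-based pp → γ prediction, the optimizer settings and starting values are arbitrary, and details of the full analysis (for example allowing negative cutoff energies in the free fit) are omitted. All function and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def proton_spectrum(E_p, norm, gamma_p, E_cut_p, beta=1.0):
    """Proton spectrum of Eq. (1): power law with (sub-)exponential cutoff."""
    return norm * E_p ** (-gamma_p) * np.exp(-(E_p / E_cut_p) ** beta)

def gamma_ray_flux(E_gam, phi0, gamma_p, E_cut_p):
    """Crude stand-in for the naima-based pp -> gamma-ray spectrum:
    photon index ~ gamma_p - 0.15 with a sub-exponential cutoff tied to
    E_cut_p (illustrative only).  Energies in TeV, flux normalized at 1 TeV."""
    return (phi0 * E_gam ** (-(gamma_p - 0.15))
            * np.exp(-np.sqrt(E_gam / (0.1 * E_cut_p))))

def chi2(theta, E, flux, sigma, E_cut_fixed=None):
    """Fit statistic of Eq. (2), a chi-square (i.e. -2 ln L up to a constant)."""
    phi0, gamma_p = theta[0], theta[1]
    E_cut_p = theta[2] if E_cut_fixed is None else E_cut_fixed
    model = gamma_ray_flux(E, phi0, gamma_p, E_cut_p)
    return np.sum(((model - flux) / sigma) ** 2)

def pts_significance(E, flux, sigma, E_thr=1000.0):
    """PTS (Eq. 3) and its significance S_PTS (Eq. 4) for a threshold E_thr in TeV."""
    x0 = [flux[np.argmin(np.abs(E - 1.0))], 2.0, 300.0]   # rough starting point
    free = minimize(chi2, x0, args=(E, flux, sigma), method="Nelder-Mead")
    fixed = minimize(chi2, x0[:2], args=(E, flux, sigma, E_thr), method="Nelder-Mead")
    pts = fixed.fun - free.fun          # -2 ln [ L(E_cut = E_thr) / L(free fit) ]
    return np.sign(free.x[2] - E_thr) * np.sqrt(max(pts, 0.0))
```

Given arrays of bin energies (in TeV), flux points, and the conservative per-point errors described in the data analysis subsection below, pts_significance returns S_PTS for the 1 PeV threshold; the same routine is reused later for the threshold profiles.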
The potential of using high angular resolution observations to resolve source confusion and locate Pevatrons is demonstrated and explored based on the PTS analysis of the LHAASO J2226+6057 region. The joint spectral analysis of the LHAASO J2226+6057 region using PTS results in a significant rejection of the Pevatron hypothesis, when source confusion cannot be resolved. However, by using spectral data from high angular resolution observations to address source confusion, a sub-component of this region emerges as one of the best Pevatron candidate. In any case, additional spectral data are needed for these sources to decide whether they are associated with hadronic Pevatrons that can explain the 3 PeV knee feature. Finally, the PTS profiles of the Pevatron candidates are extracted. It is argued that the proton spectra underlying the observed γ-ray emission from MGRO J1908+06 and the tail region of SNR G106.3+2.7 can reach a marginal S_PTS significance level of 3σ at energies around 350–400 TeV (and 5σ at 150–200 TeV). Assuming that the knee of proton (and helium) spectra observed from the Earth is below 1 PeV (i.e. ∼700 TeV <cit.>), then the fact that these sources have reached marginal S_PTS levels suggests that they could be responsible for contributing to the knee of the proton spectra. Therefore, it is possible that these sources are proton Pevatrons, although the evidence for this contribution is only marginally significant. §.§ Data analysis In the following sections, public spectral γ-ray flux data from observations of different sources are analyzed. For each source, a flux dataset contains estimates of the differential γ-ray flux, dN/dE, at different energies. Flux measurements inferred from data acquired with different instruments are analyzed jointly in the framework of gammapy <cit.>. Where asymmetric statistical errors, [σ_-,σ_+], are reported for a γ-ray flux point, a conservative symmetric statistical error σ_stat=max{σ_-, σ_+} is used. Additionally, a systematic error σ_sys on each differential flux point is considered. The systematic error is assumed to scale proportionally to the estimated flux, i.e. σ_sys=ξ dN/dE, where ξ can be considered as the minimal relative error that is considered for each flux point. In the following, all conclusions are based on a conservative relative flux error of ξ≥ 20%. Analyses with ξ<20% are only discussed to the purpose of illustrating the dependence of the analysis on the assumed value of systematics error, ξ. The final conservative error on each differential flux point is calculated as σ=max{σ_sys, σ_stat}. For each considered γ-ray source, the respective flux dataset is fitted to a hadronic γ-ray emission model as described in Sec. <ref>, and the best-fit parameters are derived from χ^2-minimization. Lower limits on the hadronic cutoff energy E_cut, p and the significance S_PTS of the PTS are derived as detailed in <cit.> with the ecpli package <cit.>. The reported p-values are derived from a χ^2 test of the best fit model against the spectral data, with the error σ defined as above. The γ-ray flux, ϕ_true, emitted by a source is attenuated due to the effect of pair creation on interstellar radiation fields, i.e. the process γγ→ e^+e^- also known as γγ-absorption. Following <cit.>, it is assumed that the probability 1-P for a γ-ray to be absorbed due to pair creation within the Galaxy is smaller than 10% for γ-ray energies below 100 TeV. 
The relative correction to the observed flux due to γγ-absorption, (ϕ_true-ϕ_obs)/ϕ_obs=1/P-1 (being ϕ_obs=ϕ_true-(1-P)ϕ_true=Pϕ_true) , is therefore smaller than the considered minimum relative error of ξ=20% on the flux, when only γ-ray flux data for energies below 100 TeV are used. This applies in the following to the analysis of data for Vela Jr., RX J1713.7-3946, the GC region, and HESS J1702-420A. As argued in Sec. <ref>, Sec. <ref> and Sec. <ref>, the effect of pair creation can also be neglected for the considered data from the sources LHAASO J2226+6057, MGRO J1908+06 and LHAASO J2108+5157, respectively. §.§ Rejecting Pevatron hypotheses: The Supernova Remnants RX J1713.7-3946 and Vela Junior RX J1713.7-3946 and Vela Junior are two sources associated with shell-type γ-ray emitting SNRs. Despite the constraints on the mean target gas density, purely hadronic emission models as described in Sec. <ref> are used in the following to model the γ-ray emission detected from these two SNRs. This is motivated by the putative presence of dense matter clumps in the remnants' surroundings, as detailed in <cit.> for Vela Jr. and in <cit.> for RX J1713.7-3946. Figure <ref> shows the γ-ray spectral data for the two remnants from <cit.> for RX J1713.7-3946, and from <cit.> for Vela Jr. A minimum relative flux error of ξ=20% is assumed for all flux points seen in Fig. <ref>. The best fit γ-ray spectra resulting from the assumed hadronic emission model are shown as blue solid lines, while the red lines, shown for comparison, are the best fit γ–ray spectra when the particle population energy cutoff E_cut, p is fixed to 1 PeV, i.e. when the sources are modeled as Pevatrons. The figure shows that the fits of the data within a Pevatron model are clearly disfavoured. Analysis results obtained for these two remnants are summarized in Tab. <ref>. The results for RX J1713.7-3946 are shown in the first six rows of Tab. <ref>, which differ in the analyzed energy interval and systematic errors taken into account. In the analysis summarized with 20% systematics (3^rd row), where only data from H.E.S.S. is used, S_PTS=-5.2 is inferred. This result already corresponds to a rejection of the Pevatron hypothesis for RX J1713.7-3946 within the considered hadronic emission model with a significance greater than the 5σ level. A more robust rejection of the Pevatron hypothesis with a significance of S_PTS=-14.5 is possible when data from Fermi is considered in addition to data from H.E.S.S. (6^th row). As it can be seen from the table, the level of systematics has a strong influence on the obtained S_PTS values, reflecting in general their importance for the search of Galactic Pevatrons. A preference for a break in the energy spectrum of the hadronic particle population for RX J1713.7-3946 is found in <cit.>. Arguments for the presence of a hadronic energy break as a result of dense clumps in the remnants environment are discussed in <cit.>, following <cit.> and <cit.>. Assuming a hadronic particle population with an energy break at E_break=1.4 TeV, the best-fit values for the hadronic energy cutoff E_cut, p and the two spectral indices at energies below and above the energy break found in <cit.> are confirmed within errors for ξ=20% when data from H.E.S.S. and Fermi are fit jointly. Additionally, the Pevatron hypothesis can still be rejected with a significance of S_PTS=-6.4. 
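For reference, the broken power-law variant of the proton spectrum used above for RX J1713.7-3946, namely an energy break at E_break ≈ 1.4 TeV with different indices below and above the break followed by an exponential cutoff, can be written down in the same illustrative style; this is only a sketch of the parametrization with hypothetical names, not the ecpli implementation.

```python
import numpy as np

def proton_spectrum_broken(E_p, norm, gamma_1, gamma_2, E_break, E_cut_p, beta=1.0):
    """Broken power law with (sub-)exponential cutoff; gamma_1 and gamma_2 are
    the spectral indices below and above E_break (all energies in TeV)."""
    index = np.where(E_p < E_break, gamma_1, gamma_2)
    # Rescale the high-energy branch so the two power laws join continuously at E_break.
    scale = np.where(E_p < E_break, 1.0, E_break ** (gamma_2 - gamma_1))
    return norm * scale * E_p ** (-index) * np.exp(-(E_p / E_cut_p) ** beta)
```

Swapping this function for the simple cutoff power law in the earlier sketch leaves the PTS bookkeeping unchanged; only the number of free parameters in the fit grows.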
Similarly, in the case of Vela Jr., the addition of data acquired with Fermi allows increasing the significance of the Pevatron hypothesis rejection from -4.6σ, when only H.E.S.S. data with minimal relative flux error ξ=20% are considered, to -7.2σ. The best-fit values for Γ_p and E_cut, p derived in <cit.> for Vela Jr agree within systematics with the values listed in the last row of Tab. <ref>. As discussed, due to their age and the presence of a cutoff at TeV energies in the γ-ray spectrum, Vela Jr. and RX J1713.7-3946 are typically not believed to be Pevatrons at present times within simple hadronic models. The PTS method confirms this idea with high statistical significance, and, moreover, can quantify the significance of rejection in a straightforward way. A different but very important question is whether these sources were Pevatrons earlier on during their evolution. If this were the case, signatures of the past acceleration of particles to PeV energies might be possible to find by looking at clouds in the source vicinity <cit.>. Using models for particle acceleration throughout the history of the respective source and particle propagation in the source vicinity, the PTS can also be used to investigate these questions. Appropriate data to carry out such a study will become available with the upcoming generation of high sensitivity, and especially high angular resolution IACTs. §.§ The Galactic Center Region Observations of the region around the center of the Galaxy across the electromagnetic spectrum have revealed a very complex astrophysical environment. The compact radio source Sagittarius A* (Sgr A*) is found to be spatially coincident with the dynamic center of the Galaxy, and is frequently associated with a supermassive black hole <cit.>. Observations of this region with the MeerKAT radio telescope were discussed in <cit.> and revealed many SNR structures which can act as potential CR accelerators. The possible presence of a Galactic Pevatron in this region is discussed in <cit.>. A review of the research status and further references can be found in <cit.> and, specifically for the γ-ray emission from the Galactic Center (GC) region, in <cit.>. The following discussion is limited to VHE γ-ray data above energies of ∼100 GeV, where measurements with multiple instruments and independent data analyses are publicly available. The analysis of data acquired at energies below 100 GeV with the Fermi satellite would require a careful consideration of large systematic errors <cit.> and the putative 'GeV excess' <cit.>, therefore it is not included in the analysis. Spectral data from three different regions, as shown in Fig. <ref>, are in the following considered. The first region is the pointlike source HESS J1745-290, shown with the black circle in Fig. <ref>, and frequently associated with Sgr A*, although other counterparts are also being discussed <cit.>. Spectral data for this source are available from three instruments <cit.> and shown in the upper panel of Fig. <ref>. In addition to the pointlike source HESS J1745-290, the significant detection of diffuse γ-ray emission around the GC is reported in <cit.>. Two different sub-regions for the diffuse γ-ray emission in the vicinity of the GC are considered in the following. The first sub-region is the 'GC ridge', defined by longitude |l|<1^∘ and latitude |b|<0.3^∘, excluding known γ-ray sources. This region is shown by the white rectangle in Fig. <ref>. 
Spectral data for the 'GC ridge' region are reported in <cit.>, and shown in the lower left panel of Fig. <ref>. The second sub-region is the 'GC Pacman', defined in <cit.> as the annulus around the GC with inner and outer radii of 0.15^∘ and 0.45^∘ respectively, excluding again known γ-ray sources. Spectral γ-ray data for this sub-region, shown by the red annulus in Fig. <ref>, are discussed in <cit.>, and shown in the lower right panel of Fig. <ref>. Based on an inferred lower limit of ∼400 TeV at 95% CL[In <cit.>, the 95% CL lower limit of 1 PeV is derived for this region taking into account Galactic absorption effects.], the possible presence of a Pevatron in this sub-region is discussed in <cit.>. Empirically, the diffuse γ-ray emission in the vicinity of the GC exhibits a strong spatial correlation with molecular clouds <cit.>, which suggests a hadronic origin. A connection between the diffuse γ–ray emission observed towards the vicinity of the GC and previous phases of enhanced acceleration of hadrons by the SMBH associated with Sgr A* is, for example, discussed in <cit.>. An alternative model, where young stellar clusters in the vicinity of the GC accelerate hadrons, is presented in <cit.>. In the following, only pure hadronic models for the diffuse γ-ray emission from the 'GC ridge' and the 'GC Pacman' regions as well as the central source HESS J1745-290 are considered. Alternative models for the origin of the diffuse γ-ray emission and the central source HESS J1745-290 are summarized in <cit.>. Systematic errors on the flux normalization and the spectral index are estimated as 15% and 0.1, respectively, for spectral data derived from H.E.S.S. observations <cit.>. For spectral data derived from observations with VERITAS, a 40% systematic error on both the flux normalization and the spectral index are estimated in <cit.>. In the present analysis, we make the following conservative assumptions: an estimated relative uncertainty ξ=20% is associated to each data point from H.E.S.S. and MAGIC, while ξ=40% is assumed for VERITAS data. The upper panel of Fig. <ref> shows the three spectral measurements for the point-like source HESS J1745-290. The spectral data inferred from all different observatories are compatible within the assumed errors. The fit results of the spectral data to the hadronic emission model described in Sec. <ref> are summarized in Tab. <ref>. HESS and MAGIC data immediately provide a strong indication towards the rejection of the Pevatron hypothesis, both considered separately and in a combined manner. The combination of data from HESS, MAGIC, and VERITAS leads to improved significance of S_PTS=-4.1 within systematic errors, and therefore to a rejection of the Pevatron hypothesis for the central source HESS J1745-290. A significant spectral cutoff feature was detected in the γ-ray spectrum of HESS J1745-290 at about 10 TeV <cit.>, consequently the γ-ray emission is not expected to be the result of a Pevatron activity.  The PTS analysis of the region can confirm this result, providing a quantitative rejection level of the Pevatron hypothesis. Table <ref> summarizes the results of best-fit hadronic γ-ray emission models to the GC Pacman and the GC Ridge data available. Again, a minimal relative flux error of ξ=20% is assumed for data from H.E.S.S. and MAGIC, while ξ=40% is used for the analysis of spectral data from VERITAS. The PTS leads to S_PTS=0.4 for the GC Pacman region, and to S_PTS=-2.3 for the GC ridge region. 
Our conclusion is that the data are insufficient to assess the Pevatron hypothesis for both diffuse emission regions based on the PTS. Deeper observations of this region with future instruments, especially at >100 TeV energies (i.e. with the future SWGO experiment), are needed in order to reject or confirm the Pevatron hypothesis for the diffuse γ-ray emission in the vicinity of the GC. §.§ LHAASO J2226+6057 and MAGIC Tail Emission: The Boomerang PWN and SNR G106.3+2.7 The LHAASO collaboration reported the significant detection of UHE γ rays from the direction of the source LHAASO J2226+6057 at energies above 100 TeV in <cit.>. Together with previous measurements with different instruments <cit.>, spectral γ-ray data from GeV to several hundred TeV energies are available for this region. The region was first studied by VERITAS <cit.> and secondly by HAWC <cit.>. The joint VERITAS-HAWC spectrum can be described well by a power-law with a spectral index of ∼2.3, without showing any sign of a spectral cutoff up to 180 TeV. The 90% C.L. spectral cutoff lower limits on the γ-ray and proton spectra are found to be 120 TeV and 800 TeV, respectively. Thanks to their improved angular resolution, the recent results from the MAGIC Collaboration <cit.> provided for the first time clear evidence for the existence of two emission components in the region, while the data from other experiments did not show any hint for separate components. The soft component, called 'head', has a spectral index of Γ_H = 2.12 ± 0.12, while the spectral index of the hard component, called 'tail', is found to be Γ_T = 1.83 ± 0.10. The best-fit positions of the head and tail components can be statistically separated from each other, having their emissions centered at RADEC coordinates of (337^∘.13, 61^∘.10) and (336^∘.72, 60^∘.84), respectively, and a spatial extensions of 0.16^∘ <cit.>. Two different astrophysical objects, SNR G106.3+2.7 and the Boomerang PWN, have been discussed as plausible sources of the observed γ-ray emission. The distance to SNR G106.3+2.7 is estimated to be less than 1 kpc <cit.>. As discussed in <cit.>, the VHE emission seen by VERITAS is centered near the peak of a dense ^12CO region which suggests a hadronic origin of the emission. The acceleration of particles by SNR G106.3+2.7 is discussed in <cit.>. However, as for example noted in <cit.>, SNR G106.3+2.7 is older than 3.9 kyrs and therefore unlikely to accelerate particles to PeV energies. An alternative hadronic origin of the emission powered by the Boomerang PWN is discussed in <cit.>. The multi-wavelength investigation of the emission from the tail region suggests a hadronic origin, while the nature of the emission mechanism from the head region can be both leptonic or hadronic <cit.>. Given the spatial proximity of SNR G106.3+2.7 and following <cit.>, the attenuation of the γ-ray spectrum due to pair creation is expected to be much smaller than 10%. Within the assumed systematic error, the effect of γ-ray attenuation can therefore be neglected. In order to demonstrate the power and effect of resolving source confusion in Pevatron searches, analyses of two different datasets, one for the entire region (LHAASO J2226+6057) covering both the SNR G106.3+2.7 and the Boomerang PWN, and the other for the tail region only, are performed. Figure <ref> (left) shows γ-ray data from the entire region including both head and tail regions, together with the best fit hadronic emission model shown in blue. 
The data acquired with Fermi <cit.>, VERITAS <cit.>, Tibet-ASγ <cit.> and LHAASO <cit.> were used for the analysis of this emission region. As discussed in <cit.>, spectral data from VERITAS observations in Fig. <ref> are scaled by a factor of 1.62 to adjust for the differences in the integration radius between the different analyses. On the other hand, Fig. <ref> (right) shows γ-ray emission only from the tail region. Energy-dependent morphology investigation of Fermi data shows that the high energy γ-ray emission above 10 GeV is centered at RADEC coordinates of (336^∘.71, 60^∘.90) <cit.>, while the UHE emission from the direction of LHAASO J2226+6057 above 100 TeV is centered at RADEC coordinates of (336^∘.75, 60^∘.95). Fermi and LHAASO emission are therefore found to be coincident with the reported emission from the tail region. Furthermore, <cit.> discussed that the contribution of head emission to the total flux above 10 TeV is below 37.1%. Using the power-law spectral models for head and tail regions given in <cit.>, this contribution can be calculated as 22.6% above 50 TeV and 19.2% above 100 TeV. In order to ensure that possible contamination coming from the head region is still within our minimum relative error of ξ=20%, only the LHAASO spectral points above 100 TeV, together with Fermi and MAGIC tail data, are taken into account in the joint fit shown in Fig. <ref> (right). Quantitative results for the fit of the hadronic emission model described in Sec. <ref> to the available spectral data for the entire region and tail region are summarized in Tab. <ref>. For the entire region, assuming a single emission component, the combination of data from Fermi, VERITAS, Tibet-ASγ, and LHAASO results in S_PTS=-5.2. In this case, it is therefore excluded with a statistical significance of more than 5σ that the source associated with LHAASO J2226+6057 is a Pevatron. The best-fit energy cutoff of the hadronic particle population is E_cut, p=(327±60) TeV together with the 95% CL lower limit of 241 TeV. Table <ref> for the LHAASO J2226+6057 region also highlights the importance of the combination of data over a wide range of energies. With only data from one of the considered experiments, a decision on the Pevatron hypothesis based on the PTS is impossible, while combining the different data sets can results in significant rejection. On the other hand, the fit of the hadronic emission to the available spectral data for the tail region shown in Fig. <ref> (right) results in S_PTS=1.2 and the best-fit energy cutoff of the hadronic particle population is E_cut, p=(1750±878) TeV with the 95% CL lower limit on the hadronic cutoff energy of ∼820 TeV, which provides more promising Pevatron picture with respect to joint HAWC and VERITAS analysis. Based on the results obtained from the joint analysis of currently available γ-ray data for the tail region, it is therefore impossible to decide whether the source is a Pevatron contributing to the CR spectrum above 1 PeV, and further observations are needed. When an extended source model for the data acquired with Fermi is assumed, instead of a pointlike source model, and a hadronic emission model is fitted to otherwise unchanged data, the results obtained both for the entire and tail only regions do not change significantly (see Tab. <ref>). The importance of improved angular resolution in the hunt for Galactic Pevatrons is demonstrated in light of recent MAGIC results. 
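The head-region contamination fractions quoted above follow from integrating the two MAGIC power-law components above a given energy threshold. A minimal sketch of that estimate is given below; the spectral indices are those reported for the head and tail components, while the normalizations norm_head and norm_tail are placeholders that would have to be taken from the published spectral fits, so the returned numbers are illustrative rather than a reproduction of the 22.6% and 19.2% values.

```python
import numpy as np

def powerlaw_integral(norm, index, E_min, E_max=np.inf):
    """Integral of norm * E**(-index) from E_min to E_max (requires index > 1)."""
    upper = 0.0 if np.isinf(E_max) else E_max ** (1.0 - index)
    return norm * (E_min ** (1.0 - index) - upper) / (index - 1.0)

def head_fraction(E_thr, norm_head, norm_tail, gamma_head=2.12, gamma_tail=1.83):
    """Fraction of the integrated flux above E_thr contributed by the head component."""
    head = powerlaw_integral(norm_head, gamma_head, E_thr)
    tail = powerlaw_integral(norm_tail, gamma_tail, E_thr)
    return head / (head + tail)
```

Because the tail component is harder, this fraction decreases with increasing threshold, which is why restricting the joint fit to LHAASO points above 100 TeV keeps the contamination within the assumed ξ=20%.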
In the case when source confusion can not be resolved and the emission from the region is assumed to result from a single component (i.e. LHAASO J2226+6057), the joint data analysis results in a significant rejection of the Pevatron hypothesis with S_PTS=-5.2. On the contrary, when the source confusion can be resolved with high angular resolution observations and the emission can be separated into two components, the joint analysis leads to S_PTS=1.2 and a lower limit on the cutoff energy is 817 TeV, therefore indicating the source as one of the most intriguing Pevatron candidates. The future CTA observations of the tail region can indeed provide unprecedented angular resolution together with spectral data, especially between 10 TeV and 100 TeV, and therefore can lead to robust identification of the Pevatron nature of the tail region. §.§ The unidentified UHE source: MGRO J1908+06 One of the most promising Pevatron candidates is the unidentified source MGRO J1908+06. Both the LHAASO and HAWC Collaborations reported significant γ-ray emission above 100 TeV coming from the direction of this source <cit.>. Several astrophysical objects in the region can be responsible for the observed γ-ray emission. Two pulsars, PSR J1907+0602 and PSR J1906+0722, with Ė values of 2.8×10^36 erg/s and 1.0×10^36 erg/s, respectively, can produce leptonic emission. Moreover, there are also two SNRs, SNR G40.5-0.5 and SNR 3C397, and dense molecular clouds located in the emission region. Especially, the interaction between SNR G40.5-0.5 and dense molecular clouds located around the SNR, with gas densities ranging between [110, 280] cm^-3 (for a near kinematic distance of 3.7 kpc) and [260, 660] cm^-3 (for a far kinematic distance of 8.7 kpc), can give rise to hadronic emission. It was discussed in <cit.> that the multi-wavelength modelling of the emission suggests preferably a leptonic origin, while a hadronic origin cannot be excluded. The γ-ray data available for this region cover a wide energy range from a few tens of GeV to several hundred TeV, acquired from Fermi <cit.>, HESS <cit.>, HAWC <cit.> and LHAASO <cit.> observations. The source displays a single component with an extended morphology (>0.5^∘) in the HE-VHE domain, and remains extended even in the UHE domain (0.45^∘). The 1σ statistical uncertainties on the best-fit positions derived from different observations are shown in Fig. <ref>. One can see from the figure that all best-fit positions are compatible within 3σ uncertainties. In contrast to the case of SNR G106.3+2.7 discussed in Sect. <ref>, the recent observations taken with HESS telescopes, reaching up to a total live time of 80 h and providing relatively good angular resolution compared to the other experiments (see Fig. <ref>), were not sufficient to resolve more than a single component or any energy-dependent morphology in the region, leaving the hotspot structures seen in the data still in agreement within uncertainties <cit.>. Consequently, the connection between the observed GeV and >100 TeV emission remains unclear. In this section, two different assumptions are made in order to investigate the Pevatron nature of the observed emission, assuming pure hadronic origin. The first approach assumes that there is only one source in the region, therefore the Fermi GeV and UHE emission have the same origin, while the second approach assumes that there are two different origins responsible for the GeV and UHE emission. 
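In practice, the two assumptions correspond to two different joint datasets being passed to the same fitting machinery. A schematic of this bookkeeping, reusing the illustrative pts_significance helper sketched earlier, is shown below; the per-instrument tuples stand for the published flux points with the conservative errors of the data analysis subsection and are hypothetical names.

```python
import numpy as np

def joint_dataset(*datasets):
    """Concatenate (E, flux, sigma) tuples from individual instruments."""
    E, flux, sigma = (np.concatenate(arrs) for arrs in zip(*datasets))
    return E, flux, sigma

# Assumption 1: GeV and UHE emission share a single hadronic origin.
# E_joint, f_joint, s_joint = joint_dataset(fermi, hess, lhaaso)
# Assumption 2: only the VHE and UHE emission share a common origin.
# E_joint, f_joint, s_joint = joint_dataset(hess, lhaaso)
# s_pts = pts_significance(E_joint, f_joint, s_joint, E_thr=1000.0)
```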
Table <ref> summarizes the fit results obtained from different combinations of the available spectral data to the hadronic emission model. For the former case, assuming a single origin, joint analyses of combined Fermi, HESS (or HAWC), and LHAASO data result in significant rejection of the Pevatron hypothesis, regardless of whether HESS or HAWC data are used (see Tab. <ref>). Figure <ref> (top) shows available joint spectral γ-ray data using HESS (left) and HAWC (right) observations, giving S_PTS of -5.82σ and -7.15σ, respectively. On the other hand, assuming a common origin for the VHE and UHE emission and a different origin for the GeV emission, joint analysis of combined HESS (or HAWC) and LHAASO data does not allow one to reject or accept the Pevatron hypothesis, resulting in insignificant S_PTS of -1.40σ and -1.30σ, respectively, as shown in Fig. <ref> (bottom). As it was shown in Extended Data Fig. 6 of <cit.>, the attenuation of the γ-ray spectrum of LHAASO J1908+0621 due to pair creation is expected to be smaller than 20% for the energies below ∼600 TeV, which is compatible with the assumed systematic errors, and can therefore be neglected. Joint analyses of the currently available γ-ray data from this region show no hint of the acceleration of hadrons well beyond 1 PeV energies, consequently no signature for a possible contribution to the 3 PeV knee seen in the CR spectrum could be found in the data. However, given the number of hotspot structures seen in HESS observations of this region, it is possible that there are at least two (or more) sub-components contributing to the observed γ-ray emission. Similar to the case of SNR G106.3+2.7 discussed in Sect. <ref>, it is likely that at least one of possible sub-components can have hard spectra reaching up to energies above 100 TeV, producing UHE γ-ray emission detectable by LHAASO. Deep observations of this region with the future CTA experiment, covering energies from a few tens of GeV up to a few hundred TeV and with its superior angular resolution, can shed light on whether there is more than one source in the region, and pinpoint the origin of the UHE γ-ray emission. §.§ Two unidentified sources: LHAASO J2108+5157 and HESS J1702-420A Recent analyses of data acquired respectively with the LHAASO and HESS observatories resulted in the detection of two previously unknown γ-ray sources, LHAASO J2108+5157 <cit.> and HESS J1702-420A <cit.>. The latter source was detected as a sub-component of the bright H.E.S.S. source HESS J1702-420 <cit.>. The γ-ray energy spectra of both sources are compatible with power-law models, showing no clear indications for spectral γ-ray cutoff up to at least several tens of TeV. Therefore, both sources are considered as potential Pevatron candidates. A spatial correlation with molecular clouds, and consequently a hadronic origin of the observed γ-ray emission, is plausible for LHAASO J2108+5157 <cit.>. Based on work presented by <cit.>, <cit.> discuss the possibility that the γ-ray emission from LHAASO J2108+5157 may result from the interactions of hadrons accelerated in young stellar clusters. Figure <ref> (left) shows the available spectral data for LHAASO J2108+5157. The analysis of these data results in an insignificant PTS with S_PTS=-0.6, and a 95% CL lower limit of 102 TeV on E_cut,p, when a minimum relative flux error of ξ=20% is considered. 
Based on the currently available LHAASO data only, it is impossible to decide whether the source is a Pevatron or not, and further observations, especially at energies lower than 10 TeV, are needed. As a result of observations with the single Large Size Telescope (LST) of the planned Northern CTA observatory <cit.> that is already operating, 95% CL upper limits on the γ-ray flux towards LHAASO J2108+5157 were recently derived at energies above 500 GeV <cit.>. Figure <ref> (left) demonstrates that even these flux upper limits can be used to put constraints on the hadronic best-fit models based on the LHAASO data. In particular, the flux upper limits derived from observations with LST-1 (shown with blue markers in Fig. <ref> left) are in tension with the 68% CL prediction of the γ-ray emission from the extrapolation of the best-fit Pevatron model to lower energies (shown with red shaded area and dashed line in Fig. <ref> left). A fit of the available LHAASO data, constrained, in addition, to be compatible with the LST-1 flux upper limits, is shown by the blue line in Fig. <ref> (left). The significance of the PTS for this combination of data is S_PTS=-2.4, which can be interpreted as an indication that this source is not a Pevatron. However, additional data will be required for a decision with high statistical significance. As shown in Fig. <ref> (left), the sensitivity of the full Northern CTA Observatory after complete construction and acquisition of 50 h of data will allow for further constraining measurements, especially within the energy range from 1 TeV to 10 TeV. Figure <ref> (left) also clearly demonstrates that very important constraints on the nature of LHAASO J2108+5157 can be obtained from extensive observation with ASTRI Mini-Array. This array of 9 Cherenkov telescopes will be able to detect gamma-ray photons up to an energy of 300 TeV and will have an angular resolution ∼ 3' at the highest energies <cit.>, much better than currently available. Operations will start, with an initial layout of 3 telescopes, in early 2024, and then in the final configuration by the end of 2025, early 2026 (S. Scuderi, personal communication), with a delay of 4-6 months with respect to the timeline foreseen by <cit.>. Although LHAASO J2108+5157 is a rather faint source, being the search for Pevatrons one of ASTRI Mini-Array key science objectives <cit.>, 𝒪(200) hours deep exposure of this promising Pevatron candidate can be foreseen. As discussed in Sec. <ref>, the attenuation of γ-rays due to γγ absorption is neglected at energies below 100-TeV, given that its effects are within our assumed minimum uncertainty of ξ=20%. The spectral dataset for LHAASO J2108+5157 contains three points at energies above 100 TeV, with relative errors of 32% at 126 TeV, 144% at 200 TeV and 193% at 500 TeV. The previous conclusions regarding the PTS do not depend on the available spectral data above 100 TeV. The significance of the PTS leads to S_PTS=-0.2 when only LHAASO data at energies below 100 TeV are fitted, and S_PTS=-1.2 when the available flux upper limits from LST-1 are taken into account together with LHAASO E<100 TeV data. In addition to LHAASO J2108+5157, another γ-ray source, HESS J1702-420A <cit.>, without any clear counterpart below TeV energies, was recently discovered and is discussed as a Pevatron candidate. This new γ-ray source emerges as a sub-component of the previously known bright source HESS J1702-420 <cit.> at energies above ∼30 TeV. 
A hadronic emission model and the association with a Pevatron are discussed in <cit.> due to the presence of several molecular clouds detected along the line of sight and the γ-ray spectrum extending without indication of a clear spectral cutoff up to energies of at least 100 TeV. The available spectral data are shown in Fig. <ref> (right), for a minimum relative flux error of ξ=20%. Within the hadronic emission model described in Sec. <ref>, the best-fit index is found to be Γ_P=1.57± 0.18, which is compatible with the result derived in <cit.>. The lower limit on the hadron energy cutoff is 436 TeV (at 95% CL) and the PTS is insignificant (S_PTS=1). Similar to LHAASO J2108+5157, it is therefore impossible to decide based on the PTS and the available data whether HESS J1702-420A is associated with a Pevatron or not. Additionally, Fig. <ref> (right) shows the γ-ray flux sensitivities of two planned observatories in the Southern hemisphere. The figure suggests that the planned SWGO and the Southern CTA observatory will both allow probing the γ-ray flux predicted by the hadronic model that best fits the currently available data from HESS. Future SWGO observations of this region can provide very valuable E>100 TeV data, while CTA observations will allow probing the source spectrum down to sub-TeV energies with an unprecedented angular resolution. § PEVATRON TEST STATISTIC PROFILES OF PEVATRON CANDIDATE SOURCES The joint γ-ray data analyses of the Pevatron candidate sources presented in Sec. <ref> assume a Pevatron definition threshold of 1 PeV, as discussed around Eq. <ref> in Sec. <ref>. With this assumption, the obtained values of S_PTS quantify the statistical significance and corresponding CL for a putative underlying hadron spectrum to extend beyond 1 PeV as a power-law, without indication of a cutoff. In other words, S_PTS quantifies whether the source can contribute to the CR spectrum above 1 PeV. However, taking into account the available joint γ-ray spectral data, none of the sources discussed in the previous section robustly reaches a 5σ level for S_PTS. The Pevatron threshold, i.e. the E_cut, p term in the numerator of Eq. <ref> used for the calculation of S_PTS, can be modified to quantify the contribution of accelerated particles to the CR spectrum above a given energy threshold. In other words, S_PTS can be profiled to extract up to which energy threshold a significant contribution to the CR spectrum can be expected from a given source. As discussed in Sec. <ref>, there is evidence that the knee feature for proton and helium nuclei might be at energies around 700 TeV, i.e. lower than 1 PeV <cit.>. In this case a Pevatron threshold of ∼300 TeV could be sufficient for a source to contribute to the proton knee, namely to the highest energy protons accelerated in the Galaxy. Figure <ref> shows the threshold-energy-dependent profile of S_PTS for the sources discussed in this work which result in |S_PTS|<5 for a Pevatron threshold of 1 PeV. The profiles were extracted using Eq. <ref> for a set of Pevatron energy thresholds between 100 TeV and 1 PeV with a step size of 100 TeV. It can be seen from this figure that for MGRO J1908+06 (H+L, see Fig. <ref> bottom left) and the tail region of SNR G106.3+2.7 as seen by MAGIC (see Fig. <ref> right), a marginal significance level of 3σ at energies around 350-400 TeV, and a robust 5σ level at energies of 150-200 TeV, is reached.
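The threshold scan behind these profiles amounts to repeating the PTS calculation with the fixed cutoff in the numerator of Eq. <ref> moved from 1 PeV down to 100 TeV. A minimal sketch, reusing the illustrative pts_significance helper introduced earlier (again with hypothetical names, not the ecpli implementation), could look as follows.

```python
import numpy as np

def pts_profile(E, flux, sigma, thresholds_tev=np.arange(100.0, 1001.0, 100.0)):
    """S_PTS as a function of the assumed Pevatron threshold energy (in TeV)."""
    return {E_thr: pts_significance(E, flux, sigma, E_thr=E_thr)
            for E_thr in thresholds_tev}
```

For MGRO J1908+06 and the SNR G106.3+2.7 tail region such a scan yields the 3σ and 5σ crossing energies quoted above.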
Assuming that the underlying emission mechanism is hadronic, these results provide marginal evidence that the astrophysical objects responsible for the γ-ray emission seen from the direction of MGRO J1908+06 and from the tail region of SNR G106.3+2.7 in the MAGIC data analysis can contribute to the knee of the proton (and helium) spectra if the knee feature for these light elements is at energies around 700 TeV. Similarly, for the Pevatron candidate source HESS J1702-420A, a marginal 3σ level is reached for threshold energies around 200 TeV. Finally, the S_PTS profile for the GC Pacman region does not reach a 3σ level for energies above 100 TeV and is therefore less promising. However, there are currently no UHE data available for the GC Pacman region and HESS J1702-420A, and the spectral data to be acquired with future observations by SWGO are of key importance and can potentially increase the achieved S_PTS levels. § CONCLUSION In this work, a Pevatron is defined to be a source of CRs at energies around the knee of the CR spectrum. Based on this definition, the PTS is shown to be a unified metric for the confirmation and exclusion of an association between γ-ray sources and Pevatrons which exhibits clear advantages over other currently employed methods, and offers a new approach for the robust detection of Pevatrons. As demonstrated in this paper for multiple Galactic γ-ray sources, the method is simple to apply in practice, especially for isolated sources and resolved source components. With a statistical significance of more than 5σ, it is excluded that the two shell-type SNRs RX J1713.7-3946 and Vela Jr. are Pevatrons that can contribute to the knee feature seen at ∼3 PeV energies. Similarly, the Pevatron hypothesis for the Galactic central source HESS J1745-290 can also be excluded with a significance level of more than 4σ. The importance of using high angular resolution observations to resolve source confusion when searching for Pevatrons is demonstrated with the PTS analysis of the γ-ray emission region encompassing the SNR G106.3+2.7 and the Boomerang nebula, where source confusion is problematic. The PTS analysis results for the case when the region is considered as a single source and when it is resolved into two sources are compared to each other, leading respectively to S_PTS=-5.2σ and 1.2σ, while the corresponding 95% C.L. lower limits on the proton cutoff are found to be ∼240 TeV and ∼820 TeV, respectively. This clearly demonstrates that source confusion can lead to misleading total γ-ray spectra, possibly obscuring Pevatron signatures, and implies the critical importance of high angular resolution observations for Pevatron searches, especially at energies above 10 TeV. No statistically significant conclusion can be drawn for the unidentified sources LHAASO J2108+5157, HESS J1702-420A and MGRO J1908+06. However, it is argued that data from future observatories, such as CTA, the ASTRI Mini-Array, and SWGO, will help to decide whether these sources are Pevatrons. With the currently available data, we tried to determine up to what energies these sources can contribute to the CR spectrum. Assuming a purely hadronic origin of the γ-ray emission, we found that the parent proton spectra of MGRO J1908+06 and the tail region of SNR G106.3+2.7 can reach marginal PTS levels of 3σ at energies around 350-400 TeV, and even 5σ at energies around 200 TeV.
This result is a strong indication that these two sources are proton and helium Pevatrons, which likely contribute to the knee of the proton and He spectra around 700 TeV observed at Earth. § ACKNOWLEDGEMENTS E.O.A. acknowledges financial support by TÜBİTAK Research Institute for Fundamental Sciences. G.S. acknowledges financial support by the German Ministry for Education and Research (BMBF). S.C. acknowledges financial support from the Polish National Science Centre, grant DEC-2017/27/B/ST9/02272. E. A. acknowledges financial support by INAF under grant INAF-MAINSTREAM 2018 and PRIN-INAF 2019. This research has made use of the CTA instrument response functions provided by the CTA Consortium and Observatory, see <https://www.cta-observatory.org/science/cta-performance/> version prod5 v0.1 <cit.> for more details. This research has made use of the ASTRI Mini-Array sensitivity curve provided by the ASTRI Project <cit.>, see <cit.> for more details. We are grateful to Saverio Lombardi and Stefano Vercellone for their comments and guidance in relation to the ASTRI Mini-Array performance, and to Salvatore Scuderi for updates on the ASTRI Mini-Array timeline. We express our sincere gratitude to Heide Costantini, Kathrin Egberts, and Ulisses Barres de Almeida for their useful contributions and constructive feedback, which greatly enhanced the quality of the paper. Facilities: CTA, Fermi, HAWC, HESS, LHAASO, MAGIC, SWGO, VERITAS, ASTRI Mini-Array § DATA AVAILABILITY The data that support the findings of this study are openly available and taken from the respective publications, which are explicitly mentioned in the figures and text. § SOFTWARE The calculations are performed with the ecpli python package <cit.>, which uses the naima <cit.> and gammapy <cit.> python packages.
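As a pointer for readers wishing to reproduce this style of hadronic modelling, the sketch below shows how a proton spectrum with an exponential cutoff and the resulting π0-decay γ-ray flux could be set up with naima. All parameter values are illustrative placeholders rather than the fitted values of this work, and the call signatures are based on the public naima API and should be checked against the installed version.

import numpy as np
import astropy.units as u
from naima.models import ExponentialCutoffPowerLaw, PionDecay

# Parent proton spectrum: power law with an exponential cutoff (all values are placeholders).
protons = ExponentialCutoffPowerLaw(amplitude=1e36 / u.eV,  # dN/dE at the reference energy
                                    e_0=10 * u.TeV,         # reference energy
                                    alpha=1.6,              # spectral index
                                    e_cutoff=500 * u.TeV)   # proton cutoff energy under test

# Gamma rays from proton-proton interactions with ambient gas (pi0 decay).
pion = PionDecay(protons, nh=100 * u.cm**-3)                # nh: target gas density (placeholder)

# Predicted gamma-ray flux at Earth for an assumed source distance.
e_gamma = np.logspace(-1, 3, 50) * u.TeV
flux = pion.flux(e_gamma, distance=3 * u.kpc)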
http://arxiv.org/abs/2306.02991v1
20230605160241
Second-scale rotational coherence and dipolar interactions in a gas of ultracold polar molecules
[ "Philip D. Gregory", "Luke M. Fernley", "Albert Li Tao", "Sarah L. Bromley", "Jonathan Stepp", "Zewen Zhang", "Svetlana Kotochigova", "Kaden R. A. Hazzard", "Simon L. Cornish" ]
physics.atom-ph
[ "physics.atom-ph", "cond-mat.quant-gas" ]
[email protected] Joint Quantum Centre (JQC) Durham-Newcastle, Department of Physics, Durham University, Durham, United Kingdom, DH1 3LE. Joint Quantum Centre (JQC) Durham-Newcastle, Department of Physics, Durham University, Durham, United Kingdom, DH1 3LE. Joint Quantum Centre (JQC) Durham-Newcastle, Department of Physics, Durham University, Durham, United Kingdom, DH1 3LE. Joint Quantum Centre (JQC) Durham-Newcastle, Department of Physics, Durham University, Durham, United Kingdom, DH1 3LE. Department of Physics and Astronomy, Rice University, Houston, Texas 77005, USA. Department of Physics and Astronomy, Rice University, Houston, Texas 77005, USA. Department of Physics, Temple University, Philadelphia, Pennsylvania 19122, USA. Department of Physics and Astronomy, Rice University, Houston, Texas 77005, USA. Rice Center for Quantum Materials, Rice University, Houston, Texas 77005, USA. [email protected] Joint Quantum Centre (JQC) Durham-Newcastle, Department of Physics, Durham University, Durham, United Kingdom, DH1 3LE. Ultracold polar molecules uniquely combine a rich structure of long-lived internal states with access to controllable long-range, anisotropic dipole-dipole interactions. In particular, the rotational states of polar molecules confined in optical tweezers or optical lattices may be used to encode interacting qubits for quantum computation or pseudo-spins for simulating quantum magnetism. As with all quantum platforms, the engineering of robust coherent superpositions of states is vital. However, for optically trapped molecules, the coherence time between rotational states is typically limited by inhomogeneous light shifts. Here we demonstrate a rotationally-magic optical trap for RbCs molecules that supports a Ramsey coherence time of 0.78(4) seconds in the absence of dipole-dipole interactions. This extends to >1.4 seconds at the 95% confidence level using a single spin-echo pulse. In our magic trap, dipolar interactions become the dominant mechanism by which Ramsey contrast is lost for superpositions that generate oscillating dipoles. By changing the states forming the superposition, we tune the effective dipole moment and show that the coherence time is inversely proportional to the strength of the dipolar interaction. Our work unlocks the full potential of the rotational degree of freedom in molecules for quantum computation and quantum simulation. Second-scale rotational coherence and dipolar interactions in a gas of ultracold polar molecules Simon L. Cornish July 31, 2023 ================================================================================================== The rotational states of polar molecules, together with their controllable dipole-dipole interactions, may be used to encode and entangle qubits <cit.>, qudits <cit.>, pseudo-spins <cit.>, or synthetic dimensions <cit.>. So far, this capability has been exploited to study XY models in a range of geometries <cit.>, and to engineer iSWAP gates that prepare pairs of tweezer-confined molecules in maximally-entangled Bell states <cit.>. Such experiments rely upon the precise control of molecule position that comes from using optical lattices and tweezer arrays. However, spatially-varying and state-dependent light shifts in these traps generally produce a dominant source of decoherence, severely restricting the duration of coherent quantum dynamics. `Magic-wavelength' traps have been an invaluable tool in engineering atomic <cit.> and molecular <cit.> clocks that are insensitive to light shifts. 
The general method is to choose a trap wavelength such that the polarisabilities of the target states are the same. However, achieving long coherence for rotational states in ultracold molecules has proved difficult, due to the anisotropic interaction with the trap light. The resulting differential light shifts lead to unwanted shifts in the frequency of the rotational transition across the trap. The only implementation of a magic-wavelength trap for rotational transitions has been in fermionic ^23Na^40K molecules <cit.>. In this case, coherence was limited to ∼1 ms by inhomogeneities in the dc electric field that was also required as part of the scheme. Recent experiments using ^23Na^87Rb molecules in a near-magic optical lattice reported single-particle rotational coherence times of 56(2) ms <cit.>. Other attempts to produce rotationally-magic traps have sought to match polarisabilities by tuning either the polarisation <cit.> or intensity <cit.> of the trap light. Here, however, residual differential light shifts may still occur due to hyperfine couplings that are quadratic in intensity <cit.>. Microwave pulse sequences can be designed to minimise the effects of single-particle dephasing resulting from small residual light shifts or electric field inhomogeneity, for example. Most notably, spin-echo <cit.> or XY8 <cit.> sequences have been used. To date, the longest rotational coherence time reported without rephasing is 93(7) ms for single CaF molecules confined to optical tweezers with the polarisation set to a magic angle; this was extended to 470(40) ms using a spin-echo sequence <cit.>. In this article, we report second-scale rotational coherence times in a dilute gas of optically-trapped ^87Rb^133Cs molecules (hereafter RbCs). We engineer a magic-wavelength trap by tuning the frequency of the trap light in the vicinity of a forbidden molecular transition. In a state configuration without dipole-dipole interactions, we observe a Ramsey coherence time of 0.78(4) s that is limited primarily by the stability of the trap laser frequency. Introducing a single spin echo pulse, we observe no loss of rotational coherence over 0.7 s and estimate a minimum coherence time of >1.4 s at the 95% confidence level. We show that with all other sources of decoherence eliminated, dipolar interactions become the dominant source of decoherence for superpositions that generate oscillating dipoles. We control the strength of these interactions by changing the states forming the superposition. We demonstrate that the coherence time is inversely proportional to the strength of the resonant dipole-dipole interactions. We start by preparing a thermal gas of ultracold RbCs molecules in their lowest rotational state in an optical trap (see Methods). We typically produce ∼2400 molecules at a temperature of 1.5 μK, with an estimated peak density of 6×10^10 cm^-3. We use resonant microwave fields that couple to part of the molecule-frame dipole moment d_0=1.23 D <cit.> to coherently transfer the molecules between the rotational states shown in Fig. <ref>(a). We label the states used in this work by |0⟩≡(N=0, M_N=0), |1⟩≡(1,0), |1̅⟩≡(1,1), |2̅⟩≡(2,-1), and |2̂⟩≡(2,2). Here, N describes the rotational angular momentum, and M_N denotes the dominant projection along the quantisation axis. All these states have the same dominant nuclear spin projections of m_Rb=3/2 and m_Cs=7/2. For a complete description of the state compositions see Supplementary Section I.
Optical trapping relies upon light with intensity I interacting with the dynamic polarisability of the molecule α, such that there is a perturbation in energy -α I / (2ϵ_0 c) where ϵ_0 is the permittivity of free space and c is the speed of light. Because diatomic molecules are not spherically symmetric, the polarisability along the internuclear axis, α_∥, is different from that perpendicular to the axis, α_⊥, with the two polarisabilities arising from electronic transitions in the molecule with different symmetries <cit.>. This results in a molecular polarisability that depends on the orientation of the molecule and can be separated into an isotropic α^(0) and anisotropic α^(2) component such that α(θ) = α^(0) + α^(2) P_2(cosθ), where P_2 is the second Legendre polynomial. Here, θ is the angle of the laser polarisation with respect to the internuclear axis, α^(0)=(α_∥+2α_⊥)/3 and α^(2)=2(α_∥-α_⊥)/3. The presence of the anisotropic component leads to a polarisability and therefore light shifts that are dependent on the rotational angular momentum N, the projection along the quantisation axis M_N, and the angle between the trap laser polarisation and the quantisation axis <cit.>. To produce a rotationally magic trap we tune the value of α^(2) to be zero. This is achieved by trapping with light at a wavelength of ∼1145 nm, following a scheme proposed by Guan et al. <cit.>. We tune the laser frequency to be between transitions to the v'=0 and v'=1 vibrational states of the mixed b^3Π potential, as indicated in Fig. <ref>(b). Transitions to this potential are nominally forbidden from the X^1Σ^+ ground state, but may be driven due to weak mixing of b^3Π with the nearby A^1Σ^+ potential. Coupling to A^1Σ^+ components allows α_∥ to be tuned by varying the frequency of the trapping light, with poles in the polarisability occurring for each vibrational state in the b^3Π potential, as shown in Fig. <ref>(c). Meanwhile, α_⊥ remains nearly constant as the light is red-detuned by ∼100 THz from the bottom of the nearest ^1Π potential. By setting α_∥ = α_⊥ with the laser frequency, the polarisability of the molecule becomes isotropic such that α^(2)=0 and α^(0)=α_⊥. In Fig. <ref>(d,e) we show the effect of tuning the laser frequency on the optical potentials experienced by molecules in states |0⟩ and |1⟩. The magic condition, where the polarisability, and therefore the optical potential, is the same for molecules in either state, occurs at a laser detuning of ∼186 GHz from the transition to the v'=0 state. To identify the magic detuning experimentally, we perform Ramsey interferometry as shown schematically in Fig. <ref>(a). For a given pair of states, we fix the Ramsey time and measure the contrast of a Ramsey fringe as a function of the laser detuning (see Methods). We observe a peak in the fringe contrast when the trap light is tuned to be magic, as shown in Fig. <ref>(b,c), indicating that the coherence time for that particular combination of states has been maximised. There is a small ∼1 GHz variation in the magic detuning that depends upon the states chosen and the polarisation of the trap light; this is due to the light coupling to different rotational levels of the excited vibrational states <cit.>. The width of the feature we observe depends on the sensitivity of the differential light shift to the laser frequency, and is inversely proportional to the Ramsey time used. Tuning close to a molecular transition to access a magic wavelength could potentially lead to loss of molecules due to photon scattering. However, we find that our method is compatible with long trap lifetimes.
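As a quick consistency check on this decomposition (a worked rearrangement of the definitions just given, added here for clarity): with P_2(cosθ) = (3cos^2θ - 1)/2, setting θ=0 gives α(0) = α^(0) + α^(2) = (α_∥+2α_⊥)/3 + 2(α_∥-α_⊥)/3 = α_∥, while setting θ=π/2 gives α(π/2) = α^(0) - α^(2)/2 = (α_∥+2α_⊥)/3 - (α_∥-α_⊥)/3 = α_⊥. Tuning the laser such that α_∥ = α_⊥ therefore gives α^(2)=0 and α(θ) = α^(0) = α_⊥ for every angle θ, which is the rotationally isotropic (magic) condition exploited in the text.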
To estimate the scattering rate due to the 1145 nm light we examine loss of molecules prepared in |0⟩ from the trap. We begin our measurement after a hold time in the trap of 0.4 s such that the density of molecules is relatively low and collisional losses <cit.> are therefore reduced. We compare the loss from the magic-wavelength trap with loss observed when the trap light wavelength is changed to 1064 nm, with the intensity set such that the molecules experience the same trap frequencies. The results of both measurements are shown in Fig. <ref>(a), with fits from a model assuming exponential decay (see Supplementary Section II). We observe similar loss rates, corresponding to lifetimes on the order of ∼1 s in both traps. Assuming the photon scattering rate in the 1064 nm trap is negligible, we estimate an upper limit on the photon scattering rate in the 1145 nm trap of <0.23 s^-1 at the 95% confidence level. In other work <cit.>, we have characterised the linewidths of the relevant transitions, with the closest having linewidths Γ_v'=0=3.7(4) kHz and Γ_v'=1=2.4(3) kHz. Therefore, the trap light is effectively far detuned, with the ratio of the laser detuning to the linewidth of the nearest transition Δ/Γ_v'=0≈ 5 × 10^7. It follows that loss due to photon scattering is not an issue for our magic-wavelength trap. When molecules are prepared in superpositions of rotational states that are connected by dipole-allowed transitions, they exhibit an oscillating dipole moment in the lab frame. The resultant dipole-dipole interactions can significantly affect the rate of collisional loss of molecules from the trap <cit.>. In Fig. <ref>(b) we compare the loss from the magic trap as a function of time for molecules prepared in either |0⟩, |1⟩ or the superposition 1/√(2)(|0⟩+|1̅⟩). For the dipolar superposition, we observe a loss rate that is ×2.5 greater than for molecules prepared in a single rotational state. The interrogation time available for dipolar samples of molecules is therefore significantly shorter than for non-interacting samples. We first measure the coherence time for a non-interacting sample of molecules by examining the coherence between |0⟩ and |2̂⟩; these are two rotational states not linked by an electric dipole-allowed transition. To perform Ramsey interferometry on this transition, we use a pulse sequence composed of one-photon π/2 and π pulses on the electric dipole-allowed transitions |0⟩↔|1̅⟩ and |1̅⟩↔|2̂⟩ (see Supplementary Section III). We measure the contrast of the Ramsey fringes as a function of time, shown by the empty circles in Fig. <ref>(a). We fit the results with a Gaussian model for decoherence <cit.>, where the fringe contrast C(t) = exp[-(T/T^*_2)^2], to extract the 1/e coherence time. From this, we find a coherence time T^*_2=0.78(4) s. The coherence time we measure is currently limited by residual ac Stark shifts in the trap as a result of the light being slightly detuned from the magic wavelength. The largest source of decoherence comes from the stability of the trap laser frequency, which we estimate to be ±0.76 MHz (±1σ) over the duration of a typical Ramsey fringe measurement (approximately 30 minutes). This results in a variation of the transition frequency across each fringe measurement of ±0.46 Hz, with a corresponding theoretical limit on the observed coherence time of 1.1 s. There are additional smaller contributions to decoherence arising from the uncertainty in the magic laser frequency extracted from the optimisation curve in Fig. 
<ref>(b) with T= 175 ms (limit of 4.3 s), and from a 10 MHz difference between the two beams used to produce the crossed trap (limit of 8.3 s). Details of how these limits are calculated are given in Supplementary Section IV. In addition, there is a small differential magnetic moment between the states of 0.0124 μ_N that, combined with noise in the magnetic field (∼10 mG), places a further limit on the coherence time of 10.6 s. Combining all contributions provides an expected limit on the coherence time of 0.74 s, in excellent agreement with the measured value. Up to an order of magnitude improvement in the coherence time may be achieved by using a better method of laser frequency stabilisation; for example, referencing the light to a high finesse optical cavity would result in a frequency stability of below 100 kHz <cit.>. We remove most of the effects of these residual light shifts by introducing a single spin-echo pulse in the middle of the Ramsey time; this is an effective π pulse between |0⟩ and |2̂⟩ that reverses the direction of precession around the Bloch sphere, thereby cancelling out contributions to single-particle dephasing from static inhomogeneities. The result is shown by the filled circles in Fig. <ref>(a). We now observe no loss of fringe contrast over 0.7 s. We do not measure Ramsey fringes for times longer than this due to loss of molecules from the trap diminishing the signal-to-noise ratio. There is a shift in the phase of the Ramsey fringe as a function of Ramsey time that is quadratic (see Supplementary Section V). This may be explained by a small imperfection in the spin-echo rephasing, but does not lead to any appreciable loss of coherence. Fitting the Gaussian model for decoherence, we estimate the minimum coherence time consistent with our results to be T^*_2>1.4 s at the 95% confidence level. This result represents the elimination of all decoherence at the detectable precision of our current experiment. For superpositions of rotational states that lead to oscillating molecular dipoles, dipolar interactions also cause dynamics of Ramsey contrast, and therefore introduce an additional source of decoherence. The dipole-dipole interactions in the system are described by the Hamiltonian <cit.> Ĥ_DDI = (1/2)∑_i≠ j [(1-3cos^2Θ_ij)/r_ij^3] ( d̂^(i)_0 d̂^(j)_0 + ( d̂^(i)_1 d̂^(j)_-1 + d̂^(i)_-1 d̂^(j)_1 )/2 ), where d̂_0, d̂_1, d̂_-1 are spherical components of the dipole operator, Θ_ij is the angle between the vector connecting two molecules and the quantisation axis, and r_ij is the inter-molecular distance. The local spatial configuration of molecules varies across the sample. Moreover, as the molecules are not pinned by an optical lattice, their configuration is time dependent due to motion of the molecules around the trap. We examine the coherence between the states |0⟩ and |1̅⟩ which are connected via a dipole-allowed transition. An equal superposition of these states produces a dipole that rotates around the quantisation axis with magnitude given by the transition dipole moment d_0/√(3). However, due to the factor of 2 in the denominator of the final term of Eq. <ref>, this contributes an effective dipole d=d_0/√(6)=0.5 D in the lab frame. At the peak densities in our experiments, this corresponds to a typical interaction strength of ∼ h×2 Hz. The Ramsey fringe contrast measured as a function of time is shown in Fig. <ref>(a) by the blue squares, with (filled) and without (empty) a spin-echo pulse.
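As a rough numerical cross-check of the interaction scale quoted above, the short sketch below evaluates the dipole-dipole energy for the effective dipole d=d_0/√6 at the typical intermolecular spacing implied by the peak density; taking the spacing as ρ^(-1/3) and setting the angular factor to one are simplifying assumptions made only for this estimate.

import numpy as np
from scipy import constants as const

debye = 3.33564e-30                      # 1 Debye in C m
d0 = 1.23 * debye                        # molecule-frame dipole moment of RbCs
d_eff = d0 / np.sqrt(6)                  # effective lab-frame dipole for (|0> + |1bar>)/sqrt(2), ~0.5 D
rho = 6e10 * 1e6                         # peak density, 6e10 cm^-3 converted to m^-3
r_typ = rho ** (-1.0 / 3.0)              # typical intermolecular spacing (~2.6 micrometres)
u_dd = d_eff**2 / (4 * np.pi * const.epsilon_0 * r_typ**3)   # dipole-dipole energy scale
print(d_eff / debye, u_dd / const.h)     # ~0.50 D and ~2 Hz, consistent with the values in the text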
We see a dramatic reduction in the coherence time measured using either pulse sequence when compared to the non-interacting case. Moreover, the results are no longer well described by the Gaussian model for decoherence. Instead, we fit the results assuming an exponential decay of fringe contrast C(t)=exp(-T/T^*_2). We find a 1/e coherence time of T^*_2=89(5) ms without the spin-echo pulse and T^*_2=157(14) ms using the spin-echo sequence. Note that the residual ac Stark shifts that affect the results without spin echo vary depending on the combination of states; we expect that the uncertainty in the magic detuning is the dominant source of dephasing for this combination as collisional losses and dipolar decoherence force us to use a shorter Ramsey time in the optimisation of the trap laser frequency. However, the difference in coherence time between dipolar and non-interacting samples that is observed with the spin-echo pulse can be attributed to the effect of dipole-dipole interactions alone. We tune the strength of the dipole-dipole interactions in the sample by using different combinations of states. In Fig. <ref>(b) we show the 1/e coherence time measured with spin echo for three different combinations of states as a function of their effective lab-frame dipole moments. Here, the dipole moment is varied from 0.31 D to 0.65 D, calculated using the full state compositions given in Supplementary Section I. The laser frequency is set to maximise the coherence time for each state combination. As expected, we see in the inset to Fig. <ref>(b) that the coherence time is inversely proportional to the magnitude of the interaction strength U_ij∝ d^2, which confirms that dipolar interactions are the dominant source of decoherence. Moreover, this result demonstrates application of our magic-wavelength trap to molecules in a range of rotational and hyperfine states, allowing control and tunability of the strength of the dipolar interactions. We compare the decay of fringe contrast observed in the experiment to that calculated using the Moving-Average Cluster Expansion (MACE) method <cit.> for molecules fixed in space (See Methods). Losses are included in the theory by assuming molecules are lost at a constant rate independent of other molecules. We use the 1/e loss time of 0.14(5) s determined from an exponential fit to the experimental results; we measure similar loss rates for all three of the dipolar combinations investigated. Decreases in density from loss noticeably slow down the dynamics, as shown in Supplementary Fig. 1(a). Without fitting, using only the measured loss rates, densities, and trap parameters, the MACE calculations reproduce the timescale for the decay in the Ramsey fringe contrast, as well as the dependence on the choice of state-pair, and the overall monotonic decrease; the result for 1/√(2)(|0⟩ + |1̅⟩) is shown in Fig. <ref>(a). Some details differ, most noticeably that the MACE results predict modestly but systematically shorter timescales and a more concave dependence on time than the experimental results. The theory contrast is between the measured echo and no-echo contrast, and differs from the echo results by roughly the same factor for all state-pair choices, suggesting a common underlying cause. Molecule motion during the dynamics is a likely source. 
The reasonably good agreement of MACE calculations and experiment for the overall timescale of contrast decay, and the dependence of this timescale on the strength of the dipole-dipole interactions, supports the conclusion that dipole-dipole interactions are the main cause of the contrast decay for state combinations that generate oscillating molecular dipoles. In conclusion, we have demonstrated a rotationally-magic trap for RbCs molecules, where the effects of all experimentally relevant sources of decoherence may be suppressed, resulting in a coherence time in excess of 1.4 s for non-interacting rotational superpositions. Crucially, the magic wavelength is sufficiently far detuned from neighbouring transitions that we observe negligible photon scattering rates and hence long trap lifetimes. We have shown that this provides unparalleled access to controllable dipole-dipole interactions between molecules. Our approach of trapping using light detuned from the nominally forbidden X^1Σ(v=0)→b^3Π(v'=0) transition is applicable to other bialkali molecules <cit.>. Our work enables the construction of low-decoherence networks of rotational states, which are the foundation for many future applications of ultracold molecules from quantum computation <cit.> and simulation <cit.>, to precision measurement of fundamental constants <cit.>. The next step for experiments will be to construct molecular arrays using light at this magic wavelength. For molecules in optical tweezer arrays, this will enable high-fidelity quantum gates using resonant dipolar exchange, either directly between molecules <cit.> or mediated via Rydberg atoms <cit.>. For molecules in optical lattices, long rotational coherence times can be combined with long lifetimes. For a lattice depth of 20 recoil energies, we predict a one-photon scattering rate of 0.006 s^-1, corresponding to a lifetime in excess of 100 s. For molecules in the magic-wavelength lattice, nearest neighbours will be separated by r=573 nm, and experience an interaction strength of h×343 Hz for the largest effective dipoles explored here (and Θ=π/2). This corresponds to a timescale for dipolar spin exchange dynamics of 2.9 ms, far shorter than both the coherence time and the lifetime. Techniques for the production of ordered lattice arrays of ground state RbCs molecules have already been demonstrated <cit.>, and are compatible with a magic-wavelength lattice. Our work therefore unlocks the potential of ultracold molecules for simulating quantum magnetism. § ACKNOWLEDGEMENTS We thank J. M. Hutson for many insightful discussions. The Durham authors' work was supported by UK Engineering and Physical Sciences Research Council (EPSRC) Grants EP/P01058X/1, EP/P008275/1 and EP/W00299X/1, UK Research and Innovation (UKRI) Frontier Research Grant EP/X023354/1, the Royal Society and Durham University. K. R. A. H. acknowledges support from the Robert A. Welch Foundation (C-1872), the National Science Foundation (PHY-1848304), the Office of Naval Research (N00014-20-1-2695), and the W. F. Keck Foundation (Grant No. 995764). § METHODS §.§ Production of ground state molecules We produce ultracold RbCs molecules from a pre-cooled mixture of Rb and Cs atoms. The atomic mixture is confined to a crossed optical dipole trap using light with a wavelength of 1550 nm, with a magnetic field gradient applied to cancel the force due to gravity <cit.>. To form molecules, we sweep the magnetic field down across an interspecies Feshbach resonance at 197 G <cit.>. 
We then remove the remaining atoms from the trap by increasing the magnetic field gradient to over-levitate the atoms, following which the magnetic field gradient is removed. With the exception of the measurements shown in Fig. <ref>(a), at this point the molecules are transferred to the magic trap by ramping the power in the 1145 nm light on over 30 ms, and then the power in the 1550 trap off over a further 5 ms. Finally, we transfer the molecules to the X^1Σ ground state |0⟩ using stimulated Raman adiabatic passage <cit.>. This final step is performed with the trap light briefly turned off to avoid spatially varying ac Stark shifts of the transitions. For the measurements in Fig. <ref>(a), we increase the power in the 1550 nm trap after the removal of atoms and transfer to the magic trap following the ground state transfer. Throughout all of the measurements shown, the molecules are subject to a fixed 181.5 G magnetic field, and there is no electric field. To detect molecules in |0⟩, we reverse the association process, breaking the molecules back apart into their constituent atoms, which we detect using absorption imaging. §.§ Details of the magic trap The magic trap is formed from two beams, each with a waist of 50 μm, crossing at an angle of 20^∘. Both beams propagate and are polarised in the plane orthogonal to the applied magnetic field that defines the quantisation axis. Both beams are derived from the same laser, so to avoid interference effects, we set a 10 MHz difference in frequency between them. The laser detuning reported in Fig. <ref>(b,c) is the average detuning of the two beams. The intensities of the beams are not actively stabilised, but are monitored to ensure they are passively stable to <5% variation over the course of each measurement. The typical trap frequencies experienced by ground state molecules at the magic detuning are [ω_x,ω_y,ω_z]=2π×[29(1),144(5),147(5)] rad s^-1. After 15 ms in the magic trap, we measure the temperature of the ground-state molecules to be 1.5(2) μK using time-of-flight expansion of the cloud over 6 ms. The frequency of the 1145 nm laser is stabilised using a scanning transfer cavity lock <cit.>, that is referenced to a 977 nm laser that is in turn locked to a high finesse cavity with ultra-low expansion glass spacer <cit.>. The lock makes corrections to the laser frequency at a rate of ∼100 Hz. This slow feedback rate, together with the relatively low finesse of the transfer cavity >400, limits the frequency stability of the laser in the current experiments. §.§ Coherent state control We use coherent one-photon microwave pulses to perform the Ramsey interferometry, during which the trap light is turned off. The microwave sources are referenced to a 10 MHz GPS clock, and we set the microwaves on resonance with the desired transition and calibrate the duration of the pulses using one-photon spectroscopy as described in <cit.>. The pulse sequences used in this work are shown schematically in Supplementary Section II. §.§ Analysis of Ramsey fringes We observe Ramsey fringes as a variation in the molecule number N_mol detected in state |0⟩ as a function of the phase difference Φ between the initialisation and read-out microwave pulses. We fit each measurement with the function N_mol(Φ) = N_mol^tot(1+Ccos(Φ-Φ_0)) where N_mol^tot is the total number of molecules in the sample, Φ_0 is the phase offset in the Ramsey fringe, and C is the contrast. We use a bootstrap fitting algorithm to estimate the uncertainty in the fringe contrast. 
For a given fringe measurement, we randomly sample the measured N_mol for each value of Φ to build up a new dataset that is the same size as the original. We fit to this randomly resampled data to extract a fringe contrast. This process is repeated 1000 times to build up a distribution of fitted contrasts, from which we calculate a standard deviation that represents the uncertainty in the true value. §.§ MACE We calculate the dynamics of our system using the Moving-Average Cluster Expansion (MACE) method <cit.>. Particle locations are randomly sampled from the thermal distribution based on the measured trap parameters, temperature, and particle number, and are assumed to be fixed for all times at their initial positions. We calculate the dynamics starting from all molecules in the |→⟩ state, which is the state (ideally) prepared by the initial π/2 pulse in the Ramsey spectroscopy sequence. We simulate the time evolution of the Hamiltonian in Eq. <ref> projected onto the relevant state pair, which is a spin-1/2 dipolar XX model <cit.> H = (J_⊥/2)∑_i≠ j (1/2)[(1-3cos^2 θ_ij)/r_ij^3](S^+_i S^-_j + h.c.), where r⃗_ij=r⃗_i-r⃗_j is the separation vector between molecules i and j (with magnitude r_ij), θ_ij is the angle between the quantization axis and r⃗_ij, S^±_i are raising/lowering operators, and J_⊥ = -|⟨↑|d_1|↓⟩|^2 for state pairs with angular momentum projections differing by ± 1, while J_⊥ = 2|⟨↑|d_0|↓⟩|^2 for state pairs with the same angular momentum projection. For the 1/√(2)(|0⟩ + |1̅⟩) state pair, |J_⊥|ρ/(2h) = 2.26 Hz, where ρ is the estimated peak density (6×10^10 cm^-3). Each simulation is performed to a time just before the second π/2 pulse and then the expectation of S^x = ∑_i S^x_i is calculated, which is the same as the Ramsey contrast after the pulse. The MACE method constructs a cluster for each S^x_i from molecule i and the N_c-1 other molecules with the strongest interactions with molecule i, where N_c is a convergence parameter of the method. We exactly calculate ⟨S^x_i(t)⟩ for each resulting cluster. To assess convergence, we have compared the dynamics for N_c = 2, 4, 6, 8, and 10 as shown in Supplementary Fig. 1(b). The results converge quickly with N_c for the simulation times of interest, and N_c=6 is used for the results in Fig. <ref>. The contrast is expected to be converged within the widths of the plotted lines over most of the time regime shown. The dynamics of the Ramsey contrast is already roughly captured if one ignores particle loss, but the loss has non-negligible quantitative effects, which we include in our calculations shown in the main text. Molecules leaving the trap decrease the density, which causes the contrast to decay more slowly. To include this loss in the MACE calculations, we assume that molecules are independently lost from the trap at a constant rate, consistent with the measured time-dependence of the particle number. We take the 1/e loss time to be 0.14(5) s, as determined experimentally. MACE clusters are built based on the particle distribution at time t=0 and do not change over time. For each cluster, whenever a molecule is lost we set the interactions between the lost molecule and the remaining molecules to zero. To propagate the dynamics after this event, we re-diagonalize the Hamiltonian. This modestly increases the computational difficulty, but only by a factor of N_c in the worst case (when all molecules are lost during the timescale under consideration). To obtain good statistics, we average together 10 loss trajectories of ∼2400 molecules each.
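To illustrate the cluster-expansion idea concretely, the following is a minimal, self-contained sketch of the construction described above (fixed molecule positions, no loss, arbitrary units). It is an illustration only, not the production code, and the particle number, cluster size, interaction scale, and cloud geometry are placeholders chosen purely for readability.

import itertools
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Illustrative parameters (placeholders, not the experimental values).
n_mol = 60        # number of molecules in the sketch (the experiment has ~2400)
n_c = 4           # cluster size, the MACE convergence parameter
j_perp = 1.0      # overall interaction scale (arbitrary units)
t = 0.5           # evolution time (same arbitrary units, hbar = 1)

# Random molecule positions in an elongated Gaussian cloud (arbitrary units).
pos = rng.normal(scale=[5.0, 1.0, 1.0], size=(n_mol, 3))

def coupling(ri, rj):
    # Pairwise angular factor (1 - 3 cos^2 theta) / r^3; quantisation axis taken along z.
    d = ri - rj
    r = np.linalg.norm(d)
    cos_t = d[2] / r
    return (1.0 - 3.0 * cos_t**2) / r**3

# Spin-1/2 operators in the (up, down) basis.
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S+
sm = sp.T                                  # S-

def op_on(site_ops, n):
    # Tensor product of 2x2 operators acting on an n-spin cluster (identity elsewhere).
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, site_ops.get(k, np.eye(2)))
    return out

def cluster_sx(cluster, target):
    # <S^x_target(t)> for one cluster, starting from all spins along +x,
    # under H = (J_perp/2) * sum_{i<j} V_ij (S+_i S-_j + S-_i S+_j).
    n = len(cluster)
    h = np.zeros((2**n, 2**n))
    for a, b in itertools.combinations(range(n), 2):
        v = coupling(pos[cluster[a]], pos[cluster[b]])
        h += 0.5 * j_perp * v * (op_on({a: sp, b: sm}, n) + op_on({a: sm, b: sp}, n))
    plus_x = np.ones(2) / np.sqrt(2)
    psi0 = plus_x
    for _ in range(n - 1):
        psi0 = np.kron(psi0, plus_x)
    psi_t = expm(-1j * h * t) @ psi0
    return np.real(np.conj(psi_t) @ op_on({target: sx}, n) @ psi_t)

# MACE: each molecule gets a cluster of itself plus its (n_c - 1) most strongly coupled partners.
total_sx = 0.0
for i in range(n_mol):
    strengths = np.array([abs(coupling(pos[i], pos[j])) if j != i else -np.inf for j in range(n_mol)])
    partners = np.argsort(strengths)[-(n_c - 1):]
    cluster = [i] + partners.tolist()
    total_sx += cluster_sx(cluster, target=0)

contrast = total_sx / (0.5 * n_mol)   # normalised so that the contrast is 1 at t = 0
print(contrast)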
This reduces the statistical error between runs to a maximum of 2% over the time scales we are working with. The error bars on the theoretical calculations presented in the main text show the result of the experimental uncertainty of the number of molecules, loss rate, and temperature. For particle number uncertainty, we computed the Ramsey contrast decay for the±1 σmeasured particle numbers, and did the same for loss rate uncertainty and temperature uncertainty. These uncertainties were added together in quadrature to obtain the error bounds in Fig. <ref>. Each of these errors are much larger than the statistical or MACE convergence errors. 57 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0] ` 12 `$12 `&12 `#12 `1̂2 `_12 `%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Demille(2002)]DeMille2002 authorauthorD. Demille, titletitleQuantum computation with trapped polar molecules, https://doi.org/10.1103/PhysRevLett.88.067901journaljournalPhys. Rev. Lett. volume88,pages067901 (year2002)NoStop [Yelin et al.(2006)Yelin, Kirby, and Côte]Yelin2006 authorauthorS. F. Yelin, authorK. Kirby, andauthorR. Côte, titletitleSchemes for robust quantum computation with polar molecules, https://doi.org/10.1103/PhysRevA.74.050301journaljournalPhys. Rev. A volume74, pages050301(R) (year2006)NoStop [Pellegrini and Desouter-Lecomte(2011)]Pellegrini2011 authorauthorP. Pellegrini and authorM. Desouter-Lecomte, titletitleQuantum gates driven by microwave pulses in hyperfine levels of ultracold heteronuclear dimers, https://doi.org/10.1140/epjd/e2011-20128-xjournaljournalEur. Phys. J. D volume64, pages163 (year2011)NoStop [Wei et al.(2016)Wei, Cao, Kais, and Friedrich]Wei2016 authorauthorQ. Wei, authorY. Cao, authorS. Kais, and authorB. Friedrich, titletitleQuantum computation using arrays of N polar molecules in pendular states, https://doi.org/10.1002/cphc.201600781journaljournalChemPhysChem volume17, pages3714 (year2016)NoStop [Ni et al.(2018)Ni, Rosenband, and Grimes]Ni2018 authorauthorK.-K. Ni, authorT. Rosenband, andauthorD. D. Grimes, titletitleDipolar exchange quantum logic gate with polar molecules, https://doi.org/10.1039/C8SC02355GjournaljournalChem. Sci. volume9, pages6830 (year2018)NoStop [Hughes et al.(2020)Hughes, Frye, Sawant, Bhole, Jones, Cornish, Tarbutt, Hutson, Jaksch, and Mur-Petit]Hughes2020 authorauthorM. Hughes, authorM. D. Frye, authorR. Sawant, authorG. Bhole, authorJ. A. Jones, authorS. L. Cornish, authorM. R. Tarbutt, authorJ. M. Hutson, authorD. Jaksch, and authorJ. Mur-Petit, titletitleRobust entangling gate for polar molecules using magnetic and microwave fields, https://doi.org/10.1103/PhysRevA.101.062308journaljournalPhys. Rev. A volume101, pages062308 (year2020)NoStop [Zhang and Tarbutt(2022)]Zhang2022 authorauthorC. Zhang and authorM. R. Tarbutt, titletitleQuantum computation in a hybrid array of molecules and Rydberg atoms, https://doi.org/10.1103/PRXQuantum.3.030340journaljournalPRX Quantum volume3, pages030340 (year2022)NoStop [Wang et al.(2022)Wang, Williams, Picard, Yao, andNi]Wang2022 authorauthorK. Wang, authorC. P. Williams, authorL. R. Picard, authorN. Y. Yao, and authorK.-K. 
Ni, titletitleEnriching the quantum toolbox of ultracold molecules with rydberg atoms, https://doi.org/10.1103/PRXQuantum.3.030339journaljournalPRX Quantum volume3, pages030339 (year2022)NoStop [Sawant et al.(2020)Sawant, Blackmore, Gregory, Mur-Petit, Jaksch, Aldegunde, Hutson, Tarbutt, and Cornish]Sawant2020 authorauthorR. Sawant, authorJ. A. Blackmore, authorP. D. Gregory, authorJ. Mur-Petit, authorD. Jaksch, authorJ. Aldegunde, authorJ. M. Hutson, authorM. R. Tarbutt, and authorS. L. Cornish, titletitleUltracold polar molecules as qudits, https://doi.org/10.1088/1367-2630/ab60f4journaljournalNew J. Phys. volume22, pages013027 (year2020)NoStop [Barnett et al.(2006)Barnett, Petrov, Lukin, and Demler]Barnett2006 authorauthorR. Barnett, authorD. Petrov, authorM. Lukin, and authorE. Demler, titletitleQuantum magnetism with multicomponent dipolar molecules in an optical lattice, https://doi.org/10.1103/PhysRevLett.96.190401journaljournalPhys. Rev. Lett. volume96,pages190401 (year2006)NoStop [Micheli et al.(2006)Micheli, Brennen, and Zoller]Micheli2006 authorauthorA. Micheli, authorG. K. Brennen, and authorP. Zoller, titletitleA toolbox for lattice-spin models with polar molecules, https://doi.org/10.1038/nphys287journaljournalNat. Phys. volume2, pages341 (year2006)NoStop [Capogrosso-Sansone et al.(2010)Capogrosso-Sansone, Trefzger, Lewenstein, Zoller, and Pupillo]Capogrosso-Sansone2010 authorauthorB. Capogrosso-Sansone, authorC. Trefzger, authorM. Lewenstein, authorP. Zoller, and authorG. Pupillo,titletitleQuantum phses of cold polar molecules in 2D optical lattices, https://doi.org/10.1103/PhysRevLett.104.125301journaljournalPhys. Rev. Lett. volume104,pages125301 (year2010)NoStop [Pollet et al.(2010)Pollet, Picon, Büchler, and Troyer]Pollet2010 authorauthorL. Pollet, authorJ. D. Picon, authorH. P. Büchler, andauthorM. Troyer, titletitleSupersolid phase with cold polar molecules on a triangular lattice, https://doi.org/10.1103/PhysRevLett.104.125302journaljournalPhys. Rev. Lett. volume104,pages125302 (year2010)NoStop [Gorshkov et al.(2011a)Gorshkov, Manmana, Chen, Ye, Demler, Lukin, and Rey]Gorshkov2011 authorauthorA. V. Gorshkov, authorS. R. Manmana, authorG. Chen, authorJ. Ye, authorE. Demler, authorM. D. Lukin, and authorA. M. Rey, titletitleTunable superfluidity and quantum magnetism with ultracold polar molecules, https://doi.org/10.1103/PhysRevLett.107.115301journaljournalPhys. Rev. Lett. volume107, pages115301 (year2011a)NoStop [Gorshkov et al.(2011b)Gorshkov, Manmana, Chen, Demler, Lukin, and Rey]Gorshkov2011b authorauthorA. V. Gorshkov, authorS. R. Manmana, authorG. Chen, authorE. Demler, authorM. D. Lukin, and authorA. M. Rey, titletitleQuantum magnetism with polar alkali-metal dimers,https://doi.org/10.1103/PhysRevA.84.033619journaljournalPhys. Rev. A volume84,pages033619 (year2011b)NoStop [Zhou et al.(2011)Zhou, Ortner, and Rabl]Zhou2011 authorauthorY. L. Zhou, authorM. Ortner, andauthorP. Rabl, titletitleLong-range and frustrated spin-spin interactions in crystals of cold polar molecules, https://doi.org/10.1103/PhysRevA.84.052332journaljournalPhys. Rev. A volume84, pages052332 (year2011)NoStop [Hazzard et al.(2013)Hazzard, Manmana, Foss-Feig, andRey]Hazzard2013 authorauthorK. R. A.Hazzard, authorS. R.Manmana, authorM. Foss-Feig, and authorA. M. Rey, titletitleFar-from-equilibrium quantum magnetism with ultracold polar molecules, https://doi.org/10.1103/PhysRevLett.110.075301journaljournalPhys. Rev. Lett. 
volume110,pages075301 (year2013)NoStop [Lechner and Zoller(2013)]Zoller2013 authorauthorW. Lechner and authorP. Zoller, titletitleFrom classical to quantum glasses with ultracold polar molecules, https://doi.org/10.1103/PhysRevLett.111.185306journaljournalPhys. Rev. Lett. volume111,pages185306 (year2013)NoStop [Sundar et al.(2018)Sundar, Gadway, and Hazzard]Sundar2018 authorauthorB. Sundar, authorB. Gadway, and authorK. R. A. Hazzard,titletitleSynthetic dimensions in ultracold polar molecules, https://doi.org/10.1038/s41598-018-21699-xjournaljournalSci. Rep. volume8, pages3422 (year2018)NoStop [Sundar et al.(2019)Sundar, Thibodeau, Wang, Gadway, and Hazzard]Sundar2019 authorauthorB. Sundar, authorM. Thibodeau, authorZ. Wang, authorB. Gadway, and authorK. R. A. Hazzard, titletitleStrings of ultracold molecules in a synthetic dimension, https://doi.org/10.1103/PhysRevA.99.013624journaljournalPhys. Rev. A volume99, pages013624 (year2019)NoStop [Feng et al.(2022)Feng, Manetsch, Rousseau, Hazzard, and Scalettar]Feng2022 authorauthorC. Feng, authorH. Manetsch, authorV. G. Rousseau, authorK. R. A. Hazzard, and authorR. Scalettar, titletitleQuantum membrane phases in synthetic lattices of cold molecules or Rydberg atoms, https://doi.org/10.1103/PhysRevA.105.063320journaljournalPhys. Rev. A volume105, pages063320 (year2022)NoStop [Cohen et al.(2022)Cohen, Casebolt, Zhang, Hazzard, and Scalettar]Cohen2022 authorauthorM. Cohen, authorM. Casebolt, authorY. Zhang, authorK. R. A. Hazzard, and authorR. Scalettar, titletitleClassical analog of quantum models in synthetic dimensions, journaljournalarXiv:2212.07017https://doi.org/10.48550/arXiv.2212.0701710.48550/arXiv.2212.07017 (year2022)NoStop [Yan et al.(2013)Yan, Moses, Gadway, Covey, Hazzard, Rey, Jin, andYe]Yan2013 authorauthorB. Yan, authorS. A. Moses, authorB. Gadway, authorJ. P. Covey, authorK. R. A. Hazzard, authorA. M. Rey, authorD. S. Jin, and authorJ. Ye, titletitleObservation of dipolar spin-exchange interactions with lattice-confined polar molecules, https://doi.org/10.1038/nature12483journaljournalNature volume501,pages521 (year2013)NoStop [Seeßelberg et al.(2018)Seeßelberg, Luo, Li, Bause, Kotochigova, Bloch, andGohle]Seesselberg2018 authorauthorF. Seeßelberg, authorX.-Y. Luo, authorM. Li, authorR. Bause, authorS. Kotochigova, authorI. Bloch, and authorC. Gohle, titletitleExtending rotational coherence of interacting polar molecules in a spin-decoupled magic trap, https://doi.org/10.1103/PhysRevLett.121.253401journaljournalPhys. Rev. Lett. volume121,pages253401 (year2018)NoStop [Bao et al.(2022)Bao, Yu, Anderegg, Chae, Ketterle, Ni, and Doyle]Bao2022 authorauthorY. Bao, authorS. S. Yu, authorL. Anderegg, authorE. Chae, authorW. Ketterle, authorK.-K. Ni, and authorJ. M. Doyle, titletitleDipolar spin-exchange and entanglement between molecules in an optical tweezer array, @noop journaljournalarXiv:2211.09780 (year2022)NoStop [Holland et al.(2022)Holland, Lu, and Cheuk]Holland2022 authorauthorC. M. Holland, authorY. Lu, andauthorL. W. Cheuk, titletitleOn-demand entanglement of molecules in a reconfigurable optical tweezer array, @noop journaljournalarXiv:2210.06309 (year2022)NoStop [Li et al.(2023)Li, Matsuda, Miller, Carroll, Tobias, Higgins, and Ye]Li2023 authorauthorJ.-R. Li, authorK. Matsuda, authorC. Miller, authorA. N. Carroll, authorW. G. Tobias, authorJ. S. Higgins, and authorJ. 
Ye, titletitleTunable itinerant spin dynamics with polar molecules, https://doi.org/10.1038/s41586-022-05479-2journaljournalNature volume614, pages70 (year2023)NoStop [Christakis et al.(2023)Christakis, Rosenberg, Raj, Chi, Morningstar, Huse, Yan, and Bakr]Christakis2023 authorauthorL. Christakis, authorJ. S. Rosenberg, authorR. Raj, authorS. Chi, authorA. Morningstar, authorD. A. Huse, authorZ. Z. Yan, and authorW. S. Bakr, titletitleProbing site-resolved correlations in a spin system of ultracold molecules, https://doi.org/10.1038/s41586-022-05558-4journaljournalNature volume614,pages64 (year2023)NoStop [Takamoto et al.(2005)Takamoto, Hong, Higashi, andKatori]Takamoto2005 authorauthorM. Takamoto, authorF.-L. Hong, authorR. Higashi, and authorK. Katori, titletitleAn optical lattice clock, https://doi.org/10.1038/nature03541journaljournalNature volume435, pages321 (year2005)NoStop [Kondov et al.(2019)Kondov, Lee, Leung, Liedl, Majewska, Moszynski, and Zelevinsky]Kondov2019 authorauthorS. S. Kondov, authorC.-H. Lee, authorK. H. Leung, authorC. Liedl, authorI. Majewska, authorR. Moszynski, and authorT. Zelevinsky, titletitleMolecular lattice clock with long vibrational transition, https://doi.org/10.1038/s41567-019-0632-3journaljournalNat. Phys. volume15, pages1118 (year2019)NoStop [Bause et al.(2020)Bause, Li, Schindewolf, Chen, Duda, Kotochigova, Bloch, and Luo]Bause2020 authorauthorR. Bause, authorM. Li, authorA. Schindewolf, authorX.-Y. Chen, authorM. Duda, authorS. Kotochigova, authorI. Bloch, and authorX.-Y.Luo, titletitleTune-out and magic wavelengths for ground state ^23Na^40K molecules, https://doi.org/10.1103/PhysRevLett.125.023201journaljournalPhys. Rev. Lett. volume125,pages023201 (year2020)NoStop [Kotochigova and DeMille(2010)]Kotochigova2010 authorauthorS. Kotochigova and authorD. DeMille, titletitleElectric-field-dependent dynamic polarizability and state-insensitive conditions for optical trapping of diatomic molecules, https://doi.org/10.1103/PhysRevA.82.063421journaljournalPhys. Rev. A volume82, pages063421 (year2010)NoStop [Neyenhuis et al.(2012)Neyenhuis, Yan, Moses, Covey, Chotia, Petrov, Kotochigova, Ye, and Jin]Neyenhuis2012 authorauthorB. Neyenhuis, authorB. Yan, authorS. A. Moses, authorJ. P. Covey, authorA. Chotia, authorA. Petrov, authorS. Kotochigova, authorJ. Ye, and authorD. S.Jin, titletitleAnisotropic polarizability of ultracold polar ^40K^87Rb molecules, https://doi.org/10.1103/PhysRevLett.109.230403journaljournalPhys. Rev. Lett. volume109, pages230403 (year2012)NoStop [Burchesky et al.(2021)Burchesky, Anderegg, Bao, Yu, Chae, Ketterle, Ni, and Doyle]Burchesky2021 authorauthorS. Burchesky, authorL. Anderegg, authorY. Bao, authorS. S. Yu, authorE. Chae, authorW. Ketterle, authorK.-K. Ni, and authorJ. M. Doyle,titletitleRotational coherence times of polar molecules in optical tweezers, https://doi.org/10.1103/PhysRevLett.127.123202journaljournalPhys. Rev. Lett. volume127,pages123202 (year2021)NoStop [Tobias et al.(2022)Tobias, Matsuda, Li, Miller, Carroll, Bilitewski, Rey, and Ye]Tobias2022 authorauthorW. G. Tobias, authorK. Matsuda, authorJ.-R. Li, authorC. Miller, authorA. N. Carroll, authorT. Bilitewski, authorA. M. Rey, and authorJ. 
Ye, titletitleReactions between layer-resolved molecules mediated by dipolar spin exchange, https://doi.org/10.1126/science.abn8525journaljournalScience volume375, pages1299 (year2022)NoStop [Blackmore et al.(2018)Blackmore, Caldwell, Gregory, Bridge, Sawant, Aldegunde, Mur-Petit, Jaksch, Hutson, Sauer, Tarbutt, and Cornish]Blackmore2018 authorauthorJ. A. Blackmore, authorL. Caldwell, authorP. D. Gregory, authorE. M. Bridge, authorR. Sawant, authorJ. Aldegunde, authorJ. Mur-Petit, authorD. Jaksch, authorJ. M.Hutson, authorB. E.Sauer, authorM. R. Tarbutt, and authorS. L. Cornish, titletitleUltracold molecules for quantum simulation: rotational coherences in CaF and RbCs, https://doi.org/10.1088/2058-9565/aaee35journaljournalQuan. Sci. Technol. volume4, pages014010 (year2018)NoStop [Blackmore et al.(2020)Blackmore, Sawant, Gregory, Bromley, Aldegunde, Hutson, andCornish]Blackmore2020 authorauthorJ. A. Blackmore, authorR. Sawant, authorP. D. Gregory, authorS. L. Bromley, authorJ. Aldegunde, authorJ. M. Hutson, and authorS. L. Cornish, titletitleControlling the ac stark effect of RbCs with dc electric and magnetic fields, https://doi.org/10.1103/PhysRevA.102.053316journaljournalPhys. Rev. A volume102, pages053316 (year2020)NoStop [Molony et al.(2014)Molony, Gregory, Ji, Lu, Köppinger, Le Sueur, Blackley, Hutson, and Cornish]Molony2014 authorauthorP. K. Molony, authorP. D. Gregory, authorZ. Ji, authorB. Lu, authorM. P. Köppinger, authorC. R. Le Sueur, authorC. L.Blackley, authorJ. M.Hutson, and authorS. L.Cornish, titletitleCreation of ultracold ^87Rb^133Cs molecules in the rovibrational ground state, https://doi.org/10.1103/PhysRevLett.113.255301journaljournalPhys. Rev. Lett. volume113, pages255301 (year2014)NoStop [Kotochigova and Tiesinga(2006)]Kotochigova2006 authorauthorS. Kotochigova and authorE. Tiesinga, titletitleControlling polar molecules in optical lattices, https://doi.org/10.1103/PhysRevA.73.041405journaljournalPhys. Rev. A volume73, pages041405(R) (year2006)NoStop [Vexiau et al.(2017)Vexiau, Borsalino, Lepers, Orbán, Aymar, and Dulieu]Vexiau2017 authorauthorR. Vexiau, authorD. Borsalino, authorM. Lepers, authorA. Orbán, authorM. Aymar, and authorO. Dulieu, titletitleDynamic dipole polarizabilities of heteronuclear alkali dimers: optical response, trapping and control of ultracold molecules, https://doi.org/10.1080/0144235X.2017.1351821journaljournalInt. Rev. Phys. Chem. volume36, pages709 (year2017)NoStop [Gregory et al.(2017)Gregory, Blackmore, Aldegunde, Hutson, and Cornish]Gregory2017 authorauthorP. D. Gregory, authorJ. A. Blackmore, authorJ. Aldegunde, authorJ. M. Hutson, and authorS. L. Cornish, titletitleac stark effect in ultracold polar ^87Rb^133Cs molecules, https://doi.org/10.1103/PhysRevA.96.021402journaljournalPhys. Rev. A volume96, pages021402(R) (year2017)NoStop [Guan et al.(2021)Guan, Cornish, and Kotochigova]Guan2021 authorauthorQ. Guan, authorS. L. Cornish, and authorS. Kotochigova,titletitleMagic conditions for multiple rotational states of bialkali molecules in optical lattices, https://doi.org/10.1103/PhysRevA.103.043311journaljournalPhys. Rev. A volume103, pages043311 (year2021)NoStop [Fernley et al.()Fernley, Gregory, Bromley, Kotochigova, and Cornish]Fernley2023 authorauthorL. M. Fernley, authorP. D. Gregory, authorS. L. Bromley, authorS. Kotochigova, and authorS. L. Cornish, titletitleIn preparation, 2023,@noop NoStop [Gregory et al.(2019)Gregory, Frye, Blackmore, Bridge, Sawant, Hutson, and Cornish]Gregory2019 authorauthorP. D. Gregory, authorM. D. 
Frye, authorJ. A. Blackmore, authorE. M. Bridge, authorR. Sawant, authorJ. M. Hutson, andauthorS. L. Cornish,titletitleSticky collisions of ultracold RbCs molecules, https://doi.org/10.1038/s41467-019-11033-yjournaljournalNature Communications volume10, pages3104 (year2019)NoStop [Gregory et al.(2020)Gregory, Blackmore, Bromley, andCornish]Gregory2020 authorauthorP. D. Gregory, authorJ. A. Blackmore, authorS. L. Bromley, and authorS. L. Cornish, titletitleLoss of ultracold ^87Rb^133Cs molecules via optical excitation of long-lived two-body collision complexes, https://doi.org/10.1103/PhysRevLett.124.163402journaljournalPhys. Rev. Lett. volume124,pages163402 (year2020)NoStop [Gregory et al.(2015)Gregory, Molony, Köppinger, Kumar, Ji, Lu, Marchant, and Cornish]Gregory2015 authorauthorP. D. Gregory, authorP. K. Molony, authorM. P. Köppinger, authorA. Kumar, authorZ. Ji, authorB. Lu, authorA. L. Marchant, and authorS. L. Cornish, titletitleA simple, versatile laser system for the creation of ultracold ground state molecules, https://doi.org/10.1088/1367-2630/17/5/055006journaljournalNew J. Phys. volume17, pages055006 (year2015)NoStop [Wall et al.(2015)Wall, Hazzard, and Rey]Wall2015 authorauthorM. L. Wall, authorK. R. A. Hazzard, and authorA. M. Rey, @noop titleFrom atomic to mesoscale, Chapter 1 (publisherWorld Scientific, year2015)NoStop [Hazzard et al.(2014)Hazzard, Gadway, Foss-Feig, Yan, Moses, Covey, Yao, Lukin, Ye, Jin, andRey]Hazzard2014 authorauthorK. R. A.Hazzard, authorB. Gadway, authorM. Foss-Feig, authorB. Yan, authorS. A. Moses, authorJ. P. Covey, authorN. Y. Yao, authorM. D. Lukin, authorJ. Ye, authorD. S.Jin, and authorA. M.Rey, titletitleMany-body dynamics of dipolar molecules in an optical lattice, https://doi.org/10.1103/PhysRevLett.113.195302journaljournalPhys. Rev. Lett. volume113,pages195302 (year2014)NoStop [Kłos et al.(2022)Kłos, Li, Tiesinga, and Kotochigova]Klos2022 authorauthorJ. Kłos, authorH. Li, authorE. Tiesinga, andauthorS. Kotochigova,titletitleProspects for assembling ultracold radioactive molecules from laser-cooled atoms, https://doi.org/10.1088/1367-2630/ac50eajournaljournalNew J. Phys. volume24, pages025005 (year2022)NoStop [Guttridge et al.(2023)Guttridge, Ruttley, Baldock, González-Férez, Sadeghpour, Adams, and Cornish]Guttridge2023 authorauthorA. Guttridge, authorD. K. Ruttley, authorA. C. Baldock, authorR. González-Férez, authorH. R.Sadeghpour, authorC. S.Adams, and authorS. L.Cornish, titletitleObservation of Rydberg blockade due to the charge-dipole interaction between an atom and a polar molecule, journaljournalarXiv:2303.06126 https://doi.org/10.48550/arXiv.2303.0612610.48550/arXiv.2303.06126 (year2023)NoStop [Reichsöllner et al.(2017)Reichsöllner, Schindewolf, Takekoshi, Grimm, and Nägerl]Reichsoellner2017 authorauthorL. Reichsöllner, authorA. Schindewolf, authorT. Takekoshi, authorR. Grimm, and authorH.-C. Nägerl,titletitleQuantum engineering of a low-entropy gas of heteronuclear bosonic moleucles in an optical lattice, https://doi.org/10.1103/PhysRevLett.118.073201journaljournalPhys. Rev. Lett. volume118,pages073201 (year2017)NoStop [Das et al.(2023)Das, Gregory, Takekoshi, Fernley, Landini, Hutson, Cornish, and Nägerl]Das2023 authorauthorA. Das, authorP. D. Gregory, authorT. Takekoshi, authorL. Fernley, authorM. Landini, authorJ. M. Hutson, authorS. 
L.Cornish, and authorH.-C.Nägerl, titletitleAn association sequence suitable for producing ground-state RbCs molecules in optical lattices, journaljournalarXiv:2303.16144 https://doi.org/10.48550/arXiv.2303.1614410.48550/arXiv.2303.16144 (year2023)NoStop [McCarron et al.(2011)McCarron, Cho, Jenkin, Köppinger, and Cornish]McCarron2011 authorauthorD. J. McCarron, authorH. W. Cho, authorD. L. Jenkin, authorM. P. Köppinger, andauthorS. L. Cornish,titletitleDual-species Bose-Einstein condensate of ^87Rb^133Cs, https://doi.org/10.1103/PhysRevA.84.011603journaljournalPhys. Rev. A volume84, pages011603(R) (year2011)NoStop [Köppinger et al.(2014)Köppinger, McCarron, Jenkin, Molony, Cho, Cornish, Le Sueur, Blackley, and Hutson]Koppinger2014 authorauthorM. P. Köppinger, authorD. J. McCarron, authorD. L. Jenkin, authorP. K. Molony, authorH.-W. Cho, authorS. L. Cornish, authorC. R. Le Sueur, authorC. Blackley, and authorJ. M. Hutson, titletitleProduction of optically trapped ^87RbCs Feshbach molecules, https://doi.org/10.1103/PhysRevA.89.033604journaljournalPhys. Rev. A volume89, pages033604 (year2014)NoStop [Molony et al.(2016)Molony, Gregory, Kumar, Le Sueur, Hutson, and Cornish]Molony2016 authorauthorP. K. Molony, authorP. D. Gregory, authorA. Kumar, authorC. R. Le Sueur, authorJ. M. Hutson, and authorS. L. Cornish, titletitleProduction of ultracold ^87Rb^133Cs in the absolute ground state: complete characterisation of the STIRAP transfer,https://doi.org/10.1002/cphc.201600501journaljournalChemPhysChem. volume17,pages3811 (year2016)NoStop [Subhankar et al.(2019)Subhankar, Restelli, Wang, Rolston, and Porto]Subhankar2019 authorauthorS. Subhankar, authorA. Restelli, authorY. Wang, authorS. L. Rolston, and authorJ. V. Porto, titletitleMicrocontroller based scanning transfer cavity lock for long-term laser frequency stabilisation, https://doi.org/10.1063/1.5067266journaljournalRev. Sci. Instrum. volume90, pages043115 (year2019)NoStop [Gregory et al.(2016)Gregory, Aldegunde, Hutson, andCornish]Gregory2016 authorauthorP. D. Gregory, authorJ. Aldegunde, authorJ. M. Hutson, andauthorS. L. Cornish,titletitleControlling the rotational and hyperfine state of ultracold ^87Rb^133Cs molecules, https://doi.org/10.1103/physreva.94.041403journaljournalPhys. Rev. A volume94, pages041403(R) (year2016)NoStop § SUPPLEMENTARY INFORMATION: SECOND-SCALE ROTATIONAL COHERENCE AND DIPOLAR INTERACTIONS IN A GAS OF ULTRACOLD POLAR MOLECULES § STATE COMPOSITION Composition of the states used in this work, given in the uncoupled basis(N, M_N, m_Rb, m_Cs), are |0⟩≡ 1.000|0,0,3/2,7/2⟩ |1⟩≡ 0.924|1,0,3/2,7/2⟩ - 0.370|1,1,1/2,7/2⟩ + 0.091|1,1,3/2,5/2⟩ |1̅⟩≡ 1.000|1,1,3/2,7/2⟩ |2̅⟩≡ 0.934|2,-1,3/2,7/2⟩ - 0.220|2,1,-1/2,7/2⟩ - 0.207|2,0,3/2,5/2⟩ + 0.168|2,0,1/2,7/2⟩ - 0.056|2,2,-3/2,7/2⟩ + 0.055|2,1,1/2,5/2⟩ + 0.039|2,2,-1/2,5/2⟩ - 0.005|2,2,1/2,3/2⟩ - 0.001|2,1,3/2,3/2⟩ - 0.001|2,2,3/2,1/2⟩ |2̂⟩≡ 1.000|2,2,3/2,7/2⟩ Here, each of the coefficients are given to 3 decimal places, and the dominant contribution in each case has its coefficient highlighted in bold. The energies and compositions of the rotational and hyperfine states are calculated with Diatomic-Py <cit.>. § LIFETIME MEASUREMENTS IN FIG. 3(A) We fit the results in Fig. 3(a) with the exponential functionN_mol(t)=N_mol^inite^-kt. Here,N_mol^initis the number of molecules present at timet=0andkcharacterises the rate of loss of molecules from the trap. We extract a loss rate ofk_1145=0.61(5)s^-1with 1145 nm light, andk_1064=0.56(7)s^-1with 1064 nm light. 
This is consistent with a rate of loss that does not depend on the wavelength. To estimate the upper limit to the photon scattering rate that is given in the main text, we calculate the difference in these loss rates, k_1145 - k_1064 = 0.05(9) s^-1, assuming no correlation in the uncertainty of the two measurements. At the 95% confidence level, this indicates that the difference in loss rate must be below 0.23 s^-1. This is broadly consistent with the expected single-photon scattering rate, which we calculate to be 0.4(1) s^-1 from the known linewidths of the transitions <cit.> and the peak laser intensity in our optical trap (20 kW cm^-2).

§ PULSE SEQUENCES

We use various pulse sequences to perform Ramsey interferometry depending on the combination of states targeted. These are shown schematically below.

[Figure: schematic of the microwave pulse sequences used for Ramsey interferometry.]

At the start of the sequence the molecules are always prepared in N=0. The solid line with red fill indicates microwave transitions driven between the N=0 and 1 rotational states, and the dotted line with blue fill indicates transitions driven between N=1 and 2. Sequences are used for the combinations (i) 1/√2(|0⟩+|1⟩), 1/√2(|0⟩+|1̅⟩), (ii) 1/√2(|0⟩+|2̂⟩), (iii) 1/√2(|1⟩+|2̅⟩). In each case the top (bottom) sequence shows the sequence without (with) a spin-echo pulse. To measure a Ramsey fringe, the phase of the last π/2 pulse is varied.

§ RESIDUAL LIGHT SHIFT CONTRIBUTIONS TO DECOHERENCE

In the absence of dipole-dipole interactions, the coherence time T_2^* is limited by the variation ΔE in the energy difference between the two states,

T_2^* = h/|ΔE|,

where h is the Planck constant. In Fig. 4(a), we observe decoherence without interactions which is suppressed by a spin-echo pulse. Here we discuss possible sources of this decoherence, specifically for the superposition 1/√2(|0⟩+|2̂⟩). For our calculations, we assume ΔE is the 2σ variation in the transition energy.

§.§ Instability in the trap laser frequency

The trap laser frequency is stabilised by a scanning transfer cavity lock, and we estimate the standard deviation in the laser frequency to be 0.76 MHz. For a 2σ change in the laser frequency, the differential polarisability between |0⟩ and |2⟩ changes. Moreover, we have directly measured the coefficient relating the light shift to the change in laser frequency to be 6.0(2)×10^-7 in our trap. The 2σ variation in the laser frequency therefore corresponds to 2×(6×10^-7)×0.76 MHz = 0.92 Hz, which we assume equal to ΔE/h, yielding a limit on T_2^* of 1.1 s.

§.§ Uncertainty in optimisation of magic detuning

We measure the magic detuning for a given state combination as shown in Fig. 2(b). For the states |0⟩ and |2̂⟩, our most precise measurement of the magic detuning is found using a Ramsey time of T = 175 ms, which has a 1σ uncertainty of 3 MHz. A systematic detuning of 3 MHz leads to an average light shift experienced by the molecules of (6×10^-7)×3 MHz = 1.8 Hz. Spatial variation in this light shift causes decoherence. We estimate this variation from the known geometry of the trap beam and assuming a thermal cloud of molecules at equilibrium. From this, the 2σ variation in the light shift experienced will be 13% of the average, i.e. there is a spatial variation in the light shift of 0.13×1.8 = 0.234 Hz. This puts a limit on T_2^* of 4.3 s.

§.§ 10 MHz frequency difference between beams

There is a 10 MHz frequency difference between the two beams that form the magic trap in order to eliminate interference effects. When set symmetrically about the magic frequency, there will be a light shift of (6×10^-7)×(±5 MHz) = ±3 Hz.
The effects from each beam are broadly cancelled as the intensities are set to be the same. However, the relative intensities of the beams will vary as molecules move around the trap. We estimate from the known geometry of the trap beams, and assuming a thermal cloud of molecules at equilibrium, that the 2σ variation in the beam balance is only 2%. This leads to a variation in the light shift of 0.12 Hz, which corresponds to a limit on T_2^* of 8.3 s.

§ PHASE SHIFT AS A FUNCTION OF RAMSEY TIME

We observed a phase shift in the Ramsey fringe as a function of the Ramsey time for non-interacting states with the spin-echo pulse, as discussed in the main text. This phase shift is shown in the graph below, with a quadratic fitted to guide the eye.

[Figure: fringe phase shift versus Ramsey time, with a quadratic fit to guide the eye.]

[Blackmore et al.(2023)] Blackmore2023: J. A. Blackmore, P. D. Gregory, J. M. Hutson, and S. L. Cornish, Diatomic-Py: A Python module for calculating the rotational and hyperfine structure of ^1Σ molecules, Computer Physics Communications 282, 108512 (2023).

[Fernley et al.(2023)] Fernley2023: L. M. Fernley, P. D. Gregory, S. L. Bromley, S. Kotochigova, and S. L. Cornish, in preparation (2023).
http://arxiv.org/abs/2306.03764v1
20230604131548
On the origin of Mega Radiohalos
[ "L. Beduzzi", "F. Vazza", "G. Brunetti", "V. Cuciti", "D. Wittor", "E. M. Corsini" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.CO" ]
Dipartimento di Fisica e Astronomia "Galileo Galilei", Università di Padova, Vicolo dell’Osservatorio 3, 35122 Padova, Italy Dipartimento di Fisica e Astronomia, Università di Bologna, Via Gobetti 92/3, 40121 Bologna, Italy Istituto di Radio Astronomia, INAF, Via Gobetti 101, 40121 Bologna, Italy Hamburger Sternwarte, University of Hamburg, Gojenbergsweg 112, 21029 Hamburg, Germany Osservatorio Astronomico di Padova, INAF, Vicolo Dell'Osservatorio 5, 35122, Padova, Italy We present a first attempt to investigate the origin of radio emitting electrons in the newly discovered class of Mega Radiohalos in clusters of galaxies. We study the evolution of relativistic electrons accreted by the external regions of a simulated cluster of galaxies at high resolution, including the effect of radiative losses and turbulent re-acceleration acting on relativistic electrons. We conclude that turbulent re-acceleration is sufficiently prolonged in time to produce a large reservoir of radio emitting electrons in the large regions illuminated by the Mega Radiohalos observed by LOFAR.

On the origin of Mega Radiohalos L. Beduzzi1 [email protected] F. Vazza2,3,4 G. Brunetti3 V. Cuciti2,3 D. Wittor4 E. M. Corsini 1 Received / Accepted
===============================================================================================================================================================================

§ INTRODUCTION

The unprecedented sensitivity to large-scale diffuse emission provided by the Low Frequency Array (LOFAR) is allowing the discovery of extended radio emission at the extreme periphery of clusters of galaxies <cit.>, including spectacular and volume-filling radio emission structures up to almost the virial radius in some cases <cit.>. In particular, <cit.> has reported the discovery of a new class of large diffuse radio sources outside the central regions of clusters of galaxies, well beyond the volume of "classical" radio halos, dubbed "Mega" radio halos (hereafter MegaRH, to be contrasted with the ClassicalRH introduced before). MegaRHs show a rather flat synchrotron brightness profile extending beyond the scale of the ClassicalRH emission detected in the same clusters. Within the limits of the small statistics of detected objects, the transition from ClassicalRHs to MegaRHs occurs at a radius which is about half of R_500 from the cluster centre, and the average emissivity of the MegaRHs is ∼ 20-25 times lower than the emissivity of ClassicalRHs. For two clusters, the combination of 50 and 140 MHz LOFAR observations allowed us to characterise the synchrotron spectrum of the MegaRHs, as a relatively steep spectrum with spectral index α∼-1.6. MegaRHs allow us to probe the outskirts of galaxy clusters in a ∼ 30 times larger volume than what has been done for decades before, and also to study particle acceleration and magnetic field amplification in the complex regime of very weakly collisional plasmas <cit.>. The observed steep spectrum in MegaRHs would favour a scenario where relativistic electrons are re-accelerated by second order Fermi mechanisms in a turbulent medium <cit.>. However, the discontinuity that is observed in the radio brightness profiles of ClassicalRHs and MegaRHs <cit.> suggests that the latter trace a turbulent component that is different from that of ClassicalRHs.
In fact, numerical simulations show the presence of a baseline level of large-scale turbulence at these cluster radii <cit.>, driven by the continuous accretion of matter onto the cluster and providing significant non-hydrostatic pressure support <cit.>. In this work, we present the first attempt to evaluate the lifecycle of relativistic electrons in the outskirts of galaxy clusters under realistic conditions, with an ad-hoc cosmological simulation and methods (Sec.<ref>). This way we can estimate under which circumstances the observed volume of MegaRHs can be filled with radio emitting electrons (Sec.<ref>). Our tentative conclusion, which is rather insensitive to the unconstrained initial distribution of relativistic electrons before the formation of the cluster, is that the relativistic particles transported in large volumes may experience second order Fermi re-acceleration operating on longer spatial and temporal timescales than usually studied for ClassicalRHs, and may support the emission observed in MegaRHs (Sec.<ref>).

§ METHODS AND SIMULATIONS

We produced a cosmological, adaptive mesh refinement simulation of a cluster of galaxies, using self gravity for ordinary and dark matter, radiative equilibrium cooling, and no other galaxy formation-related physics (e.g. star formation or feedback from supernovae). Magnetic fields are evolved assuming ideal Magneto-Hydrodynamics (MHD) using the hyperbolic cleaning method <cit.> (see previous work for tests and results with a similar numerical setup). We started from a simple uniform magnetic field seed of B_0 = 0.4 nG (comoving) in each direction at z=40. The simulated cluster has a total (ordinary+dark) matter mass of M_100 = 3.8·10^14 M_⊙ and a virial radius of R_100 = 1.52 Mpc at z=0. It is the most massive object forming in a total volume of (100 Mpc/h)^3, sampled with a uniform root grid of 128^3 cells and dark matter particles (with a mass resolution of m_DM0 = 3·10^10 M_⊙), and further refined with 4 additional levels of ×2 refinement in spatial resolution each (and ×2^3 refinement in mass resolution of the dark matter component at each refinement step) with nested regions of decreasing size. The inner (15.6 Mpc)^3 volume, where the cluster forms, is resolved with a maximum dark matter mass resolution of m_DM = 7.3·10^6 M_⊙ (i.e. m_DM0/8^4), and with a uniform spatial resolution of 70 kpc/cell. We also allowed for 2 extra levels of adaptive mesh refinement within this innermost region, to best model turbulence, by refining on local gas or dark matter overdensities, up to a final maximum resolution of ≈ 12.2 kpc/h = 18.0 kpc (comoving). This resulted in central magnetic field values of ∼ 0.5 μG in the cluster centre, which indicates that the amplification efficiency of the small-scale dynamo captured at this finite spatial resolution is too low <cit.>. In any case, for all our calculations we used a rescaled magnetic field intensity, to take into account the unresolved small-scale amplification, following <cit.>.

§.§ Energy evolution of relativistic electrons

In post-processing, we injected and propagated ∼ 10^5 passive tracers, which allowed us to track the Lagrangian history of gas matter in this system with the CRaTer code <cit.>.
The distribution of tracer particles was created by sampling the 3-dimensional gas distribution produced by the MHD simulation at a fixed gas mass per tracer, and by evolving the position of the particles forward in time with a simple time integrator, r⃗(t+dt) = r⃗(t) + v⃗_gas dt, after interpolating the 3d velocity field with a cloud-in-cell spatial interpolation method, as explained in <cit.>. We used the time resolution of our finest snapshot saving, dt ≈ 90 Myr (nearly constant), which gave us 103 snapshots to analyse. Figure <ref> shows six epochs in our simulation, and the evolving positions of tracers. Each tracer records the time evolution of various physical quantities of interest (gas density, velocity, divergence, vorticity, temperature, and magnetic field intensity), and is meant to track the spatial propagation of families of relativistic electrons frozen onto the gas by the tangled magnetic field[We can safely neglect the unaccounted effect of cosmic ray diffusion, because for ∼ GeV cosmic ray electrons in ∼ μG magnetic fields, the diffusion length within a timestep is much smaller than our spatial resolution (i.e. the typical spatial diffusion scale is l_diff ≤ 100 pc for each single timestep for radio emitting cosmic ray electrons, while our spatial resolution is 18 kpc).]. Unlike in more computationally expensive approaches, in this paper we are interested in the energy evolution of electrons rather than in their exact spectra, hence we do not make use of Fokker-Planck methods and we simply assume that each tracer samples the energy evolution of electrons at a specific energy. We use the "ultra relativistic" approximation γ ≈ E/mc^2 and we compute the combination of loss and gain terms as

γ̇ ≈ -|γ/τ_rad| - |γ/τ_c| + γ/τ_adv + |γ/τ_acc|,

where τ_rad and τ_c are the loss timescales for the radiative and Coulomb processes, τ_adv is the timescale for expansion (compression), and τ_acc represents the acceleration timescale due to turbulent re-acceleration. All loss terms are as in <cit.>. The median gas velocity dispersion of our tracers is σ_v ∼ 500 km/s, and up to ∼ 700 km/s during mergers. We use the second order Fermi acceleration mechanism proposed by <cit.>, which assumes super-Alfvenic incompressive turbulence. This mechanism was shown to be a promising candidate to explain the production of diffuse radio emission at the periphery of clusters of galaxies <cit.>. The acceleration timescale for this process is:

τ_acc = 125 Myr · (L[kpc]/500) · B[μG] / ( √(n[cm^-3]/10^-3) · (δV_turb[cm/s]/10^8)^3 ),

in which δV_turb = |∇×v⃗| L, i.e. the gas vorticity measured across L (= 54 kpc in our case, considering a stencil of 3 cells), is our best estimate of the solenoidal turbulent velocity. In the case of a Kolmogorov spectrum, Eq. 1 does not depend on the exact choice of the scale L where we measure the vorticity, because the turbulent energy flux (F_turb ∝ δV_turb^3/L) is constant within the inertial range of turbulence <cit.>.

§ RESULTS

The aim of this work is to test whether the turbulence measured in the external regions of the simulated cluster is sufficient to generate a population of radio emitting electrons filling a large intracluster volume, similar to what is observed in MegaRHs by LOFAR. Furthermore, by sampling the cluster volume with tracers we want to evaluate whether the evolution and effective lifetime of relativistic electrons in the region of ClassicalRHs are similar to those of electrons found in the periphery.
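To make the role of the different quantities in the acceleration timescale explicit, the following minimal Python sketch (not the CRaTer post-processing code; the function name and the example values of L, B, n and δV_turb are purely illustrative) evaluates τ_acc for a single tracer, assuming the (δV_turb/10^8 cm s^-1)^3 term divides the expression, as implied by the Kolmogorov scaling argument above.

```python
import numpy as np

def tau_acc_myr(L_kpc, B_muG, n_cm3, dV_turb_cms):
    """ASA re-acceleration timescale of Eq. (1), returned in Myr."""
    return (125.0 * (L_kpc / 500.0) * B_muG
            / np.sqrt(n_cm3 / 1e-3)
            / (dV_turb_cms / 1e8) ** 3)

# Illustrative values only (not tracer data): vorticity measured across
# L = 54 kpc, B ~ 0.5 muG, n ~ 1e-4 cm^-3, delta-V_turb ~ 500 km/s.
print(tau_acc_myr(L_kpc=54.0, B_muG=0.5, n_cm3=1e-4, dV_turb_cms=5e7))
```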
First, we measure the typical evolution of the physical conditions of the gas matter accreted onto the ClassicalRH and MegaRH regions at the end of the simulation. We can do that in a straightforward way, because our tracer approach allows us to study in time the Lagrangian evolution of the gas matter found anywhere in the simulation at a given epoch. Based on the radio surface brightness profiles of the four clusters studied in <cit.>, we define the ClassicalRH region as 0 ≤ r ≤ 0.4 R_500, where r is the radial distance from the centre of mass of our cluster at any given epoch, and the MegaRH region as the region in the 0.4 R_500 ≤ r ≤ R_500 range. Figure <ref> gives the time evolution of the median values of magnetic field, gas density, curl and divergence of the gas velocity field, and temperature for tracers found in the two regions towards the end of our simulation. This clearly shows that the (thermo)dynamical history of gas matter ending up in either the ClassicalRH or the MegaRH region is very similar. It should be remarked that, although the average volume-weighted gas density in the MegaRH is about a factor ten smaller than in ClassicalRHs, our tracers give a mass-weighted average of all fields, biased towards the clumpiest end of the gas distribution. Also, by studying the trajectories of tracers and the evolution of their average distance from the cluster centre, we infer that the bulk of the gas accreted by the MegaRH regions has already crossed the cluster centre once, and has been dispersed to larger radii over the last ∼ 1-2 Gyr. The relativistic electrons carried by such accreted gas have thus already been subject to significant turbulent re-acceleration (and compression) once in their past, while in the final part of their evolution they age in a less dense and magnetised environment than in ClassicalRHs, which makes their loss timescale slightly longer (although inverse Compton losses are always present). This makes the gas matter in the MegaRH region suitable for the re-acceleration of relativistic electrons up to and beyond γ ∼ 10^3 (on average), allowing a fraction of them to become radio emitting, similar to the matter ending up in the ClassicalRH region. This is shown in Figure <ref>, which presents the evolution of the median timescales for the acceleration and loss processes for electrons with fixed energies: γ = 10^2, 10^3 or 10^4. The same figure also shows the predicted evolution of γ(t) for our population of tracers, obtained by directly integrating γ(t) = γ_inj + ∫ (dγ/dt) dt in time, where the loss and acceleration terms are computed as in Sec. 2.1 and for which we tested two different initial values of the "seed electrons", γ_inj = 100-1000. Even in the (very conservative) scenario in which all seed relativistic electrons have γ_inj = 10^2 before their later accretion onto the main cluster, the combination of compression and turbulent re-acceleration in the subsequent dynamics allows them to reach γ ∼ 500-700, on average, when the cluster merger occurs. The fact that the electrons accreted in these regions can keep their initial energy (and even increase it by a factor of a few), instead of losing it within their respective cooling time, is a fundamental finding of this work. For example, taking as a reference γ = 10^2 electrons in Fig. <ref>, it is clear that while their loss timescale at the epoch of 8 Gyr (z ≈ 0.6) is τ_loss ∼ 0.7 Gyr, their energy is not lost after τ_loss, but rather keeps increasing on average for the following 5 Gyr.
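The time integration of γ(t) described above can be sketched as a simple explicit update along each tracer history. The snippet below is only an illustration under stated assumptions (explicit Euler step, fixed dt ≈ 90 Myr, loss and gain timescales supplied per snapshot); it is not the authors' integrator, and all names are placeholders.

```python
import numpy as np

def evolve_gamma(gamma_inj, tau_rad, tau_c, tau_adv, tau_acc, dt_myr=90.0):
    """Integrate the electron Lorentz factor along one tracer history.

    tau_* are per-snapshot timescales in Myr; losses enter with a minus
    sign, turbulent re-acceleration with a plus sign, and the sign of
    tau_adv encodes compression (gain) versus expansion (loss).
    """
    gamma = gamma_inj
    history = [gamma]
    for i in range(len(tau_rad)):
        dgamma_dt = (-gamma / tau_rad[i]      # synchrotron + inverse Compton
                     - gamma / tau_c[i]       # Coulomb losses
                     + gamma / tau_adv[i]     # adiabatic compression/expansion
                     + gamma / tau_acc[i])    # turbulent (ASA) re-acceleration
        gamma = max(gamma + dgamma_dt * dt_myr, 1.0)
        history.append(gamma)
    return np.array(history)
```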
This happens because the mild level of turbulent re-acceleration experienced by electrons accreted onto the MegaRH region (or, additionally, adiabatic compression in the case of the ClassicalRH) can increase their energy on a similar timescale. Therefore, in this system the turbulent re-acceleration induced by the major merger (epoch of ≈ 12 Gyr) acts on an already "pre-accelerated" pool of mildly relativistic electrons. Next, we estimate which fraction of the accreted gas matter may produce radio emission at LOFAR frequencies, given its local plasma conditions and its past sequence of loss or gain mechanisms. In the turbulent re-acceleration scenario, there is a critical Lorentz factor γ_c at which the total cooling timescale t_cool becomes comparable to the acceleration timescale t_acc. Correspondingly, the radio emission spectrum will steepen at a frequency ν_c = ξ ν_b, where ν_b is the break frequency and ξ ∼ 6-8; the break frequency can be estimated from the maximum value of the Lorentz factor of the electrons associated with each tracer, i.e. ν_b ≈ 4.6 ⟨γ⟩^2 · (B/μG) [Hz] (1+z)^-1 <cit.>. This allows us to translate the γ(t) measured on board our tracers as a function of time into an estimate of the maximum observable synchrotron emission frequency. As a first order approximation, the fraction of tracers for which ν_c is larger than the LOFAR High Band Antenna (HBA, 140 MHz) or LOFAR Low Band Antenna (LBA, 50 MHz) central observing frequencies can tell us the fraction of the simulated volume which will become potentially radio observable as a result of the turbulent re-acceleration process. This result can also be used straightforwardly to predict the observability of MegaRHs with other radio telescopes routinely used to study ClassicalRHs (e.g. the uGMRT, the MWA, and MeerKAT), as well as with the future Square Kilometer Array <cit.>. The upper panels of Figure <ref> show the cumulative distribution of the emission frequencies from all our tracers at three different epochs (just before, during and after the last major merger of the cluster) and for different values of the initial energy of electrons, separately for those located in the ClassicalRH and in the MegaRH. The lower panels show the time evolution of the fraction of tracers in which the computed emission frequency ν_c is larger than the LOFAR HBA observing frequency (150 MHz), as well as the radial profile of the same fraction for the same three reference epochs of the upper panels. In both regions, a large fraction of the tracers gets accelerated beyond the 50-140 MHz range of LOFAR: ≈ 31-79% of the volume of the ClassicalRH and ≈ 22-57% of the volume of the MegaRH should emit detectable radio emission in the three investigated epochs. The above fractions change only little when using γ_inj = 10^2 instead of γ_inj = 10^3. Thus, during mergers the average energy of particles is increased by turbulent re-acceleration to a level at which there is approximate balance between the gain and loss terms (from synchrotron and inverse Compton emission), and the memory of the initial energy of the population of seed electrons is nearly lost. The full time evolution shows that, even if major mergers obviously enhance the fraction of radio emitting particles, the continuous turbulent re-acceleration experienced by the accreted gas always keeps a significant fraction of them close to LOFAR frequencies.
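A hedged sketch of how the fraction of potentially radio-observable tracers could be estimated from the quantities introduced above: it applies ν_b ≈ 4.6⟨γ⟩^2 (B/μG) Hz (1+z)^-1 and ν_c = ξν_b with ξ = 7 assumed as a mid-range value of the quoted interval 6-8. The tracer arrays here are mock data for illustration only, not simulation output.

```python
import numpy as np

def observable_fraction(gamma, B_muG, z, xi=7.0, nu_obs_hz=140e6):
    """Fraction of tracers whose steepening frequency exceeds nu_obs_hz."""
    nu_b = 4.6 * gamma**2 * B_muG / (1.0 + z)   # break frequency [Hz]
    nu_c = xi * nu_b                            # steepening frequency [Hz]
    return np.mean(nu_c > nu_obs_hz)

rng = np.random.default_rng(0)
gamma = rng.lognormal(np.log(800.0), 0.5, size=10_000)   # mock electron energies
B_muG = rng.lognormal(np.log(0.3), 0.7, size=10_000)     # mock field strengths
print(observable_fraction(gamma, B_muG, z=0.1, nu_obs_hz=50e6),
      observable_fraction(gamma, B_muG, z=0.1, nu_obs_hz=140e6))
```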
Intriguingly, the radial decline of this fraction qualitatively matches the drop observed in the radial profile of the surface brightness of real MegaRHs <cit.>. However, in the future we will need to resort to more computationally demanding Fokker-Planck methods <cit.> in order to fully simulate the synchrotron emission spectra of these accelerated cosmic ray electrons.

§ DISCUSSION AND CONCLUSIONS

The motivation of this work is the recent observational discovery of Mega Radiohalos in clusters of galaxies <cit.>. We produced a new cosmological simulation of the evolution of a cluster of galaxies to test whether the turbulence generated in its outskirts is able to maintain sufficiently energetic (∼ GeV) electrons in a fraction of the volume. We simulated the formation of one cluster of galaxies with the cosmological code and used Lagrangian tracer particles to follow the energy evolution of families of electrons under the effect of radiative and Coulomb losses, adiabatic changes, and turbulent re-acceleration in the "adiabatic stochastic acceleration" scenario <cit.>. We found that the formation of MegaRHs, despite their unprecedented size, can be understood within the framework of the turbulent re-acceleration model already used to interpret the ClassicalRH phenomenology <cit.>: regardless of the initial energy of cosmic ray electrons, the integrated effect of turbulent re-acceleration by cluster-wide turbulent motions is enough to make 50% of our tracers in the MegaRH volume able to radiate in the LOFAR band (50-140 MHz). Our Lagrangian analysis of the MegaRH region suggests a few crucial and overlooked aspects of electron acceleration in these peripheral regions. First, our tracer analysis allowed us to show that most of the matter found in the MegaRH comes from the disruption and mixing of gas matter initially located in clumps, which has been accreted by the main cluster some Gyr before the final merger, rather than from smooth gas accretion (Fig. 1). Connected to this, the dynamical histories of the particles in the ClassicalRH or in the MegaRH are very similar, except in the last 1 Gyr after the latest major merger (Fig. 2). Thus, a large fraction of the baryons has already crossed the innermost cluster regions at least once, and the relativistic electrons in the MegaRH have been processed at least once by the merger. Finally, our results call for a substantial update of the theoretical picture of the lifetime of relativistic electrons in the ICM. In fact, such complex turbulent dynamics generates multiple episodes of re-acceleration, which overall make the effective lifetime of electrons longer than previously thought (Fig. 3). This implies that the pool of "fossil" electrons in the ICM may be significantly more energetic than expected so far. In this work, we did not simulate in detail the possible injection process of the relativistic electrons later used for the production of radio emission, as in other works <cit.>. However, the fact that the bulk of the gas content of the MegaRH comes from the accretion of already formed halos at z=2, as shown by our simulation, makes it extremely likely that each of these halos has a significant content of relativistic electrons, following from the activity of star formation and active galactic nuclei, which both peak around this epoch.
In this sense, our initial energy of γ_inj = 10^2 has to be regarded as a very conservative limit on the actual energy of the relativistic electrons carried by the accreted halos. Building on this first positive test of the turbulent re-acceleration hypothesis for the origin of MegaRHs, in forthcoming works we will perform complete simulations of both the possible seeding process of relativistic electrons and their energy evolution as a function of time, for a larger sample of clusters.

§ ACKNOWLEDGEMENTS

F.V. acknowledges the financial support by the H2020 initiative, through the ERC StG MAGCOW (n. 714196) and from the Cariplo "BREAKTHRU" funds Rif: 2022-2088 CUP J33C22004310003. E.M.C. is supported by MIUR grant PRIN 2017 20173ML3WW-001 and Padua University grants DOR2020-2022.
http://arxiv.org/abs/2306.11223v1
20230620011712
Radar Sensing via OTFS Signaling
[ "Kecheng Zhang", "Zhongjie Li", "Weijie Yuan", "Yunlong Cai", "Feifei Gao" ]
eess.SP
[ "eess.SP" ]
By multiplexing information symbols in the delay-Doppler (DD) domain, orthogonal time frequency space (OTFS) is a promising candidate for future wireless communication in high-mobility scenarios. In addition to the superior communication performance, OTFS is also a natural choice for radar sensing since the primary parameters (range and velocity of targets) in radar signal processing can be inferred directly from the delay and Doppler shifts. Though there are several works on OTFS radar sensing, most of them consider integer parameter estimation only, while the delay and Doppler shifts are usually fractional in the real world. In this paper, we propose a two-step method to estimate the fractional delay and Doppler shifts. We first perform a two-dimensional (2D) correlation between the received and transmitted DD domain symbols to obtain the integer parts of the parameters. Then a difference-based method is implemented to estimate the fractional parts of the delay and Doppler indices. Meanwhile, we implement a target detection method based on a generalized likelihood ratio test, since the number of potential targets in the sensing scenario is usually unknown. The simulation results show that the proposed method can obtain the delay and Doppler shifts accurately and determine the number of sensing targets with a high detection probability.

§ INTRODUCTION

Future wireless systems, such as the sixth-generation (6G) wireless networks, are expected to provide both communication and sensing services simultaneously <cit.>.
It is believed that sensing will play an important role together with communications in the next-generation wireless network <cit.>. For example, the future autonomous vehicle will receive a lot of information, including high-resolution maps and real-time navigation information, from the base station (BS). Meanwhile, the vehicle itself should have robust and high-accuracy sensing ability to avoid obstacles <cit.>, which calls for the implementation of integrated sensing and communication (ISAC). To realize ISAC, several studies consider orthogonal frequency division multiplexing (OFDM) as the transmitted signal waveform <cit.>. OFDM has good robustness and high spectral efficiency under time-invariant frequency selective channels <cit.>, and low detection complexity for radar target sensing. However, OFDM suffers from several drawbacks such as Doppler intolerance and a high peak-to-average power ratio (PAPR). In high-mobility environments, the communication performance of OFDM will be degraded significantly, since such channels are fast time-varying and the orthogonality between OFDM subcarriers will be broken due to the severe Doppler effect. Moreover, it will also be challenging to implement OFDM in high-frequency communication systems due to the high PAPR <cit.>. Under these conditions, it is necessary to develop a new waveform that can support both robust radar sensing and high-quality wireless communications in high-mobility environments. A new modulation scheme called orthogonal time frequency space (OTFS) modulation was proposed recently for reliable communications over multi-path channels in which the delay and Doppler shifts differ in each path. By describing the channel in the delay-Doppler (DD) domain <cit.>, OTFS provides an alternative representation of the fast time-varying channel produced by moving objects in wireless environments. With this representation, the information symbols are modulated over the two-dimensional DD domain, which has extra benefits compared to the traditional time-frequency (TF) domain modulation scheme. For example, compared to the fast time-varying channel in the TF domain, the channel in the DD domain is usually sparse and quasi-static <cit.> in high-mobility wireless environments, which can simplify the channel estimation procedure and provide robust communication performance <cit.>. Moreover, the range and velocity parameters of targets can be directly inferred from the delay and Doppler shifts, which are essential parameters in radar signal processing, making OTFS a natural choice for realizing ISAC. In our previous work <cit.>, we showed that OTFS demodulation is exactly the range-Doppler matrix calculation procedure in radar sensing. The complexity of the ISAC system will therefore be reduced greatly, since sensing and communication can be carried out through the same calculation process if OTFS signaling is implemented. There has been much research on communication via the OTFS waveform <cit.>, which has shown the superior communication performance of OTFS in high-mobility scenarios, but there are far fewer studies <cit.> on OTFS radar sensing. For example, in <cit.>, the authors proposed an OTFS-based ISAC system using the maximum likelihood (ML) algorithm to estimate the range and velocity of targets. The proposed system can maintain a superior communication performance compared to OFDM and achieve the radar estimation performance bound at the same time.
The work <cit.> proposed an OTFS-based radar system using the matched-filter algorithm for target sensing. By placing randomly generated data symbols with the same power in the whole OTFS frame, the potential targets together with the associated ranges and velocities can be estimated via matched filtering. In <cit.>, a generalized likelihood ratio test-based detector using the OTFS waveform was proposed. Different from the two works mentioned above, this work performs sensing via the time-domain OTFS signal directly. The results show that the estimation performance is improved compared to the standard fast Fourier transform (FFT) based OFDM radar. However, the potential of OTFS radar sensing requires further exploration. To achieve good performance, the computational complexity of the algorithm in <cit.> increases cubically with the frame size, which is relatively high. In the algorithm proposed in <cit.>, both the delay and Doppler shifts are considered as integers. However, this may not be practical in real-world wireless networks, where the duration of one OTFS frame is limited, which can result in insufficient Doppler resolution. Hence, the fractional Doppler effect is inevitable in reality. In order to achieve accurate radar sensing, it is necessary to study the use of OTFS in the presence of off-grid targets: although an integer delay is typically assumed sufficient for communication design, both fractional delay and Doppler shifts must be taken into account when using OTFS for radar sensing. The need to address these issues motivates the development of a method for radar sensing using OTFS in the presence of fractional delay and Doppler shifts. In this paper, inspired by the pulse compression technique in radar sensing <cit.>, a two-step method for estimating the fractional delay and Doppler indices is proposed. In the ISAC scenario, the transmitted OTFS frame is designed to provide both communication and sensing services, which means the delay and Doppler domains are fully occupied by information symbols. Under this condition, although the received data is demodulated into the DD domain, it is hard to locate the delay and Doppler shifts of targets due to the overlapped responses from other data symbols. Thus, we perform a 2D correlation-based method to first obtain the integer parts of the delay and Doppler parameters. Then, we propose a difference-based method to estimate the fractional parts of the parameters. Since the number of potential targets is usually unknown in the sensing scenario, we propose a target detection algorithm based on the generalized likelihood ratio test (GLRT). The simulation results show that the proposed algorithm can estimate the delay and Doppler shifts associated with multiple targets accurately.

Notations: Boldface lowercase and capital letters denote vectors and matrices, respectively. The superscripts T, *, and H represent the transpose, conjugate, and Hermitian operations, respectively; |·| denotes the modulus of a complex number or the cardinality of a set; ∥·∥ denotes the L^2 norm; 𝔼[x] denotes the expectation of x; ∂ is the partial derivative operator; 𝒪 is an asymptotic notation describing the order of the error; [·]_N denotes the modulo operation with divisor N.

§ SYSTEM MODEL

In this section, we first recap the basic concepts of OTFS and then present the input-output relation of OTFS symbols in the delay-Doppler (DD) domain.
§.§ Basic Concepts of OTFS

Let N∈ℕ and M∈ℕ denote the number of time slots and subcarriers of an OTFS frame, respectively, where ℕ is the set of positive integers. The discrete delay-Doppler plane is denoted as Γ = {(k/(NT), l/(MΔf)) | k∈[0,N-1], l∈[0,M-1]}. Denote the duration of one time slot as T and the subcarrier spacing as Δf, so that the duration of one OTFS frame is NT and the occupied bandwidth of the whole system is MΔf. The resolutions of the delay and Doppler shifts are 1/(MΔf) and 1/(NT), respectively. The cross-ambiguity function between the pulse shaping filters at the transmitter, g_tx(t), and at the receiver, g_rx(t), can be written as A_g_rx,g_tx(t,f) ≜ ∫ g_tx(t') g_rx^*(t'-t) e^{j2π f(t'-t)} dt'. For ease of exposition, we focus on the case where the transmitter and receiver of OTFS employ ideal pulse shaping filters <cit.>, which have the bi-orthogonal property <cit.> A_g_rx,g_tx(t,f)|_{t=(n-n')T - l_τ/(MΔf), f=(m-m')Δf - k_ν/(NT)} = δ[n-n'] δ[m-m'] q_τ_max(t-nT) q_ν_max(f-mΔf), where q_a(x)=1 for x∈(-a,a) and zero otherwise.

§.§ Input-output Relation of OTFS Transmission

The OTFS input-output relation is summarized in Figure <ref>. Below is a detailed description of the OTFS transmission procedure. The information bit sequence is mapped into a set of symbols in the DD domain, {X_DD[k,l] | k=0,...,N-1, l=0,...,M-1}, where k and l are the Doppler and delay indices, respectively. X_DD[k,l]∈𝔸, where 𝔸={a_i∈ℂ | i=1,2,...,|𝔸|} is the modulation alphabet (e.g. QAM). The OTFS symbols are transformed from the DD domain into the time-frequency (TF) domain via the inverse symplectic finite Fourier transform (ISFFT), X_TF[n,m]=∑_k=0^N-1∑_l=0^M-1 X_DD[k,l] e^{j2π(nk/N - ml/M)}, for n=0,...,N-1 and m=0,...,M-1. Then the TF domain modulator converts the symbols X_TF[n,m] to a continuous time waveform s(t) with the transmitter pulse shaping filter g_tx(t) via the Heisenberg transform <cit.>: s(t)=∑_n=0^N-1∑_m=0^M-1 X_TF[n,m] g_tx(t-nT) e^{j2π mΔf(t-nT)}. After passing through a time-varying channel, the signal s(t) arrives at the receiver. The received signal r(t) is given by r(t)=∫∫ h(τ,ν) s(t-τ) e^{j2πν(t-τ)} dτ dν + w(t), where w(t) is the additive white Gaussian noise (AWGN) process with one-sided power spectral density (PSD) 𝒩_0, and h(τ,ν)∈ℂ is the complex baseband channel impulse response, representing the channel response to an impulse with delay τ and Doppler shift ν, given by h(τ,ν)=∑_i=1^P h_i δ(τ-τ_i) δ(ν-ν_i), where P∈ℤ denotes the number of targets in the sensing field, and h_i, τ_i and ν_i denote the reflection coefficient, delay, and Doppler shift associated with the ith target, respectively. Denote the range and relative velocity of the ith target as R_i and v_i; then the round-trip delay τ_i∈ℝ and the Doppler frequency ν_i∈ℝ are represented as τ_i=2R_i/c=l_τ_i/(MΔf), ν_i=2f_c v_i/c=k_ν_i/(NT), where c is the speed of light, f_c is the carrier frequency, and l_τ_i∈ℝ and k_ν_i∈ℝ are the delay and Doppler indices of the ith target. In this paper, we consider fractional delay and Doppler indices, and represent them as l_τ_i=l_i+ι_i and k_ν_i=k_i+κ_i, where l_i∈ℤ and k_i∈ℤ are the integer parts of the indices associated with the ith target, and ι_i∈ℝ and κ_i∈ℝ are the fractional parts.
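Since the ISFFT above and the SFFT applied at the receiver (next subsection) are simply a pair of DFT/IDFT operations along the Doppler and delay axes, they are easy to sketch numerically. The snippet below is a generic illustration, not the authors' implementation; it uses the unitary (symmetric 1/√(NM)) normalisation so that the two transforms are exact inverses, whereas the equations in the text use a different normalisation convention.

```python
import numpy as np

def isfft(X_dd):
    """DD -> TF: e^{+j2*pi*nk/N} along Doppler (axis 0), e^{-j2*pi*ml/M} along delay (axis 1)."""
    N, M = X_dd.shape
    return np.sqrt(N / M) * np.fft.fft(np.fft.ifft(X_dd, axis=0), axis=1)

def sfft(X_tf):
    """TF -> DD: the inverse of isfft above (unitary normalisation)."""
    N, M = X_tf.shape
    return np.sqrt(M / N) * np.fft.fft(np.fft.ifft(X_tf, axis=1), axis=0)

# round-trip check with random QPSK symbols on a small 16 x 32 grid
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], (16, 32)) + 1j * rng.choice([-1, 1], (16, 32))) / np.sqrt(2)
assert np.allclose(sfft(isfft(qpsk)), qpsk)
```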
At the receiver, the received signal r(t) is transformed into the TF domain by Wigner transform <cit.>, which can also be expressed as the cross-ambiguity function A_g_rx,r(t,f) between the receiver pulse shaping filter and the received signal, given by Y_TF(t,f)=A_g_rx,r(t,f) Δ=∫ g_rx^*(t^'-t)r(t^')e^-j2π f(t^'-t)dt^'. By sampling Y(t,f), we have the discrete output as Y_TF[n,m]=Y_TF(t,f)|_t=nT,f=mΔ f, for n=0,...,N-1 and m=0,...M-1. Then, by applying SFFT on Y_TF[n,m], we have the symbols Y_DD[k,l] in the delay-Doppler domain Y_DD[k,l]=1/√(NM)∑_n=0^N-1∑_m=0^M-1Y_TF[n,m]e^-j2π(nk/N-ml/M). Under the case of ideal pulse shaping filter <cit.>, the input-output relationship of OTFS symbols is given by Y_DD[k,l]=∑_n=0^N-1∑_m=0^M-1 X_DD[n,m]h_ω[k-n,l-m] +Z_DD[k,l], where Z_DD∼𝒞𝒩(0,σ^2𝐈) is the effective noise in the DD domain, and h_ω[k,l] is the effective channel in DD domain obtained by h_ω[k,l]=∑_i=1^Ph_iω(k-k_ν_i,l-l_τ_i)e^-j2πν_iτ_i, where ω(k-k_ν_i,l-l_τ_i) is the sampling function, and ν_i and τ_i are given in (<ref>). By adopting the rectangular window <cit.> at both the transmitter and receiver in the TF domain, the sampling function is simplified to be ω(k-k_ν_i,l-l_τ_i)=𝒢(k-k_ν_i)ℱ(l-l_τ_i), 𝒢(k-k_ν_i)Δ=1/N∑_k^'=0^N-1e^-j2π(k-k_ν_i)k^'/N, ℱ(l-l_τ_i)Δ=1/M∑_l^'=0^M-1e^j2π(l-l_τ_i)l^'/M. According to <cit.>, the non-zero entries of the DD domain effective channel are localized identically for both the ideal and rectangular pulse shaping filter, and they only differ from an additional phase offset. Therefore, the designed channel estimation algorithm in this paper can be easily extended to the case of a rectangular pulse shaping filter. § OTFS BASED RADAR SENSING In this section, we introduce the proposed target detection method and the 2D correlation-based method for parameter estimation. §.§ 2D Correlation-based Operation We will describe the 2D correlation-based parameter estimation algorithm in this subsection. Though the received OTFS symbol matrix is represented in the delay and Doppler domain, we cannot localize the targets-of-interest directly from 𝐘_DD due to the existence of information symbols. After passing the time-varying channel, the DD domain symbols under different delay and Doppler bins will overlap with each other, which makes the received DD domain signals contain both the channel responses and the overlapped responses from the DD domain information symbols. Inspired by pulse compression radar sensing, a 2D correlation-based estimator, which can be considered as pulse compression along both the delay and Doppler axes, is implemented to obtain the delay and Doppler parameters of the targets. Denote the matrix after 2D pulse compression (correlation) as 𝐕, then the accumulated correlation coefficient under different delay and Doppler indices can be expressed as V[k,l]=∑_n=0^N-1 ∑_m=0^M-1Y_DD^*[n,m] × X_DD[[n-k]_N,[m-l]_M] To have a better illustration of the proposed method, we establish a toy example of OTFS radar sensing. In this example, we assume there are P=4 targets, and the normalized quadrature phase shift keying (QPSK) information symbols are generated randomly. Let set both M and N to be 32, and the delay and Doppler indices associated with different targets are set as {14.29, 7, 3.37, 11.12} and {11.72, 2, 5.06, 22.65}, respectively. The DD domain received symbol matrix under this specific scenario is shown in Fig.<ref>, and the matrix after pulse compression is represented in Fig.<ref>. 
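The 2D correlation V[k,l] defined above can be sketched directly with circular shifts of the transmitted DD frame; peaks of |V[k,l]| then mark candidate integer delay-Doppler bins. The helper below is purely illustrative (and has O(M^2 N^2) complexity), not the authors' implementation; for large frames one would replace the double loop with FFT-based circular correlation.

```python
import numpy as np

def dd_correlate(Y_dd, X_dd):
    """V[k, l] = sum_{n,m} Y*[n, m] X[(n-k) mod N, (m-l) mod M]."""
    N, M = X_dd.shape
    V = np.zeros((N, M), dtype=complex)
    for k in range(N):
        for l in range(M):
            shifted = np.roll(np.roll(X_dd, k, axis=0), l, axis=1)
            V[k, l] = np.sum(np.conj(Y_dd) * shifted)
    return V
```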
From Fig.<ref>, the received symbol matrix Y_DD is dense due to the overlapped responses from DD domain information symbols. It is hard to identify the targets from this matrix. However, after performing a 2D correlation operation, the targets are more localized. We can view this procedure as a special DD domain pulse compression, which has a similar function in radar sensing to enhance the acquisition of delay and Doppler responses. §.§ Target Detection via Generalized Likelihood Ratio Test After obtaining the 2D correlation matrix, the number of targets, i.e., P, remains unknown. Hence, we propose an algorithm to detect the targets sequentially. Once a target is detected, the integer parts of the corresponding parameters can be estimated at the same time. When no more targets are detected, the algorithm stops. Before introducing the details of the target detection algorithm, let's first rewrite (<ref>) in an alternative matrix form, which will make the representation of the detection procedure clearer. By substituting (<ref>) and (<ref>) into (<ref>), after some straightforward linear algebra calculations, we have 𝐯=∑_i=1^Ph_i^*e^j2πl_τ_ik_ν_i/MN𝐗𝐖^H_(τ_i,ν_i)𝐱^*+𝐗^T𝐳^* where the i-th entry of the vector 𝐱^* is the (k_1,l_1)-th entry of the information symbol matrix 𝐗, and 𝐗 is the rearranged matrix of transmitted symbols, i.e., the circulant shifts of 𝐗, the (i, j)-th entry of the matrix is represented as X[i,j]=x[[k_1-k_2]_N,[l_1-l_2]_M]. in which i=k_1M+l_1 and j=k_2M+l_2, and k_(·)∈[0, N-1] and l_(·)∈[0, M-1] are the row and column indices of the original information matrix 𝐗. The (kM+l)-th element in vector 𝐡 is the (k,l)-th entry of the effective channel 𝐡_ω, and 𝐖^H_(τ_i,ν_i) is the matrix form of the effective channel sampling function. The (i,j)-th element of matrix 𝐖, in which i=k_1· M+l_1 and j=k_2· M+l_2, is given by W[i,j]=ω(k_2-k_1-k_ν_i,l_2-l_1-l_τ_i), where ω(ν,τ) is given in (<ref>). In order to detect the number of targets, we model the detection procedure as a binary hypothesis testing, in which hypothesis ℋ_0 and ℋ_1 represent the absence and presence of the ith target, respectively. The observation under the two hypotheses can be given by 𝐯={ 𝐗𝐳^* , under ℋ_0 h^* e^j2πl_τk_ν/MN𝐗𝐖^H_(τ,ν)𝐱^* +𝐗𝐳^* , under ℋ_1.. To solve (<ref>), we treat h, τ, and ν as deterministic unknown variables and resort to the generalized likelihood ratio test (GLRT) <cit.>, which is expressed as Λ(𝐯)=max_h,τ,νp(𝐯|ℋ_1;h,τ,ν)/p(𝐯|ℋ_0)ℋ_1ℋ_0≷η where η is the threshold. By assuming 𝐳∼𝒞𝒩(0,σ^2), we have 𝐗^T𝐳^*∼𝒞𝒩(0,σ^2/MN). Denote h=h^*e^j2πl_τ_ik_ν_i/MN, the GLRT becomes Λ(𝐯)=e^-1/σ^2min_h,τ,ν∥𝐯-h𝐗𝐖^H_(τ,ν)𝐱^*∥^2/e^-1/σ^2∥𝐯∥^2ℋ_1ℋ_0≷η. For fixed τ and ν, the solution of h that maximizes the numerator in (<ref>) is h≈𝐱^T𝐖𝐗^H𝐯/∥𝐱∥^2, where the approximation is due to the approximate identity property of matrix 𝐗^T𝐗^*/MN. The approximately equal sign becomes the equal sign when MN goes to infinity, and readers can refer to <cit.> for proof details. By setting the phase offset to 0 in the proof procedure of <cit.>, it comes to the approximate identity property of 𝐗^T𝐗^*/MN in this paper. Given h̃, we get the GLRT test as follows max_τ,ν|𝐱^T𝐖(τ,ν)𝐗^H𝐯|^2/σ^2∥𝐱∥^2ℋ_1ℋ_0≷η. where η=log(η). Here we use the cell-averaging constant false alarm rate (CA-CFAR) method to calculate the adaptive threshold, and the details for computing η(ν,τ) are given in Appendix <ref>. 
Targets are declared at those locations where the magnitude of the peaks exceeds the threshold, and the corresponding delay and Doppler indices are taken as the parameters associated with the targets. The algorithm for target detection is summarized in Algorithm <ref>.

§.§ Fractional Parameter Estimation

By taking the peaks on the delay-Doppler plane Γ, we can only obtain integer parameters; the fractional parts of the delay and Doppler indices remain unknown, which leads to inaccurate sensing results <cit.>. We introduce a simple method to estimate the fractional parts of the delay and Doppler indices in this subsection. By observing the 2D correlation matrix shown in Fig.<ref>, we can see that for targets that have fractional delay and Doppler indices, there exists power leakage from the corresponding delay-Doppler bins to their neighbors. The explanation of the power leakage caused by the fractional indices can be found in <cit.>. Inspired by this observation, we propose a difference-based method to estimate the fractional parts of the indices, which is explained as follows. Note that the delay and Doppler indices of the i-th target can be represented as k_ν_i=k_i+κ_ν_i and l_τ_i=l_i+ι_τ_i, respectively. Let us first consider calculating the fractional part of the Doppler index under noiseless conditions. In this paper, we assume that no two targets have the same delay or Doppler shift. After 2D correlation, the row indices of the maximum and the second maximum magnitudes in the l_i-th column of the delay-Doppler matrix, i.e., k_ν_1' and k_ν_2', are k_ν_1' = argmax_{k∈{⌈-N/2⌉,…,⌈N/2⌉-1}} |V[k,l_i]|, k_ν_2' = argmax_{k∈{⌈-N/2⌉,…,⌈N/2⌉-1}∖{k_ν_1'}} |V[k,l_i]|. Having k_ν_1' and k_ν_2' in hand, we have the following proposition. In noiseless scenarios, the actual Doppler index k_i+κ_ν_i of the i-th target must fall into the interval bounded by k_ν_1' and k_ν_2', where k_i=k_ν_1' and |k_ν_2'-k_ν_1'|=1. The ratio between the magnitudes of the correlation coefficients with Doppler indices k_ν_1' and k_ν_2' and the same delay index can be approximated by |V[k_ν_1',l_i]|/|V[k_ν_2',l_i]| ≈ |k_ν_2'-k_ν_1'-κ_ν_i|/|-κ_ν_i|, where the approximation error is of the order of 𝒪(1/MN). See Appendix <ref>. Therefore, the fractional Doppler can be derived as κ_ν_i = (k_ν_2'-k_ν_1') |V[k_ν_2',l_i]| / (|V[k_ν_1',l_i]| + |V[k_ν_2',l_i]|). Similarly, by applying the above derivation to the fractional delay taps, we get ι_τ_i = (l_τ_2'-l_τ_1') |V[k_i,l_τ_2']| / (|V[k_i,l_τ_1']| + |V[k_i,l_τ_2']|), where l_τ_1' and l_τ_2' are the column indices of the maximum and the second maximum magnitudes in the k_i-th row of the 2D correlation matrix 𝐕. The algorithm to estimate the fractional parts of the delay and Doppler indices is summarized in Algorithm <ref>, where k=[k_1,…,k_P] and l=[l_1,…,l_P] denote the integer parts of the Doppler and delay indices estimated by the target detection algorithm described in Section <ref>.

§.§ Cramer-Rao Lower Bound (CRLB)

In this subsection, we derive the CRLB of the delay and Doppler estimation to evaluate the proposed algorithm.
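Before turning to the bound, the difference-based estimator just described can be illustrated in a few lines. The sketch below assumes the integer delay bin l_i has already been located by the detector, ignores wrap-around at the edges of the Doppler grid, and uses zero-based array indexing instead of the symmetric index set above; the names are placeholders, not the authors' code. The fractional delay is obtained analogously along the k_i-th row.

```python
import numpy as np

def fractional_doppler(V, l_i):
    """Estimate the integer and fractional Doppler index from column l_i of V."""
    col = np.abs(V[:, l_i])
    k1 = int(np.argmax(col))         # integer Doppler bin (largest magnitude)
    k2 = int(np.argsort(col)[-2])    # second-largest bin; adjacent to k1 in the noiseless case
    kappa = (k2 - k1) * col[k2] / (col[k1] + col[k2])
    return k1, kappa                 # estimated Doppler index ~ k1 + kappa
```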
By substituting (<ref>) and (<ref>) into (<ref>), we can rewrite (<ref>) as Y_DD[k,l]=U_DD[k,l]+Z_DD[k,l], where 𝐔_DD is the matrix of transmitted DD domain symbols under a noiseless scenario, whose (k,l)-th entry is U_DD[k,l]=∑_i=1^P∑_n=0^N-1∑_m=0^M-1 h_i e^{-j2π(k_i+κ_i)(l_i+ι_i)/MN} X_DD[n,m] (1/NM)∑_k'=0^N-1 e^{-j2π(k-n-k_i-κ_i)k'/N}∑_l'=0^M-1 e^{j2π(l-m-l_i-ι_i)l'/M}. We define θ=[κ_1,...,κ_P,ι_1,...,ι_P]^T, i.e., the fractional parts of the Doppler and delay indices. Then the CRLB of the j-th element of θ is the j-th diagonal element of the inverse of the Fisher information matrix <cit.>, θ^CRLB_j=[I^-1(θ)]_jj, where the (i,j)-th element of the Fisher information matrix 𝐈(θ)∈ℂ^2P×2P is [𝐈(θ)]_ij=-𝔼[∂^2 log f(𝐲|θ)/∂θ_i∂θ_j], with i=1,...,2P and j=1,...,2P, and the expectation is taken with respect to f(𝐲|θ), which is given by f(𝐲|θ)=∏_i'=1^MN (1/√(2πσ^2)) e^{-|y_i'-u_i'|^2/2σ^2}, where y_i' and u_i' are the i'-th entries of the vectors 𝐲 and 𝐮, in which i'=kM+l, i.e., the (k,l)-th entries of the two matrices 𝐘_DD and 𝐔_DD. Then the logarithm of the likelihood function log f(𝐲|θ) is log f(𝐲|θ)=-(MN/2)log(2πσ^2)-(1/2σ^2)∑_i'=1^MN|y_i'-u_i'|^2. Substituting (<ref>) into (<ref>), the (i,j)-th entry of the Fisher matrix becomes [𝐈(θ)]_ij=-(1/2σ^2)∑_i'=1^MN[∂u_i'/∂θ_i ∂u_i'^*/∂θ_j+∂u_i'^*/∂θ_i ∂u_i'/∂θ_j]. The first derivative of u_i' with respect to the p-th element of θ, ∂u_i'/∂θ_p, is expressed in (<ref>), in which p' is the result of p modulo P. The CRLB is normalized in order to evaluate the performance of the proposed algorithm in terms of the normalized mean square error (NMSE). The CRLBs for the fractional delay and Doppler indices are defined by κ_bound=∑_j=1^P θ_j^CRLB/∥κ_θ∥_2^2, ι_bound=∑_j=P+1^2P θ_j^CRLB/∥ι_θ∥_2^2, where κ_θ=[κ_1,...,κ_P]^T and ι_θ=[ι_1,...,ι_P]^T.

§ SIMULATION RESULTS

In this section, we investigate the estimation performance under various conditions through Monte Carlo simulations. The simulation results are averaged over 10^4 OTFS frames, and each OTFS frame has N=64 time slots and M=128 subcarriers in the TF domain. The information bits carried by the OTFS frame are generated randomly and mapped to QPSK symbols. The carrier frequency is set as 24 GHz with 39.063 kHz subcarrier spacing. The maximum speed of the target is set to be 440 km/h with a maximum range of 3830 m. We consider two sensing scenarios in the simulation, in which there are P=4 and P=6 targets, respectively. We compare the performance of our proposed estimation algorithm with the conventional periodogram-based estimation algorithm via OFDM sensing, in which the number of subcarriers and time slots for OFDM sensing is set the same as for OTFS sensing to guarantee the same calculation complexity. The detection rate and the false alarm rate are counted and shown in Figure <ref> and Figure <ref>, respectively. The root-mean-square errors (RMSEs) of the estimated range and velocity are calculated and drawn versus the signal-to-noise ratio (SNR) in Figure <ref> and Figure <ref>, respectively. The comparisons between the NMSE and the theoretical CRLB of the range and velocity estimates are shown in Figure <ref> and Figure <ref>. Since the proposed parameter estimation algorithm is proved under the assumption that no two targets have the same Doppler or delay, we want to show that the performance of the proposed algorithm is guaranteed even when there exist targets that have the same delay or Doppler indices.
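As a quick, purely illustrative consistency check (not part of the paper), the delay-Doppler grid implied by the quoted simulation parameters can be computed directly; it reproduces the stated maximum range (∼3830 m) and maximum speed (∼440 km/h), assuming the usual c/2 and c/(2f_c) conversions between delay/Doppler and range/velocity.

```python
c, fc = 3e8, 24e9                # speed of light [m/s], carrier frequency [Hz]
M, N, df = 128, 64, 39.063e3     # subcarriers, time slots, subcarrier spacing [Hz]
T = 1 / df                       # slot duration, ~25.6 us

range_resolution    = c / (2 * M * df)      # ~30 m
max_range           = c / (2 * df)          # ~3839 m  (quoted ~3830 m)
velocity_resolution = c / (2 * fc * N * T)  # ~3.8 m/s
max_velocity        = c / (4 * fc * T)      # ~122 m/s, i.e. ~440 km/h

print(range_resolution, max_range, velocity_resolution, max_velocity * 3.6)
```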
From the simulation results shown in Figure <ref>, we can conclude that, although there is a little performance degradation, the estimation performance under our assumptions is similar to the scenario where the target parameters are generated randomly (i.e., there exist targets that have the same delay or Doppler indices). Meanwhile, we can see that the proposed fractional parameter algorithm always works better than estimating only the integer parts of the delay and Doppler shifts of targets, which shows the effectiveness of the proposed algorithm. It can be observed in Figure <ref> that the detection rate grows as the SNR increases. The detector designed in this paper is a constant false alarm rate detector, and it can be seen from Figure <ref> that the false alarm rate remains nearly constant under different SNRs. The receiver operating characteristic (ROC) curve is shown in Figure <ref> to demonstrate the effectiveness of the implemented CA-CFAR detector. The simulation results show that the performance of the implemented CA-CFAR detector is similar for both OTFS and OFDM sensing. As shown in Figure <ref> and Figure <ref>, the RMSE increases with the SNR and becomes a stable value when the SNR is high, which is counterintuitive. Nonetheless, this phenomenon is reasonable due to the following reasons. Firstly, radar systems are typically considered invalid if their detection performance is inadequate under low SNR conditions. Thus, the RMSE is not calculated in such cases. However, determining the level of detection probability at which to commence calculating the RMSE is a challenging task. Therefore, we adopt an approach to calculate the RMSE regardless of the detection probability, as long as the target is detected. Although this approach may produce counterintuitive results, it is a reasonable way to handle this problem. Secondly, the rectangular window used in the effective channel introduces entries with small magnitudes when the fractional parts of the Doppler indices are closer to 0.5. Such targets cannot be detected due to the low effective channel magnitudes and high noise levels. Only the targets whose delay and Doppler indices are close to an integer can be detected. Under these conditions, estimating only the integer parts of the parameters will yield a lower RMSE. As the SNR increases, the interference from noise decreases, allowing more targets whose fractional parts of parameters are close to 0.5 to be detected. However, the RMSE remains the same due to limited resolution if only the integer part of the Doppler shift is estimated. Therefore, the counterintuitive increase of the RMSE with SNR is a result of these factors. More detailed information about the characteristics of the rectangular window can be found in Figure 5 of <cit.>. In addition, after implementing the proposed fractional parameter estimation algorithm, the RMSE becomes lower under the same SNR compared to the simple estimation method that takes the integer parts only. The RMSE decreases since the interference caused by noise reduces as the SNR increases. We observe that the NMSE of the estimated parameters only decreases slightly under high SNR conditions. The sub-optimal performance in these high SNR regions is attributed to the interference caused by the overlapped responses of the data symbols after passing through the channel.
This interference can be categorized as inter-symbol interference in the DD domain, as each element of the transmitting matrix 𝐗DD is assigned a modulated data symbol, resulting in none-zero values. For brevity, we refer to this element placement scheme in 𝐗DD as the 'Full Pilot' strategy (RMSE is calculated under this scenario). In contrast, if only one DD-domain pilot is transmitted in an OTFS frame (i.e., all other elements are zero except the pilot), the NMSE approaches the Cramér-Rao lower bound (CRB). We denote this element placement scheme in 𝐗DD as the 'One Pilot' strategy for brevity. Figures 8 and 9 demonstrate that the NMSE under the 'One Pilot' strategy closely approximates the CRB in high SNR regions. This is because when only one pilot is present in 𝐗DD, the inter-symbol interference in the DD domain is eliminated, allowing the NMSE to approach the CRB under high SNR conditions. As part of our future work, we plan to investigate methods for eliminating the inter-symbol interference in the DD domain by leveraging the relationship between the signal responses in the DD domain and the target parameters. We will explore advanced algorithms, including Machine Learning methods, to address DD-domain inter-symbol interference elimination. § CONCLUSIONS In this paper, we proposed a two-step algorithm for the fractional delay and Doppler shifts estimation. Since the delay and Doppler shifts of the wireless channel can result in overlapped responses of information symbols, it is infeasible to localize the target from the received DD domain matrix directly. To obtain the parameters of targets, a 2D correlation scheme was first performed between the DD domain received and the transmitted data symbol matrix, and the integer part of delay and Doppler shifts can be obtained. After that, we implemented a difference-based method to estimate the fractional parts of the parameters. Since the number of potential targets is usually unknown in practice, we proposed a GLRT-based target detection method to get the number of targets. The simulation results show that the proposed method can detect the number of targets in the sensing scenario with a high detection probability and obtain the delay and Doppler shifts associated with multiple targets accurately. § ADAPTIVE THRESHOLD FOR TARGET DETECTION The GLRT tests under hypotheses ℋ_0 and ℋ_1 are given by S_ℋ_0(ν,τ)=|𝐱^T𝐖(τ,ν)𝐗^H𝐗𝐳^*|^2/σ^2∥𝐱∥^2 S_ℋ_1(ν,τ)=|𝐱^T𝐖(τ,ν)𝐗^H𝐯|^2/σ^2∥𝐱∥^2. Now we derive the adaptive threshold η(ν,τ) for the comparison towards the GLRT statistic S(ν,τ) and decision for ℋ_0 or ℋ_1. Note that S_ℋ_0(ν,τ) is a summation of a sequence of squared Gaussian complex random variables, which can be simplified as |𝐗𝐳^*|^2/σ^2∥𝐱∥^2. Hence, it is exponentially distributed. The derivation of the adaptive threshold is represented below. We first derive the distribution function of S_ℋ_0. From (<ref>) the value of each element in the correlated matrix can be viewed as the superposition of noise and clutters. We assume the interference of the correlated matrix entry contains two parts, i.e., the noise and the clutter, which implies the distribution of the interference follows the Rayleigh distribution. Meanwhile, since σ^2∥𝐱∥^2 is constant, it is equivalent to only compare the numerator of the GLRT statistic, which is just the value of the entry in 𝐯 under ℋ_0, with the threshold. Denote the simplified GLRT statistic as S_ℋ_0=|𝐗𝐳^*|^2. 
Thus, the statistic S_ℋ_0, which is a squared Rayleigh random variable, follows the exponential distribution, f_S_ℋ_0(s)=1/σ^2e^-s/σ^2 where σ^2 is a nuisance parameter that cancels in the closed-form expression of the adaptive threshold. Following the cell-averaging constant false alarm rate (CA-CFAR) procedure given in <cit.>, we apply the 2D CA-CFAR. For a fixed detection threshold η=α𝒮_ℋ_0, where α is a scaling factor that needs to be calculated and 𝒮_ℋ_0 is the mean value of the training cells, the false alarm rate is P_fa=𝔼_𝒮_ℋ_0{P[S_ℋ_0>η|ℋ_0]} =𝔼_𝒮_ℋ_0{∫_η^∞f_S_ℋ_0(s)ds}=𝔼_𝒮_ℋ_0{e^-α𝒮_ℋ_0/σ^2} =M_𝒮_ℋ_0(u)|_u=-α/σ^2 where M_𝒮_ℋ_0(u) is the moment generating function (MGF) of 𝒮_ℋ_0. Denoting the set of training cells as Ξ(ν,τ), the expression of 𝒮_ℋ_0 for a specific entry (ν,τ) on the delay-Doppler plane is 𝒮_ℋ_0=1/N_s(∑_(ν^',τ^')∈Ξ(ν,τ)v[ν^',τ^']^2) where N_s is the number of training cells. It can be verified that 𝒮_ℋ_0∼Γ(N_s,σ^2/N_s), where Γ(α,β) denotes the Gamma distribution. Substituting the MGF of the Gamma distribution into (<ref>), we have P_fa=(1/1+α/N_s)^N_s. Thus, α=N_s(P_fa^-1/N_s-1), and the threshold η(ν,τ) is η(ν,τ)=N_s(P_fa^-1/N_s-1)𝒮_ℋ_0(ν,τ). Therefore, we declare a new target if the following condition holds V[k,l]>η(ν,τ), where l=τ MΔ f and k=ν NT are the delay and Doppler indices, respectively. § PROOF OF PROPOSITION 1 The lower and upper limits of the summation operator in what follows are omitted for brevity. In particular, the indices n, n^', n^'', k, k^', and k^'' run from 0 to N-1 while m, m^', m^'', l, l^', and l^'' run from 0 to M-1. Taking the expectation of V[k,l] given by (<ref>) yields 𝔼[V[k,l]]=∑_n,m𝔼{X_DD[n-k,m-l]Z_DD^*[n,m]} + ∑_n,m,k^',l^'𝔼{h^*_ω[n-k^', m-l^']X_DD^*[k^',l^'] × X_DD[n-k,m-l]}. The expectation 𝔼[X_DD^*[k^',l^']X_DD[n-k,m-l]] equals 0 unless k^'=n-k and l^'=m-l due to the independent QPSK entries with unit power in 𝐗_𝐃𝐃. Meanwhile, since the information symbols are independent of the noise samples, the term 𝔼[X_DD[n-k,m-l]Z_DD^*[n,m]]=0. Thus, (<ref>) can be simplified to 𝔼[V[k,l]]=MN· h^*_ω[k,l]. The variance of the entry in matrix 𝐕 is var[V[k,l]]=𝔼[V[k,l]^2]-𝔼[V[k,l]]^2 where 𝔼[V[k,l]^2] is given in (<ref>). As before, only the terms with k^'=n^'-k, l^'=m^'-l, k^''=n^''-k, and l^''=m^''-l are non-zero. Thus, the second and the third terms on the right-hand side of (<ref>) can be discarded while the first and last terms are (MN· h_ω^*[k,l])^2 and MN·σ^2, respectively. Consequently, we have 𝔼[V[k,l]^2]=(MN· h_ω^*[k,l])^2+MN·σ^2, which gives the variance of V[k,l], i.e., var[V[k,l]]=MN·σ^2. Now, we consider the variance of V[k,l]/MN. Taking the limit of var[1/MNV[k,l]] gives lim_M,N→∞var[V[k,l]/MN]=lim_M,N→∞(σ^2/MN)=0, indicating that when M and N are sufficiently large, the variance of 1/MNV[k,l] vanishes. This motivates us to use 1/MNV[k,l] to approximate its expectation, i.e. h_ω^*[k,l], with an approximation error of 𝒪(1/MN). Therefore, the ratio between the magnitudes of the correlation coefficients with the same delay index and Doppler indices k_ν_1^' and k_ν_2^' can be expressed as |V[k_ν_1^',l_i]|/|V[k_ν_2^',l_i]|=|h_ω[k_ν_1^',l_i]|/|h_ω[k_ν_2^',l_i]| =|sin(-κ_ν_iπ)/sin(-κ_ν_iπ/N)|·|sin((k_ν_2^'-k_ν_1^'-κ_ν_i)π)/sin(k_ν_2^'-k_ν_1^'-κ_ν_i/Nπ)| ^-1 =|sin(k_ν_2^'-k_ν_1^'-κ_ν_i/Nπ)/sin(-κ_ν_iπ/N)|≈|k_ν_2^'-k_ν_1^'-κ_ν_i|/|-κ_ν_i|. Note that the small-angle approximation sin x ≈ x also holds for a sufficiently large N. Finally, we arrive at (<ref>).
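To make the adaptive threshold above concrete, the following is a minimal 2D CA-CFAR sketch applying η(ν,τ)=N_s(P_fa^-1/N_s-1)𝒮_ℋ_0(ν,τ) on a toy delay-Doppler power map; the guard/training window sizes and the false alarm rate are illustrative choices, not the values used in the simulations above.

# Minimal 2D CA-CFAR sketch on a delay-Doppler map, following the adaptive
# threshold eta = N_s * (P_fa^(-1/N_s) - 1) * mean(training cells).
# Window sizes and P_fa below are illustrative choices only.
import numpy as np

def ca_cfar_2d(power_map, p_fa=1e-4, guard=2, train=4):
    """Return a boolean detection map for a 2D power map (e.g. |V[k, l]|^2)."""
    n_dopp, n_delay = power_map.shape
    half = guard + train
    detections = np.zeros_like(power_map, dtype=bool)
    for k in range(half, n_dopp - half):
        for l in range(half, n_delay - half):
            window = power_map[k - half:k + half + 1, l - half:l + half + 1]
            guard_block = power_map[k - guard:k + guard + 1,
                                    l - guard:l + guard + 1]
            n_s = window.size - guard_block.size          # number of training cells
            noise_mean = (window.sum() - guard_block.sum()) / n_s
            alpha = n_s * (p_fa ** (-1.0 / n_s) - 1.0)    # scaling factor
            detections[k, l] = power_map[k, l] > alpha * noise_mean
    return detections

# Toy usage: an exponential noise floor with two injected "targets".
rng = np.random.default_rng(1)
V = rng.exponential(scale=1.0, size=(64, 64))
V[20, 30] += 60.0
V[40, 10] += 45.0
print(np.argwhere(ca_cfar_2d(V)))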
http://arxiv.org/abs/2306.09507v2
20230615210400
Winsorized Robust Credibility Models
[ "Qian Zhao", "Chudamani Poudyal" ]
stat.AP
[ "stat.AP", "math.ST", "stat.CO", "stat.ME", "stat.TH" ]
Winsorized Robust Credibility Models Qian Zhao[ Qian Zhao, Ph.D., ASA, is an Assistant Professor in the Department of Mathematics, Robert Morris University, Moon Township, PA 15108, USA.    e-mail:  [email protected]] Robert Morris University Chudamani Poudyal[ Chudamani Poudyal, Ph.D., is an Assistant Professor in the Department of Mathematical Sciences, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI 53201, USA.    E-mail:  [email protected]] University of Wisconsin-Milwaukee Abstract. The Bühlmann model, a branch of classical credibility theory, has been successfully applied to premium estimation for group insurance contracts and other insurance specifications. In this paper, we develop a robust Bühlmann credibility via the censored version of loss data, or the censored mean (a robust alternative to the traditional individual mean). This framework yields explicit formulas for the structural parameters in credibility estimation for scale-shape distribution families, location-scale distribution families, and their variants, which are commonly used to model insurance risks. The asymptotic properties of the proposed method are provided and corroborated through simulations, and their performance is compared to that of credibility based on the trimmed mean. By varying the censoring/trimming threshold level in several parametric models, we find that all structural parameters via censoring are less volatile compared to the corresponding quantities via trimming, and that using the censored mean as a robust risk measure reduces the influence of parametric loss assumptions on credibility estimation. In addition, the non-parametric estimation in credibility is discussed using the theory of L-estimators, and a numerical illustration from the Wisconsin Local Government Property Insurance Fund indicates that the proposed robust credibility can prevent the effect caused by model mis-specification and capture the risk behavior of the loss data from a broader viewpoint. Keywords. Robust Credibility; Premium Estimation; Truncated and Censored Data. § INTRODUCTION The Bühlmann model is a classical linear credibility model that has been successfully applied to premium estimation and other insurance specifications. The central idea is the following: for a particular policyholder, we have observed n exposure units of past claims X=(X_1, X_2, …, X_n). Assume these losses share the common risk parameter Θ (also a random variable) and that X_1|Θ, X_2|Θ, …, X_n|Θ are independent and identically distributed conditional on Θ=θ; then the ideal manual rate (we call it the hypothetical mean) for an insured with θ is μ(θ)=𝔼[X|θ]. And <cit.> determined the linear credibility premium by minimizing the expected squared loss: min_α, β𝔼[(μ(θ)-α- βX̄)^2] In the last several decades, various adaptations and extensions have been made in the credibility literature following Bühlmann's approach. One of the extensions is to investigate robust methods in the area of credibility. Using the conventional mean X̄ in (<ref>), excess claims could have significant distorting effects and lead to an unsatisfactory behavior of the standard linear credibility estimators. To overcome this, a number of authors have studied the combination of credibility and robust statistics in order to obtain better risk management. <cit.> proposed the linear credibility estimator with the claims replaced by a robust M-estimator. <cit.> investigated the application of robust credibility in the general Bühlmann-Straub model. 
<cit.> extended the study by focusing on the bias-treatment alternatives with the robust portfolio-unbiased procedure and pure robust credibility. The asymptotic optimality of pure robust credibility is also proved in <cit.>. Their main idea is to robustify the claims experience by using a robust estimator instead of the individual mean. The intersection of robust credibility and linear models was also investigated in many literature. <cit.> applied robust statistics to the regression credibility estimation by using the influence function approach of M-estimators. <cit.> and <cit.> introduced a truncation likelihood-based approach for robust-efficient fitting of mixed linear models. These methods yield more accurate premiums when extreme outcomes are present in the data and the procedures are flexible and effective risk-pricing tools in the fields of property and casualty insurance, health care, and real estate. Additionally, the robust estimators can also be applied to the credibility theory for generalized mixed linear models (See <cit.>, <cit.>, <cit.>, and <cit.>). Regarding the performance of robust estimators parametric models in robust credibility and ratemaking, <cit.> discussed the optimum trimming of data in the credibility model. <cit.> compared the performance of multiple robust estimators in regression credibility. (In particular, trimmed mean is one of the important M-estimators which is widely used in actuarial practice.) With a trimmed mean as the risk control, <cit.> investigated the general asymptotic properties of the structural parameters in credibility. This pure robust credibility in the Buhlmann-Straub model can be easily adapted to an insurance contract specifications and handle extreme losses. However, this approach discards all information contained in outlying data points even though they might be legitimate observations from the actual assumed loss model. To address this shortcoming, in this paper we propose a credibility theory based on censoring the original data. More specifically , the ground-up losses are censored both from above and below with different proportions, say, 100a%- and 100b% probabilities, respectively. Asymptotic proprieties of the robust credibility estimators are derived, and simulation studies are used to illustrate the advantage and disadvantage of this approach compared with the method based on trimming the data. μ(θ)=μ+α(W(X_1, X_2, , X_n)-[W]) The remainder of this paper is organized as follows. In Section 2, we present the idea of censored mean as a risk measure. Section 3 develops the credibility theory and the corresponding asymptotic properties based on censoring the data. Parametric model examples of credibility premiums for two types of family distributions – scale-shape family and location-scale family are provided in Section 4. The effect of model choice and parameter estimation method on credibility premium is illustrated through sensitivity analysis in Section 5. In Section 6, we demonstrate how the proposed approach can be used in the non-parameter setting. We conclude the paper with a brief summary of the main findings in Section 7. § ROBUST MEAN AS THE RISK MEASURE In Section 2.1, we outline the idea of trimming and censoring data and show their robustness to outliers that often occur in the risk models. We defined the random variable for trimmed mean and censored mean, respectively. 
The properties of the trimmed mean as the risk measure have been examined in <cit.> and the counterparts for censored mean are discussed in Section 2.2. The asymptotic properties are detailed in <cit.> and <cit.> and further described in Section 3. §.§ Idea of Trimming and Censoring In actuarial science, loss distributions often can be heavily influenced by extreme values (outliers). To overcome this issue, statisticians consider the strategy of trimming and censoring, which are typically transformations of data by limiting extreme values in the data set, to reduce the effect of possibly spurious outliers (see <cit.>). The trimming is to exclude all outliers, whereas the censoring is to set all outliers to a specified percentile of the data. The estimators based on the truncation and censored data are usually more robust to outliers than their standard counterparts and commonly used in data analysis for life and non-life insurance products. Now, we see how the trimming or censoring strategy work in the credibility model. Consider a loss random variables X with a cumulative distribution function F(x|Θ). The parameter Θ=θ is a (prior) random variable with density π(θ). We denote the corresponding quantile function for X|θ as F^-1(w;θ) (to make things easy, in this paper, we use F^-1(w) instead). This leads to a truncated random variable X_T|θ, with 100p% left-truncated and 100q% right-truncated data, denoting as X_T|θ = F^-1(w)=x; if p≤ w≤ 1-q , 0; otherwise, and another censored random variable X_W|θ, with 100p% left-censored and 100q% right-censored data, denoting as X_W|θ = F^-1(p)=x_p; if w<p, F^-1(w)=x; if p≤ w≤ 1-q, F^-1(1-q)=x_1-q; if w≥ 1-q, where 0≤ p<1-q≤ 1. Then the truncated and censored k moments are derived as [X_T^k|θ] = 1/1-p-q∫_p^1-q (F^-1(w))^k du, [X_W^k|θ] = p^k[F^-1(p)]^k + ∫_p^1-q (F^-1(w))^k du + q^k[F^-1(1-q)]^k, where the proportions p and q can be controlled by the researcher. If we set X_R|θ as a robust random variable for (<ref>) and (<ref>) , then μ_R(p,q,θ)=[X_R|θ] reflects trimmed mean or censored mean with a specific robust type R∈{T,W}. The choice of p and q has a significant effect on this risk control and further credibility premium in Section 4 and thereafter. §.§ Censored Mean as a Risk Measure <cit.> have examined the properties of trimmed mean and derived the coherent properties corresponding to this risk measure. Now we use a similar procedure to discuss the counterparts for censored mean – the alternative but more robust risk measure. In loss models, the censored mean can be used to quantify the risk exposures. Next, we check if this risk control measure possesses the properties/axioms of widely used coherent risk measures. A coherent risk measure satisfies the four desirable properties of subadditivity, monotonicity, positive homogeneity, and translation invariance, see, e.g., <cit.>, p. 42. Let ρ_p,q(X) denotes the censored mean as defined in Eq. (<ref>) and for k=1. For 0 < p < 1-q < 1, ρ_p,q(X) satisfies all the coherent risk measure axioms except the subadditivity. For rotational simplicity, we suppress the underlying parameter θ. Let X and Y be two continuous loss random variables. Now, we investigate each of the axioms of a coherent risk measure one at a time. * Subadditivity: This axiom does not hold. For that, following <cit.>, we give a counter example. Let X and Y be both standard normal but independent random variables and X^c and Y^c be the corresponding comonotonic parts, respectively. Define Z = X+Y and Z^c = X^c+Y^c. 
Then, clearly Z = X+Y ∼ N(0, (√(2))^2) and Z^c = X^c+Y^c∼ N(0, 2^2). Since the distribution functions of Z and Z^c only cross once at (0,0.5) then for 0 < u < 0.5, we have F_X+Y^-1(u) = F_Z^-1(u) > F_Z^c^-1(u) = F_X^-1(u) + F_Y^-1(u). Thus, for any winsorizing proportions a and b satisfying 0 < a < q̅ < 0.5, we get ρ_p,q(X+Y) = ρ_p,q(Z) = p F_Z^-1(p) + ∫_p^q̅ F_Z^-1(u) du + q F_Z^-1(q̅) > p F_Z^c^-1(p) + ∫_p^q̅ F_Z^c^-1(u) du + q F_Z^c^-1(q̅) = p ( F_X^-1(p) + F_Y^-1(p) ) + ∫_p^q̅( F_X^-1(u) + F_Y^-1(u) ) du + q ( F_X^-1(q̅) + F_Y^-1(q̅) ) = ρ_p,q(X) + ρ_p,q(Y) ρ_p,q(X+Y) > ρ_p,q(X) + ρ_p,q(Y). Counter Example II: Consider a pair of winsorizing proportions p and q satisfying 0 < p = 0.5-h_1 < 0.50 < q̅ = 0.5+h_2 < 1 such that 0 < h_2 < h_1 < 0.5. Then, we have ρ_p,q(X+Y) - ρ_p,q(X) - ρ_p,q(Y) = p ( F_Z^-1(p) - F_X^-1(p) - F_Y^-1(p)_ > 0 ) + ∫_p^q̅( F_Z^-1(u) - F_X^-1(u) - F_Y^-1(u) ) du_> 0 + q ( F_Z^-1(q̅) - F_X^-1(q̅) - F_Y^-1(q̅)_< 0 ) = * Monotonicity: If Pr(X≤ Y)=1 always holds, then so does F_X(t)≥ F_Y(t) for any t. Hence, F_X^-1(u)≤ F_Y^-1(u) for any 0<u<1. Then p F_X^-1(p) + ∫_p^1-q F_X^-1(u) du + q F_X^-1(1-q)≤ p F_Y^-1(p) + ∫_p^1-q F_Y^-1(u) du + q F_Y^-1(1-q). Thus, ρ_p,q(X)≤ρ_p,q(Y). * Positive Homogeneity (scale equivalent): For any positive constant c, by equations (2.1) and (2.2), we have ρ_p,q(c X) = p cx_p + ∫_p^1-qcx du +b cx_1-q=c(p x_p + ∫_p^1-qx du +b x_1-q)=cρ_p,q(X). * Translation Invariance: For any constant c, we have ρ_p,q(X+c) = p (x_p+c) + ∫_p^1-q(x+c) du + q (x_1-q+c) = ( p x_p + ∫_p^1-qx du + q x_1-q) + c ( p + ∫_p^1-q1 du + q ) = ρ_p,q(X) + c. Thus, the censored mean satisfies all the coherent axioms except for the subadditivity. § CREDIBILITY BASED ON ROBUST DATA Let X_1|θ, , X_n|θ be independent and identically distributed (i.i.d.) random variables with a common parametric distribution F(x|θ). Denote the order statistics of X_1, …, X_n by X_(1)≤ s ≤ X_(n). In order to make our notation consistent with that of the Bühlmann credibility theory, we define the following structural parameters of robust random variables with specific proportions p and q, and R in {T,W}. * Collective premium μ_p,q=[[X_R|θ]]=[μ_R(p,q,θ)] with μ_R(p,q,θ) is defined in (<ref>) and (<ref>) * Expectation of process variance v_p,q=[v_R(p,q,θ)] with v_R(p,q,θ) is defined in (<ref>) or (<ref>) * Variance of hypothetical means a_p,q=(μ_R(p,q,θ)) Based upon (<ref>) and (<ref>), the empirical version of the robust mean (first moment) for a vector X = (X_1,,X_n) is R_p,q(X) = T_p,q(X) = 1/n-[n p]-[n q]∑_i=[n p]+1^n-[n q] X_(i), W_p,q(X) = 1/n[ [n p] X_( [n p]+1) + ∑_i=[n p]+1^n-[n q] X_(i) + [n q] X_(n - [n q])) ], where [] denotes the greatest integer part. By the structure of (<ref>), a classic approach to determine the credibility premium with truncated data or censored data is minimizing the expected square loss α, βmin {[μ_R(p,q,θ)-(α +βR_p,q(X)]^2} To find the minimum, we can take partial derivatives with respect to α and β, respectively, and solve the system of equations. The resulting credibility premium with past loss experience is P_R= α +βR_p,q(X) = (R_p,q(X),μ_R(p,q,θ))/(R_p,q(X))R_p,q(X)+(1-(R_p,q(X),μ_R(p,q,θ))/(R_p,q(X))[R_p,q(X)]/μ_p,q) μ_p,q Next, we investigate the asymptotic properties of this proposed premium with truncated and censored data, respectively. §.§ Asymptotic Properties for Truncated Data The sample trimmed mean T_p,q(X) in equation (3.1) distributively converges to the population trimmed mean μ_R(p,q,θ) in equation (<ref>). 
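As a concrete reading of the sample estimators in (3.1), the following short sketch computes both the trimmed and the winsorized means for given proportions p and q; the simulated losses and the proportions are purely illustrative.

# Sample trimmed and winsorized means as in (3.1); the data and the
# proportions p, q are illustrative only.
import numpy as np

def trimmed_mean(x, p, q):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    lo, hi = int(n * p), n - int(n * q)
    return x[lo:hi].mean()

def winsorized_mean(x, p, q):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    lo, hi = int(n * p), int(n * q)
    w = x.copy()
    w[:lo] = x[lo]                    # left-censor the lowest 100p% at x_([np]+1)
    if hi > 0:
        w[n - hi:] = x[n - hi - 1]    # right-censor the highest 100q% at x_(n-[nq])
    return w.mean()

rng = np.random.default_rng(0)
losses = rng.pareto(3.0, size=1000) * 10.0    # heavy-tailed toy losses
print(losses.mean(),
      trimmed_mean(losses, 0.05, 0.05),
      winsorized_mean(losses, 0.05, 0.05))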
Besides, <cit.> have shown that for each fixed θ, the process variance of trimmed mean is v_T(p,q,θ) = 1/(1-p-q)^2∫_p^1-q∫_p^1-q(min{u,v}-uv) dF^-1(u) dF^-1(v) Thus the derived robust estimator in the loss model has the following asymptotic normality √(n)[T_p,q(X)-μ_T(p,q,θ) ] ∼ 𝒜𝒩(0, v_T(p,q,θ) ). §.§ Asymptotic Properties for Censored Data By <cit.>, the sample censored mean in equation (3.1) can be written as W_p,q(X)=1/n∑_i=1^nJ (i/n+1)X_(i)+c_n^(1)X_([np^(1)])+c_n^(2)X_([np^(2)]) where J(x) = 1{p^(1)≤ x≤ p^(2)}= 1, if np^(1)≤ x ≤ np^(2); 0, otherwise; with p^(1)= p and p^(2)=1-q and where p and q represent left and right censoring proportions, respectively. Also, lim_n→∞c_n^(1) = c^(1)=p^(1)= p and lim_n→∞c_n^(2) = c^(2)=p^(2)= 1-q. <cit.>, have demonstrated that for each fixed θ, W_p,q(X)d⟶[X_W|θ]=μ_W(p,q,θ), and (W_p,q(X)) d⟶ v_W(p,q,θ)=∫_0^1α^2(u)du where α(u)=1/1-u∫_u^1J(w)H^'(w)(1-w)dw+∑_m=1^21{p^(m)≥ u}c^(m)(1-p^(m))H^'(p^(m)) and H(w)=F^-1(w). Thus the derived robust estimator in the loss model has asymptotic normality √(n)[W_p,q(X)-μ_W(p,q,θ) ] ∼ 𝒜𝒩(0, v_W(p,q,θ) ). <cit.> have proved that equation (<ref>) is equivalent to v_W(p,q,θ)= p H^2(p)+qH^2(1-q)+ ∫_p^1-qH^2(w) dw-[μ_W(p,q,θ)]^2 +2{μ_W(p,q,θ) [p^2H^'(p)-q^2H^'(1-q)]+q^2H(1-q)H^'(1-q)-p^2H(p)H^'(p) } + p^3(1-p)[H^'(p)]^2+q^3(1-q)[H^'(1-q)]^2+2p^2q^2H^'(p)H^'(1-q) = (X_W|θ)+2[μ_W(p,q,θ) (A-B)+B H(1-q)-A H(p)] -(A-B)^2+A^2/p+B^2/q, where A = p^2H^'(p) and B=q^2H^'(1-q). §.§ Robust Credibility Premium To build the linear credibility premium in truncated and censored data, We also prove the following results. Assume that E(|X_i|)<∞ and E(|X_i|^2)<∞. Then as n→∞, we have * [R_p,q(X)]→μ_p,q, * n[(R_p,q(X))] → v_p,q, * ( [(R_p,q(X)] ) → a_p,q, * (R_p,q(X),μ_R(p,q,θ) )→ a_p,q, * (R_p,q(X))→ a_p,q+v_p,q/n. The properties of the trimmed mean have been investigated through <cit.>. Here, we just prove for the censored moment. * By (<ref>) [W_p,q(X)] →[μ_W(p,q,θ)]=μ_p,q as n→∞. * By (<ref>), n[(W_p,q(X))] = [nVar(W_p,q(X)]→[v_W(p,q,θ)]=v_p,q. * The covariance can be written as (W_p,q(X),μ_W(p,q,θ) ) = [W_p,q(X) μ_W(p,q,θ) ]-[W_p,q(X)][μ_W(p,q,θ) ]. Since |W_p,q(X) μ_W(p,q,θ)|≤|W_p,q(X)| |μ_W(p,q,θ)|, and both |W_p,q(X)| and |μ_W(p,q,θ)| are square integrable, then by dominated convergence theorem <cit.>, (W_p,q(X),μ_p,q(θ) ) →[μ_W^2(p,q,θ) ]-([μ_W(p,q,θ)])^2 = a_p,q. * The law of total variance gives us (W_p,q(X)) = ([W_p,q(X)|θ])+[(W_p,q(X)|θ)] →(μ_W(p,q,θ))+[(W_p,q(X))]=a_p,q+v_p,qn. Based on these asymptotic properties, the credibility premium from equation (<ref>) is given as P_W_p,q → a_p,q/a_p,q+v_p,qnW_p,q(X)+(1-a_p,q/a_p,q+v_p,qnμ_p,q/μ_p,q) μ_p,q = Z_W_p,qW_p,q+(1-Z_W_p,q) μ_p,q, where Z_W_p,q=nn+v_p,q/a_p,q The counterparts proof of trimmed moment follows exactly the same procedures. Next, we will see how does this robust credibility premium work on the parametric models and non-parametric cases. § PARAMETRIC EXAMPLES In this section, we use two pairs of parametric combinations, to demonstrate the performance of data methodologies, truncation and censoring, towards the credibility premium estimations. In each comparison, the risk parameter θ follows the common parametric model. This setting can help us understand how does the loss likelihood model solely (the risk model is fixed) affect the ultimate premium estimation. In particular, we investigate the model stability and estimation consistency with various underlying model assumptions and robust methodologies. 
To display the general application of parametric models, we choose one comparison from the scale-shape distribution family, and the other comes from the location-scale family. §.§ Scale-shape Distribution Family Example The first pair of comparison is between the Exponential - Gamma Model and Pareto - Gamma Model. Both Exponential and Pareto are heavy-skewed distributions that are typically used to fit the loss models. Pareto distribution resembles the shape of Exponential, but has a heavier tail for extreme claim quantities. Now, we discuss their asymptotic properties under the proposed robust credibility structures and then compare their model performance under robust data truncation and data censoring, respectively. Let X_1|θ, , X_n|θ be independent and identically distributed (i.i.d.) random variables, following an Exponential distribution with mean of θ . And the parameter θ is Gamma(α, β) distributed with mean of α/β and variance of α/β^2. Now the credibility premium for the Exponential - Gamma model is derived as follows. For loss random variable X|θ∼ Exp(θ), we have F(x|θ)=1-e^-x/θ=w and the quantile function F^-1(w)=-θlog(1-w). We will look at the structure formulas for trimmed mean and censored mean, separately. Trimmed Version: By (<ref>), the hypothetical mean of the Exponential model is μ_T(p,q,θ) = -θ[1/1-p-q∫_p^1-qlog(1-w)dw]=θ (1-p)[1-log(1-p)]-q(1-log q)1-p-q :=θ m_1T(p,q). The corresponding process variance is derived from (<ref>), that is v_T(p,q,θ) =θ^21/(1-p-q)^2∫_p^1-q∫_p^1-q(min{u,v}-uv) 1/(1-u) 1/(1-v) du dv := θ^2 m_3T(p,q). Censored Version: By (<ref>), the hypothetical mean of the Exponential distribution is μ_W(p,q,θ) = -θ[plog(1-p)+∫_p^1-qlog(1-w)dw+qlog q]=θ[ 1-p-q-log(1-p)], :=θ m_1W(p,q). Regarding the process variance, we start with the formula for censored variance, that is (X_w|θ) = [X_w^2|θ]-^2[X_w|θ] = θ^2{p[log(1-p)]^2+∫_p^1-q[log(1-w)]^2 dw+q(log q)^2}-[μ_W(p,q,θ)]^2 =θ^2{[log(1-p)]^2+2(1-p)[1-log(1-p)]-2q(1-log q)_m_2W(p,q)}-θ^2[m_1W(p,q)]^2 :=θ^2[m_2W(p,q)-[m_1W(p,q)]^2]. Therefore, the process variance by (<ref>) is v_W(p,q,θ) =θ^2{m_2W(p,q)-m_1W^2(p,q)+2[m_1W(p,q) (p^2/1-p-q)-q log q+p^2/1-p log(1-p)] +p^3/1-p+q(1-q)+2p^2q/1-p} :=θ^2 m_3W(p,q). See detailed proof in appendix A.1. Since above mentioned m_1, m_2, and m_3 only depend on the trimming or censoring proportions p and q, not the risk parameter θ, thus we treat them as constants in the following robust structural parameters. Note: m_1 represents the general notation of m_1T and m_1W, and the same for the other m notations. With α>0 and β>0, the collective premium, the variance of hypothetical mean, and the expectation of process variance for this Exponential - Gamma model are determined as μ_p,q =[μ_p,q(θ)]=[θ m_1(p,q)]=[θ]m_1(p,q)= α/β m_1(p,q), a_p,q =(μ_p,q(θ))=(θ m_1(p,q))=(θ)[m_1(p,q)]^2= α/β^2 [m_1(p,q)]^2, v_p,q =[v_p,q(θ)]=[θ^2 m_3(p,q)]=[θ^2]m_3(p,q)=α(α+1)β^2m_3(p,q). Finally, the credibility factor is Z_p,q=nn+v_p,q/a_p,q=nn+(α+1)m_3(p,q)/m_1^2(p,q) and by (<ref>) the credibility premium for this model is P_R_p,q=Z_p,qR_p,q+(1-Z_p,q) μ_p,q , R∈ (W, T). When (p,q)→ (0,0), the values m_1(p,q)→ 1, m_2(p,q)→ 2,m_3(p,q)→ 1, both the trimmed mean and censored mean converge to the sample mean X. Then Z_p,q→nn+(α+1) and the credibility premium becomes lim_(p,q)→ (0,0)P_R_p,q= nn+(α+1) X+α+1n+(α+1) μ where μ=α/β is the collective premium. For the compared Pareto-Gamma model, the loss random variable X|θ∼ Pareto(t,θ), we have F(x)=1-(θ/x+θ)^t=w. 
The robust structural parameters of credibility premium can be derived through equations (<ref>), (<ref>), (<ref>), and (<ref>). This leads to the same structure credibility parameter as (<ref>), where the m_1, m_2 and m_3 are general notations for both T and W parameters and detailed described in appendix A.2. Hence, the credibility factor Z_p,q=nn+(α+1)m_3(p,q)/m_1^2(p,q). When (p,q)→ (0,0), m_1→1/t-1, m_2→t/t-2- 2t/t-1+1, and m_3→t/t-2- 2t/t-1+1-(1/t-1)^2. Therefore Z_p,q→nn+(α+1)t/t-2 and the credibility premium is lim_(p,q)→ (0,0)P_R_p,q= nn+(α+1)t/t-2 X+(α+1)t/t-2n+(α+1)t/t-2 μ Here, the collective premium has the format of μ=α/β(t-1). §.§ Location-scale Family Distribution Example In contrast to the Exponential and Pareto distributions that belong to the shape-scale family, we pick up Lognormal-Normal and Log-logistic-Normal (the typical location-scale family distribution) in the second pair of comparisons and show the general application of the proposed approach. We looked at the detailed derivation of the log-logistic-normal. For loss random variable X|θ∼ Log-logistic(θ, σ), θ∼ N(μ, v^2), we have F(x)=1/1+e^-log x-θ/σ where -∞<θ<∞ and σ<1. This setting guarantees the existence of the mean and variance. And the quantile function F^-1(w)=e^θ(w/1-w)^σ. Trimmed Version: By equations (<ref>) and (<ref>), the robust hypothetical mean and process variance are μ_T(p,q,θ) =e^θ/1-p-q∫_p^1-q(w/1-w)^σdw:=e^θ m_1T(p,q, σ), v_T(p,q,θ) =e^2θ1/(1-p-q)^2∫_p^1-q∫_p^1-q(min{u,v}-uv) σ u^σ-1/(1-u)^σ+1 σ v^σ-1/(1-v)^σ+1 du dv := e^2θ m_3T(p,q,σ). Censored Version: By equations (<ref>) and (<ref>), the robust estimators are derived as μ_W(p,q,θ) =e^θ{p(p/1-p)^σ+∫_p^1-p(w/1-w)^σdw+ q(1-q/q)^σ}:=e^θ m_1W(p,q, σ) (X_w|θ), = p [F^-1(p)]^2+ ∫_p^1-q[F^-1(w)]^2 dw+b[F^-1(1-q)]^2-[μ_p,q,σ(θ)]^2 = e^2θ[m_2W(p,q,σ)-m_1W^2(p,q,σ)], v_W(p,q,θ) = e^2θ{m_2W(p,q,σ)-m_1W^2(p,q,σ) +2 [ m_1W(p,q,σ) (p^2 Δ_p-q^2 Δ_1-q )+q^2 Δ_1-q (1-q/q)^σ -p^2 Δ_p (p/1-p)^σ] -(p^2 Δ_p-q^2 Δ_q)^2+p^3 Δ_p^2 + q^3 Δ_1-q^2 } :=e^2θ m_3W(p,q,σ), where Δ_p=σ p^σ-1/(1-p)^σ+1 and Δ_1-q=σ(1-q)^σ-1/q^σ+1. Therefore, if the risk parameter θ is normally distributed, the three structural parameters of credibility premium are μ_p,q =[e^θ] m_1(p,q,σ)=e^μ+1/2v^2 m_1(p,q,σ), a_p,q =(e^θ) m_1^2(p,q,σ)=(e^2μ+2v^2-e^2μ+v^2) m_1^2(p,q,σ), v_p,q =[e^2θ] m_3(p,q,σ)=e^2μ+2v^2m_3(p,q,σ). Finally, the credibility factor is Z_a,b=nn+e^2μ+2v^2m_3(p,q,σ)/e^2μ+v^2(e^v^2-1) m_1^2(p,q,σ) . In this model, when (p,q)→ (0,0), m_1(p,q), m_2(p,q), m_3(p,q) depend on variability σ and the integral the quantile and the derivative of quantile functions, and the robust mean converges to the sample mean X. For the counterpart of Lognormal - Normal model, the loss random variable X|θ∼ LN(θ, σ^'2) and prior distribution θ∼ N(μ, v^2). The CDF is F(x|θ)=Φ(ln x-θ/σ^')=w and the quantile function is F^-1(w)=e^θ+σ^'Φ^-1(w). The robust collective premium, variance of hypothetical mean, and expectation of process variance can be derived through equations (<ref>), (<ref>), (<ref>), and (<ref>). And the structural formulas and the credibility model are exactly the same as in (<ref>) and (<ref>), but the m_1, m_2 and m_3 are described separately. see details in appendix A.3. In this combination, when (p,q)→ (0,0), the values m_1(p,q,σ^')→ e^1/2σ^'2, m_2(p,q,σ^')→ e^2σ^'2, and m_3(p,q,σ^')→ e^2σ^'2-e^σ^'2. 
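Before turning to the numerical illustration, we note that the winsorized constants of this section can also be evaluated by direct numerical integration of the quantile function; the following sketch does so for the log-logistic example and checks the censored mean by simulation. The parameter values are illustrative, and SciPy is assumed to be available for the quadrature.

# Numerical check of the winsorized hypothetical mean for the log-logistic
# example: mu_W(p,q,theta) = e^theta * m_1W(p,q,sigma), with m_1W obtained by
# integrating the quantile function. All parameter values are illustrative.
import numpy as np
from scipy.integrate import quad

theta, sigma = 4.0, 0.2        # assumed location/scale, sigma < 1
p, q = 0.05, 0.10              # assumed censoring proportions

def quantile(w):
    return np.exp(theta) * (w / (1.0 - w)) ** sigma

m1_integral, _ = quad(lambda w: (w / (1.0 - w)) ** sigma, p, 1.0 - q)
m1W = p * (p / (1.0 - p)) ** sigma + m1_integral + q * ((1.0 - q) / q) ** sigma
mu_W = np.exp(theta) * m1W

# Monte Carlo check: censoring in probability space and averaging.
rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)
x = quantile(np.clip(u, p, 1.0 - q))
print(mu_W, x.mean())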
§ NUMERICAL ILLUSTRATION §.§ Exponential-Gamma vs Pareto-Gamma Now we use the above-developed four credibility models to illuminate the benefits of the proposed robust credibility premium estimation procedure and compare the performance of robust estimators based on trimming and censoring data, respectively. We deliberately choose the parameters such that the two models have a comparable premium. For the Exponential-Gamma model, we let losses follow an iid exponential distribution with mean of θ/2, whereas for the Pareto-Gamma model, the scale parameter t is chosen as 3. Besides, the risk parameter θ follows a same Gamma(α,β). So the two models could have a common hypothetical mean μ(θ)=θ/2, resulting in the same collective premium α/2β when (a,b)→ (0,0). Furthermore, the prior distribution of gamma has α=4 and β=2, which could lead to the identical collective premium of μ = 1 regardless the model structure. When the same sample size n is fixed, the credibility factor Z only depends on k_p,q:=v_p,q/a_p,q. The higher the ratio is, the less credibility on the observed loss experience or specific group practice towards the overall premium. We now observe the trend of k_p,q with various trimming and censoring proportions. Figure <ref> shows the values of μ_p,q, v_p,q, a_p,q, k_p,q, and z_p,q for the two models. Left panel shows the values for different p's (increasing from 0 to 1) with q=0 while right panel shows the values for different q's (increasing from 0 to 1) with p=0. Some of the immediate and crucial observations out of Figure <ref> are summarized below: * As seen on the first row, the Expected value of the Hypothetical Mean (EHM) μ_p,0 is an increasing function of p for both models, which is also very intuitive because higher the left trimming/winsorizing proportion – the smaller observed values are eliminated and/or adjusted. With the similar arguments, it is easy to observe that μ_0,q is a decreasing function of q. But the impact of q on EHM is not as strong as that of p. Both models with two methods have the same values at (p,q)=(0,0), leading to a comparable credibility performance. The loaded collective premium of Exponential-Gamma and Pareto-Gamma models are very close to each other in a certain range, say 0≤ p ≤ 0.4 for T curve and 0≤ p ≤ 0.9 for W curve. * Going on the second row, the Expectation of Process Variance (EPV) v_p,0 is an increasing function of p for both models, but v_p,0 is significantly higher for Pareto-Gamma model compared to Exponential-Gamma, reflecting significantly heavier right rail for the former model. Further, for both models, v_p,0 is tremendously higher for trimmed version compared to winsorized values, and the gradient of T curve starts changing from the original point (0%) while the gradient of W curve starts changing around 40%, both indicating trimmed version is more volatile than the corresponding winsorized version. On the other hand, v_0,q is a decreasing function of q for both models, and v_0,q is not significantly higher for Pareto-Gamma model compared to Exponential-Gamma. The decreasing nature of v_0,q simply indicates that higher the right trimming/winsorizing – lower the variance of the models and via both methods. * It appears that the Variance of the Hypothetical Mean (VHM) a_p,0 for Pareto-Gamma model is increasing function consistently in higher rate than the corresponding values for Exponential-Gamma model. This is again simply indicating that the Pareto model has higher uncertainty on the right tail of the distribution. 
As expected, the other important observation is that the a_p,0 curves obtained via the trimming approach increase at a higher rate than those obtained via the winsorizing approach. A similar interpretation applies to the a_0,q curves, which are all decreasing functions of the right trimming/winsorizing proportion 0 < q < 1. * One of the most important observations is that the Bühlmann Credibility Factor (BCF) k_p,q is a convex function of the proportions for both models and via both methods. From the convexity of the k_p,q curves, we can infer that over- or under-trimming/winsorizing does not necessarily lead to lower or higher credibility. Rather, the maximum credibility, attained at the minimum value of k_p,q, can be achieved by a reasonable trade-off between the left and right trimming/winsorizing proportions. As seen from the right panel, about 3% right censoring in the Pareto-Gamma model makes the individual experience most credible for premium estimation. This is because v_p,q initially decreases faster than a_p,q, but the relative speed reverses at some point in this model. Furthermore, for the Pareto-Gamma model, either very low or very high trimming/winsorizing proportions yield high values of k_p,q and hence a low credibility factor Z_p,q. Therefore, in the presence of heavy right-tail observations, a reasonable credibility factor Z_p,q can be assigned to the experience data by controlling the influence of those extreme observations on the right tail. This concept can be thought of as a bias-variance trade-off mechanism for model accuracy and interpretability, see, e.g., <cit.>, p. 219. Comparing the two models, the BCF k_p,q curves for the Pareto models are higher than those for the corresponding Exponential models. This again indicates that, for the same sample size, the Pareto models receive less credibility than the competing Exponential models. Similarly, the BCF k_p,q curves obtained via trimming are higher than those obtained via winsorizing. * Finally, we investigate the Credibility Factor (CF), Z_p,q. As we can observe, the Z_p,0 curves are almost flat for 0 ≤ p ≤ 0.75, so the credibility does not vary much for smaller left trimming/winsorizing proportions until a certain threshold is crossed. Note that the CF is a non-increasing function of 0 ≤ p < 1 with lim_p → 1 Z_p,0 = 0, indicating that more weight is put on the collective premium μ_p,0 when the left truncation removes a considerable portion of the individual data, not because of higher homogeneity but because of greater uncertainty in the hypothetical mean, as can be seen in the value of a_p,0. On the other hand, observing the right panel, Z_0,q is a decreasing function of q for both models and via both methods. This coincides with our intuition because the losses are quite homogeneous when there is considerable right-tail truncation/censorship. That is, the lower the right trimming/winsorizing proportion, the more heterogeneous the different groups in a single portfolio, and the more group-specific credibility is needed. Similarly, the higher the right trimming/winsorizing proportion, the more the heterogeneity in the sample data is removed, leading to more homogeneous data and less credibility. * In line with our expectation, and as one of the major findings of this work, all five structural parameters obtained via trimming are more volatile compared to the corresponding quantities obtained via winsorizing. 
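For concreteness, the right-panel quantities for the winsorized Exponential-Gamma case can be reproduced directly from the closed-form constants of Section 4.1, as in the short sketch below. The prior parameters follow Section 5.1, while the exposure n and the grid of censoring proportions are illustrative; the common scale factor arising from taking the exponential mean to be θ/2 is omitted here, since it rescales μ_p,q, a_p,q, and v_p,q but cancels in k_p,q and Z_p,q.

# Winsorized structural parameters for the Exponential-Gamma example, using the
# closed-form constants m_1W, m_2W, m_3W of Section 4.1 as written above.
import numpy as np

alpha, beta, n = 4.0, 2.0, 10        # prior Gamma(alpha, beta); n is an assumed exposure
p = 0.0                              # left proportion fixed at 0, as in the right panel

def m_constants(p, q):
    m1 = 1.0 - p - q - np.log(1.0 - p)
    m2 = (np.log(1.0 - p) ** 2 + 2.0 * (1.0 - p) * (1.0 - np.log(1.0 - p))
          - 2.0 * q * (1.0 - np.log(q)))
    m3 = (m2 - m1 ** 2
          + 2.0 * (m1 * (p ** 2 / (1.0 - p) - q) - q * np.log(q)
                   + p ** 2 / (1.0 - p) * np.log(1.0 - p))
          + p ** 3 / (1.0 - p) + q * (1.0 - q) + 2.0 * p ** 2 * q / (1.0 - p))
    return m1, m2, m3

for q in (0.01, 0.05, 0.10, 0.25):
    m1, _, m3 = m_constants(p, q)
    mu = alpha / beta * m1                         # collective premium (up to scale)
    a = alpha / beta ** 2 * m1 ** 2                # variance of hypothetical means
    v = alpha * (alpha + 1.0) / beta ** 2 * m3     # expected process variance
    k = v / a                                      # = (alpha + 1) * m3 / m1^2
    Z = n / (n + k)
    print(f"q={q:.2f}  mu={mu:.3f}  a={a:.3f}  v={v:.3f}  k={k:.3f}  Z={Z:.3f}")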
§.§ Lognormal-Normal vs Log-logistic-Normal As in the previous example, we expect a comparable premium for the initial credibility estimation. This requires an equivalent premium at (p,q)=(0,0) between Lognormal-Normal and Log-logistic-Normal combinations, thus we set e^1/2σ^'2 = ∫_0^1( w/1-w)^σ^'dw, and the risk parameter θ follows the same normal distribution N(μ, v). In such a situation, the two models could have the same hypothetical mean and collective premium when (a,b)→ (0,0), but σ≠σ^' initially. The setup parameters that satisfy (<ref>) were chosen as: σ=0.3652002,σ^'=0.2, μ=4, v=1. Figure <ref> displays the graphics of the five structural parameters; μ_p,q, v_p,q, a_p,q, k_p,q, and z_p,q for these two models. Some of the important observations out of Figure <ref> are very similar to the corresponding observations from Figure <ref> except v_0,q. Unlike in Figure <ref>, v_0,q curves in Figure <ref> are convex functions of the right trimming/winsorizing proportion 0 < q < 1. Therefore, a right trimming/winsorizing proportion closure to 0.5 will lead to the models having lower expected process variance, v_0,q. The other noticeable difference between Figure <ref> and <ref> is that in Figure <ref> and for W-estimators, the trend of the structural five parameters does not depend on the underlying assumed models. § NON-PARAMETRIC ESTIMATION WITH EXAMPLE §.§ Non-parametric Procedure Empirical Bayes Credibility Models have been widely used in actuarial loss estimation (see <cit.> and <cit.>). Now we examine the empirical Bayes methods based on the robust trimmed or censored data (instead of original data), and display how to use the sample data to estimate the μ_p,q, a_p,q and v_p,q needed for building the credibility premium if the underlying parametric model is unavailable. Suppose there are r groups of insureds in a portfolio and the exposure years for each group are different. Let X_ijk be random variable representing the unit loss in ith group jth year and for kth policy-holder. Besides, we denote θ_i=risk parameter for group i (i=1,, r). n_i=number of years of experience for group i (i=1,, r) m_ij=number of policyholders in group i in year j (j=1,, n_i) m_i= ∑_jm_ij= exposure-years in group i It is assumed that for each group i, X_ijk share the identical risk parameter θ_i (does not change over years), with hypothetical mean E[X_ijk|θ_i]=μ(θ_i) and process variance Var(X_ijk|θ_i)=v(θ_i). Under each risk parameter θ_i, the conditional X_ijk|θ_i are assumed to be independent and identically distributed. Our objective is to estimate the risk premium for each group i, using the robust credibility approaches. And then determine the total premium for the entire portfolio. Let R_ijk(p,q) be the trimmed or censored version of X_ijk by (<ref>) , with 100p% left-trimmed/censored and 100q% right-trimmed/censored. Then as the exposure years m_i→∞, the empirical hypothetical mean converge to the parameters such that μ^(i)(p,q)=∑_j=1^n_i∑_k=1^m_ijR_ijk(p,q)m_i→μ(p,q,θ_i) And the empirical estimate, v^(i)(p,q), of process variance v(p,q,θ_i) comes from (<ref>) and (<ref>) relying on empirical distributions and derivative approximations, respectively (see details sample in appendix A.4). 
All these lead to the robust version of empirical structural parameters Collective Premium μ_p,q=∑_i=1^rm_i μ^(i)(p,q)/∑_i=1^rm_i Expectation of Process Variance v_p,q=∑_i=1^rm_i v^(i)(p,q)/∑_i=1^rm_i Variance of Hypothetical Mean a_p,q=∑_i=1^rm_i (μ^(i)(p,q)-μ_p,q)^2/∑_i=1^rm_i-1 Again, as the exposure years m_i→∞, these empirical estimates converge to the parameters μ_p,q, a_p,q and v_p,q, respectively. Therefore, the credibility factor for each group i is Z_p,q^(i)=m_i/m_i+v_p,q/a_p,q and the credibility premium for Group i is estimated by P_R^(i) =Z_p,q^(i) μ^(i)(p,q)+(1-Z_p,q^(i)) μ_p,q. §.§ Real Data Illustration Next, a real data analysis is conducted to indicate the benefits of the proposed robust credibility premium estimations. The target under consideration is the Local Government Property Insurance Fund (LGPIF), an insurance pool administered by the Wisconsin Office of the Insurance Commissioner, which has been studied by <cit.>. This insurance fund covers local government properties that include counties, cities, towns, villages, school districts, and library boards. Our goal is to investigate what effect initial assumptions have on the structure parameter estimation and the corresponding group credibility premiums in this insurance portfolio. 1377 LGPIF loss observations were recorded in 2010, and the summary statistics of each type of property are listed in Table <ref>. It is clear that the loss data of each property type resembles a right heavy-tailed distribution. Thus, we will see how the upper trimming/censoring proportion b can take against the extreme claims and affect the credibility premium ultimately. We set b=(0,0.005, 0.01,0.02,0.05,0.10) in robust trimming and censoring, respectively, and then derive the structural parameters for above-mentioned property funds through the procedure (<ref>) – (<ref>). The collective premium for each class and the total estimated premiums are displayed in Table <ref>. In Table <ref>, the impact of robust methods on the overall premium estimation is consistent, all the censored total premiums have a higher level than that of the trimmed counterparts. However, this pattern was not shown for individual type of fund, which means, not every premium based on trimming is below the one based on censoring. Sample size seems to play an important role here. For a small sample data of Misc, 2% (b=0.02) of censoring will alternate the premium from 43525 to 59023, but the new estimation was 6.5% less than the change with trimming framework. The actual collected premium of LGPIF is 25 million each year during 2006 - 2010. Obviously, the Buhlmann credibility premium with original loss leads to an estimation that more than double the real collected value. Meanwhile, 25 million is close to the total estimated premium with 5% of trimming and 10% of censoring, respectively. The finding is very critical when the deductibles of the incoming policies are not known. § CONCLUSION AND DISCUSSION In this paper, we develop a robust Bühlmann credibility via the censored version of loss data and examine the asymptotic properties of structural parameters for a variety of parametric models that are typically used for pricing insurance risks. The simulation study is conducted by varying the left and right censoring proportions from 0% to 100% for two competing conditional loss distributions, in which one resembles the shape of the other, but has a heavier tail for extreme values. 
Extending the theorem and outcome of <cit.>, we also derive the non-parametric estimation procedure and analyze the sensitivity of censoring proportion to the credibility factor and ultimate premium estimation in the group insurance contract. To distinguish the impact of risk measures, all the results from the proposed censored scheme (W) are compared via the counterpart trimmed experience (T) (see <cit.>). Finally, we discuss the major findings of this scholarly work, which include * Compared to the classical credibility approach, the use of robust credibility (both T and W cases) offers the advantage of preventing the effect caused by extreme losses or model mis-specifications. It can better capture the heavy tail of the underlying loss models and thus improve the perspective of risk control to the insureds. * All the structural parameters via censoring are less volatile compared to the corresponding quantities via trimming. In location-scale examples, the censored scheme even could reduce the influence of model assumptions on credibility estimation and provide a more stable estimation for potential risk management. * A small proportion of censoring or trimming could significantly adjust the collective premium of a group insurance contract and avoid over-pricing issues. And the W procedure allows a larger proportion to handle extreme claims than the T scheme, leading to more financially conservative insurers. § ACKNOWLEDGEMENTS The authors are very appreciative of valuable insights and useful comments provided by an anonymous referee, which helped to significantly improve the paper. 5mm plain plain plain § APPENDIX §.§ A.1 Exponential-Gamma The integral of the second moment quantile function of exponential is ∫_p^1-q[log (1-w)]^2 dw=-∫_1-p^q(log u)^2 du=-[u(log u)^2+2u(1-log u)]|_1-p^q =(1-p)[log (1-p)]^2+2(1-p)[1-log (1-p)]-q(log q)^2-2q(1-log q). Thus the variance of censored mean is (X_w|θ) = [X_w^2|θ]-E^2[X_w|θ] = p [F^-1(p)]^2+ ∫_p^1-q[F^-1(w)]^2 dw+q[F^-1(1-q)]^2-[μ_p,q(θ)]^2 =1θ^2{p[log(1-p)]^2+∫_p^1-q[log(1-w)]^2 dw+q(log q)^2}-[μ_p,q(θ)]^2 =1θ^2{p[log(1-p)]^2+(1-p)[log(1-p)]^2+2(1-p)[1-log(1-p)]-q(log q)^2 -2q(1-logq)+q(log q)^2}-[μ_p,q(θ)]^2 =1θ^2{[log(1-p)]^2+2(1-p)[1-log(1-p)]-2q(1-log q) }-1θ^2[m_1(p,q)]^2 :=1θ^2[m_2W(p,q)-[m_1W(p,q)]^2]. In equation (<ref>), the derivative of exponential quantile functions is H^'(w)=(F^-1)^'(w)=1F^'(F^-1(w))=1F^'(-log(1-w)/θ)=1θ e^-θ-log(1-w)/θ=1θ(1-w). Therefore, the process variance by (<ref>) is v_p,q(θ) =1θ^2{m_2W(p,q)-m_1W^2(p,q)+2[m_1(p,q) (p^2/1-p-q)-q log q+p^2/1-p log(1-p)] +p^3/1-p+q(1-q)+2p^2q/1-p} :=1θ^2m_3W(p,q). §.§ A.2 Pareto-Gamma The integral of the second moment quantile function of Pareto is ∫_p^1-q[(1-w)^-1/t-1]^2 dw = -∫_1-p^q(u^-1/t-1)^2 du = -∫_1-p^qu^-2/t du+2∫_1-p^qu^-1/t du-∫_1-p^q1 du = -[t/t-2 u^t-2/t]|_1-p^q+[2t/t-1 u^t-1/t]|_1-p^q-u|_1-p^q =t/t-2(1-p)^t-2/t-t/t-2 q^t-2/t-2t/t-1(1-p)^t-1/t+2t/t-1 q^t-1/t+(1-p)-q. In equation (<ref>), the derivative of Pareto quantile functions is H^'(w)=1F^'(θ[(1-w)^-1/t-1])=1tθ^t/[θ+θ[(1-w)^-1/t-1]]^t+1=θ(1-w)^-t+1/tt. Trimmed Version: The hypothetical mean is μ_T(p,q,θ) =θ{1/1-p-q∫_p^1-q[(1-w)^-1/t-1]dw} =θ 1/1-p-q{(1-p)[t/t-1(1-p)^-1/t-1]-q(t/t-1q^-1/t-1)} =θ m_1T(p,q). 
And the process variance is v_T(p,q,θ) =θ^2 1/(1-p-q)^2∫_p^1-q∫_p^1-q(min{u,v}-uv) (1-u)^-t+1/tt (1-v)^-t+1/tt du dv := θ^2 m_3T(p,q) Censored Version: The hypothetical mean is μ_W(p,q,θ) =θ{p[(1-p)^-1/t-1]+∫_p^1-q[(1-w)^-1/t-1]dw+q[q^-1/t-1]} =θ[p[(1-p)^-1/t-1]+(1-p)[t/t-1(1-p)^-1/t-1]-q(t/t-1q^-1/t-1)+q(q^-1/t-1)] :=θ m_1W(p,q). And the censored variance is (X_w|θ) =θ^2{p[(1-p)^-1/t-1]^2+ ∫_p^1-q[(1-w)^-1/t-1]^2 dw+q[q^-1/t-1]^2}-[μ_p,q(θ)]^2 =θ^2{p[(1-p)^-1/t-1]^2+t/t-2(1-p)^t-2/t-2t/t-1(1-p)^t-1/t+(1-p) +q[q^-1/t-1]^2-t/t-2q^t-2/t +2t/t-1 q^t-1/t-q}-[θ m_1(p,q)]^2 :=θ^2[m_2W(p,q)-m_1W^2(p,q)]. Again, by (<ref>), the process variance becomes v_W(p,q,θ) =θ^2{m_2(p,q)-m_1^2(p,q)+2/t[m_1(p,q) (p^2(1-p)^-t+1/t-q^t-1/t)+q^t-1/t(q^-1/t-1)-p^2(1-p)^-t+1/t ((1-p)^-1/t-1)]+1/t^2[p^3(1-p)(1-p)^-2(t+1)/t+q^3(1-q)q^-2(t+1)/t+2p^2q^2(1-p)^-t+1/tq^-t+1/t]} :=θ^2m_3W(p,q). §.§ A.3 Lognormal-Normal The integrals of the Lognormal moment quantile with limited boundaries are ∫_0^aF^-1(w)dw =∫_0^π_axf(x)dx=∫_0^π_axϕ(z)/σ^' xdx=∫_-∞^lnπ_a-θ/σ^'ϕ(z)/σ ^'d(e^θ+zσ) = e^θ+1/2σ^'2∫_-∞^lnπ_a-θ/σ^'1/√(2π)e^-(z-σ^')^2/2dz = e^θ+1/2σ^'2∫_-∞^lnπ_a-θ/σ^'-σ^'1/√(2π)e^-t^2/2dt = e^θ+1/2σ^'2Φ(Φ^-1(a)-σ^') ∫_0^a[F^-1(w)]^2 dw = ∫_0^π_ax^2 f(x)dx = ∫_-∞^lnπ_a-θ/σ^'e^θ+zσ^' ϕ(z)/σ^'d(e^θ+zσ^') = e^2θ∫_-∞^lnπ_a-θ/σ^'e^2zσ^'1/√(2π)e^-z^2/2dz = e^2θ+2σ^'2∫_-∞^lnπ_a-θ/σ^'1/√(2π)e^-(z-2σ^'2)dz = ∫_-∞^lnπ_a-θ/σ^'-2σ^'1/√(2π)e^-t^2/2dt = e^2θ+2σ^'2 Φ(Φ^-1(a)-2σ^'). In equation (<ref>), the derivative of Pareto quantile functions is H^'(w) =1F^'(e^θ+σΦ^-1(w))=σ e^θ+σΦ^-1(w)1/√(2π) e^-(ln e^θ+σΦ^-1(w)-θ/σ)^2/2=√(2π)σ e^[Φ^-1(w)]^2/2+θ+σΦ^-1(w). Trimmed Version: The hypothetical mean is μ_p,q(θ) =1/1-p-q∫_p^1-qe^θ+σ^'Φ^-1(w)dw =e^θ+1/2σ^'2/1-p-q[Φ(Φ^-1(1-q)-σ^')-Φ(Φ^-1(p)-σ^')] =e^θ m_1T(p,q,σ). And the process variance is v_a,b(θ) = 1/(1-p-q)^2∫_p^1-q∫_p^1-q(min{u,v}-uv)2πσ^2 e^[Φ^-1(u)]^2/2+θ+σ^'Φ^-1(u)e^[Φ^-1(v)]^2/2+θ+σ^'Φ^-1(v) du dv := e^2θ m_3T(a,b,σ). Censored Version: The hypothetical mean is μ_p,q(θ) =e^θ{p e^σ^'Φ^-1(p)+∫_p^1-qe^σ^'Φ^-1(w)dw+q e^σ^'Φ^-1(1-q)} =e^θ{p e^σ^'Φ^-1(p)+e^1/2σ^'2[Φ(Φ^-1(1-q)-σ^')-Φ(Φ^-1(p)-σ^')]+q e^Φ^-1(1-q)} :=e^θ m_1W(p,q,σ^'). And the censored variance is (X_w|θ) = p [F^-1(p)]^2+ ∫_p^1-q[F^-1(w)]^2 dw+q[F^-1(1-q)]^2-[μ_p,q(θ)]^2 = e^2θ{p e^2σ^'Φ^-1(p)+ e^2σ^'2[Φ(Φ^-1(1-q)-2σ^')-Φ(Φ^-1(p)-2σ^')] +q e^2σ^'Φ^-1(1-q)} -e^2θ m_1W^2(p,q,σ^') := e^2θ[m_2W(p,q,σ^')-[m_1W(p,q,σ^')]^2]. Finally, the process variance is v_W(p,q,θ) = e^2θ{ m_2W(p,q,σ^')-m_1W^2(p,q,σ^') +2[ m_1W(p,q,σ^') (p^2 Δ_p-q^2 Δ_1-q ) +q^2 Δ_1-q e^σ^'Φ^-1(1-q) -p^2 Δ_p e^σ^'Φ^-1(p)] -(p^2 Δ_p-q^2 Δ_1-q)^2+p^3 Δ_p^2 + q^3 Δ_1-q^2 } :=e^2θ m_3W(p,q,σ^'), where Δ_p=H^'(p)/e^θ and Δ_1-q=H^'(1-q)/e^θ. §.§ A.4 Non-parametric Model Property The empirical sample estimate of (<ref>) is v_p,q = n^2/(n-*np-*nq)^2∑_j=*np+1^n-*nq∑_i=*np+1^n-*nq( min{j/n,i/n}-j/ni/n) ( x_(j+1)-x_(j)) ( x_(i+1)-x_(i)). The sample estimate of the components of (<ref>) are μ_p,q = 1/n( *np x_*np+1+ ∑_i=*np+1^n-*nqx_i+*nq x_n-*nq), (X_W|θ) = 1/n( *np x_*np+1^2+ ∑_i=*np+1^n-*nqx_i^2+*nq x_n-*nq^2) -μ_p,q^2, H(p)= 0.5(x_np+x_np+1); if np is an integer, x_*np; if np is not an integer, H(1-q) = 0.5(x_n-nq+x_n-nq+1); if nq is an integer, x_*n-nq; if nq is not an integer, A = (*np)/n^2(x_*np+1 - x_*np); B = (*nq)/n^2( x_* n-nq - x_* n-nq-1).
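As a companion to the procedure of Section 6.1, the following small sketch carries out the group-level estimation on synthetic winsorized data. For brevity, each group's process-variance term is estimated by the sample variance of the winsorized observations rather than by the full expressions of Appendix A.4, and all loss figures are simulated rather than taken from the LGPIF data.

# Non-parametric winsorized credibility sketch: per-group winsorized means,
# pooled structural parameters, and the resulting group premiums.
import numpy as np

def winsorize(x, p, q):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    lo, hi = int(n * p), int(n * q)
    w = x.copy()
    w[:lo] = x[lo]
    if hi > 0:
        w[n - hi:] = x[n - hi - 1]
    return w

rng = np.random.default_rng(0)
groups = [rng.pareto(2.5, size=m) * 10.0 for m in (120, 80, 200)]  # synthetic losses
p, q = 0.0, 0.02

wins = [winsorize(g, p, q) for g in groups]
m_i = np.array([len(g) for g in groups], dtype=float)       # exposures per group
mu_i = np.array([w.mean() for w in wins])                    # group winsorized means
v_i = np.array([w.var(ddof=1) for w in wins])                # stand-in process variances

mu = np.sum(m_i * mu_i) / m_i.sum()                          # collective premium
v = np.sum(m_i * v_i) / m_i.sum()                            # expected process variance
a = np.sum(m_i * (mu_i - mu) ** 2) / (m_i.sum() - 1.0)       # variance of hypothetical means
Z = m_i / (m_i + v / a)                                      # credibility factors
premiums = Z * mu_i + (1.0 - Z) * mu
print(premiums)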
http://arxiv.org/abs/2306.06360v1
20230610063928
3D reconstruction using Structure for Motion
[ "Kshitij Karnawat", "Hritvik Choudhari", "Abhimanyu Saxena", "Mudit Singal", "Raajith Gadam" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG", "cs.RO", "65D19" ]
3D reconstruction using Structure for Motion Kshitij Karnawat kshitij [email protected] Hritvik Choudhari hac [email protected] Abhimanyu Saxena asaxena4 [email protected] Mudit Singal msingal [email protected] Raajith Gadam raajithg [email protected] July 31, 2023 We are working towards 3D reconstruction of indoor spaces using a pair of HDR cameras in a stereo-vision configuration mounted on an indoor mobile floor robot. The robot captures various textures and spatial features as 2D images, and this data is simultaneously fed to our algorithm, which allows us to visualize the depth map. § INTRODUCTION Structure from Motion (SfM) is a technique used in computer vision and photogrammetry to create 3D models from a set of 2D images captured from different viewpoints. The technique involves reconstructing the 3D structure of an object or scene by analyzing the images' geometric relationships and camera parameters. The primary goal of our project is to create a robust and efficient SfM algorithm that can handle large data sets and provide accurate results in a reasonable amount of time while using a stereo camera configuration on a mobile floor robot integrated with a Raspberry Pi. We also aim to make the system autonomous. To achieve this, we have conducted a comprehensive study of the existing SfM techniques, their limitations, and their strengths. SfM has several advantages over other techniques used for 3D reconstruction. It is a relatively low-cost technique that only requires a standard camera, making it accessible to a wider range of users. This technique can also handle a larger variety of object geometries and textures, making it more versatile. Additionally, SfM can be used in conjunction with other techniques, such as LiDAR and stereo vision, to create more accurate 3D models. However, SfM also has some limitations, such as the need for a large number of images to construct a reliable 3D model, and its sensitivity to camera calibration and lighting conditions. We are working towards improving the reliability of the model by integrating it with the stereo vision technique and improving the algorithm. The project also involves the implementation of several algorithms and techniques such as feature extraction, matching, bundle adjustment, and triangulation, among others. § TECHNIQUES EMPLOYED Our project aims to use a pair of HDR cameras in a stereo-vision configuration to reconstruct 3D models of indoor spaces. In this literature review, we examine the existing research on the use of HDR cameras and stereo vision for 3D reconstruction, as well as related techniques such as feature detection and matching, bundle adjustment, and parallel processing. HDR cameras have been used in several applications related to 3D reconstruction, such as photogrammetry and computer vision. HDR imaging can improve the quality and accuracy of the reconstructed 3D models by providing more detailed and realistic images. The use of HDR cameras has been shown to be particularly useful in scenes with high-contrast lighting, where conventional cameras may struggle to capture all the detail in both bright and dark areas. 
§ APPROACH Below is the flow chart that briefly describes the process to obtain a 3D point cloud from 2D images: § COMPUTING DEPTH MAP FROM STEREO IMAGES We, humans, have evolved to be with two eyes that we can perceive depth. And when we organize cameras analogously, it’s called Stereo-Vision. A stereo-vision system is generally made of two side-by-side cameras looking at the same scene, the following figure shows the setup of a stereo rig with an ideal configuration, aligned perfectly. Stereo vision is another important technique for 3D reconstruction, which involves using multiple cameras to capture images from different viewpoints. Stereo vision can provide more accurate depth information compared to other techniques such as structure from motion (SfM). Several algorithms have been developed to perform stereo matching and generate a 3D reconstruction from the stereo images. Feature detection and matching is a crucial steps in many 3D reconstruction algorithms, including the proposed project. Popular feature detection algorithms include SIFT, SURF, and ORB, while feature matching algorithms include brute force matching and RANSAC. The goal of feature detection and matching is to identify corresponding points in multiple images, which can then be used to generate a 3D point cloud. Bundle adjustment is another important technique used in 3D reconstruction to refine the camera parameters and improve the accuracy of the 3D model. Bundle adjustment algorithms typically minimize the reprojection error between the 2D image points and the corresponding 3D points. Techniques such as the Levenberg-Marquardt algorithm, Gauss-Newton optimization, and the conjugate gradient method have been used for bundle adjustment. Parallel processing can be used to speed up the 3D reconstruction process by distributing the computation across multiple processors or nodes. Parallel processing can be particularly useful for large-scale 3D reconstruction tasks that involve processing a large number of images. In summary, our proposed project aims to use HDR cameras in a stereo vision configuration to reconstruct 3D models of indoor spaces. This approach builds on existing research on the use of HDR imaging, stereo vision, and related techniques such as feature detection and matching, bundle adjustment, and parallel processing. The proposed project has several potential applications in areas such as robotics, interior design, and architecture, and has the potential to make significant contributions to the field of 3D reconstruction. § DEPTH MODEL: VIT-HYBRID Visual Transformers, specifically the DPT (Dense Prediction Transformers) model, are a type of deep learning architecture used for image recognition and understanding tasks. They are inspired by the success of Transformers in natural language processing and aim to apply similar principles to visual data. The DPT model combines the strengths of convolutional neural networks (CNNs) and Transformers. CNNs are powerful in capturing local patterns and spatial hierarchies in images, while Transformers excel at capturing long-range dependencies and modeling relationships between different parts of the input. In the DPT model, the image is divided into a grid of patches, and each patch is treated as a separate token, like how words are treated in natural language processing tasks. These image patches are then fed into a Transformer architecture, consisting of multiple layers of self-attention and feed-forward neural networks. 
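Before continuing with the transformer details, the classical stereo stage outlined above can be summarized in a short sketch: a disparity map is computed from a rectified pair with OpenCV's semi-global block matcher and converted to metric depth. The file names, focal length, and baseline are placeholders for whatever the actual rig provides.

# Minimal stereo depth sketch with OpenCV: compute disparity on a rectified
# pair and convert it to metric depth. All constants are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image (placeholder path)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image (placeholder path)

matcher = cv2.StereoSGBM_create(minDisparity=0,
                                numDisparities=128,     # must be divisible by 16
                                blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

focal_px = 700.0      # focal length in pixels (placeholder)
baseline_m = 0.12     # camera baseline in metres (placeholder)

depth = np.full(disparity.shape, np.inf, dtype=np.float32)
valid = disparity > 0
depth[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d
print(depth[valid].min(), depth[valid].max())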
The self-attention mechanism allows the DPT model to capture relationships between different patches, enabling it to understand global context and capture long-range dependencies in the image. By considering interactions between all patches, the model can learn to attend to important visual features and encode them effectively. Dilated convolutions, which are convolutional layers with increased spacing between their filter elements, are utilized in the DPT model to capture multi-scale information from the image. By incorporating dilated convolutions within the self-attention mechanism, the model can process image features at different levels of detail, effectively handling objects of various sizes. Training the DPT model involves optimizing it with a suitable loss function, such as cross-entropy loss, and using a large, labeled dataset of images. During training, the model learns to map input images to their corresponding labels, allowing it to make predictions on new, unseen images during the inference stage. The DPT model has shown promising results in various computer vision tasks, including image classification, object detection, and semantic segmentation. Its ability to capture long-range dependencies and understand global context makes it particularly effective in scenarios where spatial relationships between image elements are crucial. Overall, Visual Transformers like the DPT model represent a novel approach to visual processing, leveraging the strengths of Transformers and CNNs to achieve state-of-the-art performance in image understanding tasks. § POINT CLOUD REGISTRATION Multiway registration refers to the process of aligning multiple entities or datasets simultaneously to achieve a coherent alignment. This can be applied to various types of data, such as point clouds, images, or 3D models. The goal is to find a transformation that aligns all the entities together, ensuring global consistency. The process of multiway alignment typically involves the following steps: * Initialization: - Select one entity as the reference or a common coordinate system. - Initialise the transformations for each entity relative to the reference. * Correspondence Estimation: - Establish correspondences between entities to determine the relationships or associations between their elements. - This can be done through feature matching, nearest neighbour search, or other methods depending on the type of data. * Transformation Estimation: - Compute the transformations (e.g., rotation, translation) that align each entity with the reference or with other entities. - This can be achieved through techniques such as RANSAC, Iterative Closest Point (ICP), or optimization-based approaches. * Global Alignment: - Perform a global alignment step to refine the transformations obtained in the previous step. - This step aims to find a transformation that minimises the global distance or discrepancy among all entities. - It can involve solving an optimization problem that considers the cumulative alignment error or fitting a global transformation model. * Iterative Refinement: - Iterate the correspondence estimation, transformation estimation, and global alignment steps to refine the alignment. - This iterative process helps improve the accuracy and convergence of the alignment by iteratively updating the correspondences and transformations. The specific techniques and algorithms used in multiway alignment depend on the type of data and the application domain. 
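The pipeline above can be sketched with Open3D's pose-graph utilities. This is a minimal, hedged example: the scan file names are placeholders, and the voxel size and correspondence distances are illustrative values rather than the ones used in our system.

import numpy as np
import open3d as o3d

# Hypothetical point clouds captured from successive robot viewpoints.
pcds = [o3d.io.read_point_cloud(f"scan_{i}.pcd") for i in range(3)]
for p in pcds:
    p.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

voxel = 0.05
pose_graph = o3d.pipelines.registration.PoseGraph()
odometry = np.identity(4)
pose_graph.nodes.append(o3d.pipelines.registration.PoseGraphNode(odometry))

for s in range(len(pcds) - 1):
    t = s + 1
    # Correspondence + transformation estimation via pairwise point-to-plane ICP.
    icp = o3d.pipelines.registration.registration_icp(
        pcds[s], pcds[t], voxel * 1.5, np.identity(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    info = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
        pcds[s], pcds[t], voxel * 1.5, icp.transformation)
    odometry = np.dot(icp.transformation, odometry)
    pose_graph.nodes.append(o3d.pipelines.registration.PoseGraphNode(np.linalg.inv(odometry)))
    pose_graph.edges.append(o3d.pipelines.registration.PoseGraphEdge(
        s, t, icp.transformation, info, uncertain=False))

# Global alignment / iterative refinement: optimize all poses jointly.
o3d.pipelines.registration.global_optimization(
    pose_graph,
    o3d.pipelines.registration.GlobalOptimizationLevenbergMarquardt(),
    o3d.pipelines.registration.GlobalOptimizationConvergenceCriteria(),
    o3d.pipelines.registration.GlobalOptimizationOption(
        max_correspondence_distance=voxel * 1.5, reference_node=0))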
Multiway alignment finds applications in various fields, including computer vision, robotics, medical imaging, and geospatial analysis, where multiple data sources or entities need to be aligned to achieve a consistent and integrated representation. §.§ Iterative Closest Point (ICP) Point-to-plane ICP (Iterative Closest Point) is a widely used algorithm for aligning two point clouds, or a point cloud to a surface model. It minimises the distance between corresponding points on the source and target surfaces. Unlike the traditional point-to-point ICP, which minimises the distance between individual points, point-to-plane ICP considers the distance between a point and a plane. It finds the optimal transformation (translation and rotation) that minimises the sum of squared distances between the source points and the planes defined by the target points. This approach is particularly useful when aligning point clouds with complex surfaces, as it takes into account the local surface geometry rather than just individual point positions. By aligning the source point cloud with the target surface, point-to-plane ICP can achieve more accurate registration results than point-to-point methods, especially in the presence of noise or partial overlap between the datasets. Given the source point cloud and the target point cloud, the first step is to establish correspondences between the points in the two clouds. This can be done using a nearest-neighbour search, where each point in the source cloud is matched with its closest point in the target cloud. For each correspondence (source point and target point), a plane is defined using the neighbouring points in the target cloud. The plane can be represented by a point on the plane and its associated normal vector. The goal is to find the transformation (rotation and translation) that minimises the sum of squared distances between the source points and the planes defined by the target points. This is typically formulated as an optimization problem, where the objective function to minimise is the sum of squared distances. To solve the optimization problem, an iterative process is employed: in each iteration, the current transformation is refined by estimating the gradients of the objective function and updating the transformation accordingly. This process continues until a convergence criterion is met, such as reaching a maximum number of iterations or a small change in the transformation parameters. Once the optimization converges, the final transformation is obtained, which aligns the source point cloud with the target point cloud based on the point-to-plane distance metric. The mathematics involved in point-to-plane ICP can be complex, as it requires computing distances, normals, and gradients, and solving an optimization problem. However, various algorithms and libraries provide implementations of point-to-plane ICP, simplifying its usage and handling the underlying mathematical operations. The goal of ICP is to align two point clouds: the old one (the existing points and normals in the 3D model) and the new one (the new points and normals that we want to integrate into the existing model). ICP returns the rotation-and-translation transform between these two point clouds.
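As a concrete illustration of the objective formalized below, the point-to-plane error for a fixed set of correspondences can be written directly in NumPy. This is a sketch only; a full ICP loop would re-estimate the correspondences and re-minimize this quantity at every iteration.

import numpy as np

def point_to_plane_error(T, src_pts, tgt_pts, tgt_normals):
    """Sum of squared point-to-plane distances for given correspondences.

    T           : 4x4 homogeneous transform applied to the source points q
    src_pts     : (N, 3) source points q
    tgt_pts     : (N, 3) corresponding target points p
    tgt_normals : (N, 3) unit normals n_p at the target points
    """
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])         # homogeneous coordinates
    moved = (T @ src_h.T).T[:, :3]                                   # T q
    residuals = np.einsum("ij,ij->i", tgt_pts - moved, tgt_normals)  # (p - T q) . n_p
    return np.sum(residuals ** 2)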
The Iterative Closest Point (ICP) algorithm minimises an objective function given by the point-to-plane distance (PPD) between the corresponding points in the two point clouds: E(𝐓)=∑_(𝐩, 𝐪) ∈𝒦((𝐩-𝐓𝐪) ·𝐧_𝐩)^2 where 𝐧_𝐩 is the normal at point 𝐩 and 𝒦 is the set of correspondences. It has been shown that the point-to-plane ICP algorithm has a faster convergence speed than the point-to-point ICP algorithm. § HARDWARE Our hardware component comprises the following stack: The Jetson Nano is powered by a GPU and a quad-core ARM Cortex-A57 CPU, making it suitable for computationally intensive tasks such as image processing and running inference with the Dense Prediction Transformer model. The Jetson Nano communicates with the Arduino Nano to drive the robot around the indoor space. The image processing and depth estimation pipeline on the Jetson Nano includes the following steps: 1. Image acquisition: The Jetson Nano captures images from its two MIPI CSI camera ports, to which Raspberry Pi Camera V2 modules are attached. 2. Preprocessing: The left and right images are captured and downsampled (resized) for stereo depth estimation. 3. Depth estimation: Depth information is calculated by two methods. The first is conventional disparity calculation, and the second is inference with a Dense Prediction Transformer that estimates depth from the right or left camera image alone. 4. Point cloud generation: The depth map and the corresponding RGB image are published to the local system using a ROS image publisher. These data are combined by Open3D functions to estimate a point cloud of the scene. 5. 3D scene generation: The Jetson sends data to the Arduino Nano over serial communication, and the robot moves around the space taking images from different viewpoints. § RESULTS We tested the 3D reconstruction of the indoor scene using two depth estimation techniques: * Conventional disparity-based depth estimation using a pair of stereo images. This resulted in a very coarse depth map, and hence the generated point cloud does not give an accurate representation of the 3D space. * For the second method, we used the Dense Prediction Transformer (DPT) to estimate depth with a monocular camera setup. It was observed that the depth map is more robust and consistent than with the disparity-based method. This is shown in the results below. * Another comparison was carried out between the Raspberry Pi camera module and a standard smartphone camera. It was observed that the resolution of the camera greatly affects the generated depth map and the subsequent point cloud. § POTENTIAL IMPROVEMENTS * Improving the camera resolution for better quality images. * Using a DL model to obtain point cloud registration can yield better results.
* Training the vision transformer on a bigger dataset with diverse scenes and environments.
http://arxiv.org/abs/2306.04143v1
20230607043002
RISC: A Corpus for Shout Type Classification and Shout Intensity Prediction
[ "Takahiro Fukumori", "Taito Ishida", "Yoichi Yamashita" ]
cs.SD
[ "cs.SD", "eess.AS" ]
RISC: A Corpus for Shout Type Classification and Shout Intensity Prediction Takahiro Fukumori^†, Taito Ishida^††, and Yoichi Yamashita^† ^†College of Information Science and Engineering, Ritsumeikan University, Japan. ^††Graduate School of Information Science and Engineering, Ritsumeikan University, Japan. ============================================================================================================================================================================================================================================== The detection of shouted speech is crucial in audio surveillance and monitoring. Although it is desirable for a security system to be able to identify emergencies, existing corpora provide only a binary label (i.e., shouted or normal) for each speech sample, making it difficult to predict the shout intensity. Furthermore, most corpora comprise only utterances typical of hazardous situations, meaning that classifiers cannot learn to discriminate such utterances from shouts typical of less hazardous situations, such as cheers. Thus, this paper presents a novel research resource, the RItsumeikan Shout Corpus (RISC), which contains a wide variety of shouted speech types collected in recording experiments. Each shouted speech sample in RISC has a shout type and is also assigned shout intensity ratings via a crowdsourcing service. We also present a comprehensive performance comparison among deep learning approaches for speech type classification tasks and a shout intensity prediction task. The results show that feature learning based on the spectral and cepstral domains achieves high performance, no matter which network architecture is used. The results also demonstrate that shout type classification and intensity prediction are still challenging tasks, and RISC is expected to contribute to further development in this research area. Keywords: RISC, Shout corpus, Speech type classification, Shout intensity prediction § INTRODUCTION The development of automated surveillance systems is essential to protect people's safety. To date, computer vision techniques have often been applied to video data captured by cameras <cit.>. Recently, many studies have also focused on the use of audio information recorded by microphones for abnormal situation detection <cit.>. Typical examples of sound categories targeted by conventional research include gunshots <cit.>, alarms <cit.>, rainfall <cit.>, running vehicles <cit.>, and mechanical faults <cit.>. In addition to the detection of such audio events, the ability to distinguish shouted speech from ordinary speech, such as daily conversations, is highly useful for emergency rescue operations. This problem can be formulated as a specific type of speech classification in which an input speech sample is judged to be either a shout or not. In several studies, a labeled corpus comprising shouted and normal speech has been constructed as a training resource <cit.>. As the basis for a practical audio surveillance system, the conventional corpora used in the literature are insufficient for two reasons. First, the existing corpora contain only binary labels for speech samples, i.e., shouted or normal, rather than a numerical score indicating the shout intensity. Although an audio surveillance system should ideally be able to judge different instances so as to assign different priorities for rescue, it is not straightforward to compare the level of emergency between shouts based on learning from binary labels only.
To solve this problem, each shouted speech sample should be associated with a shout intensity—the degree of `shout-like-ness' perceived by the listener. Here, we should emphasize that the shout intensity cannot be quantified using the sound pressure level of the speech because the sound pressure level greatly depends on the positional relationship between the microphone and the speaker. Second, most of the existing corpora comprise only utterances that typically occur in emergency situations (e.g., “help!” <cit.>). However, people also often shout for joy in nonhazardous situations. Although an audio surveillance system must discriminate between these different shout types, conventional studies have ignored this fact, and the feasibility of such discrimination is still unknown. The ability to predict a speaker's situation (i.e., hazardous or not) and emergency level will require a new shouted speech corpus labeled with shout type and intensity information. This paper presents a novel corpus of shouted speech, the RItsumeikan Shout Corpus (RISC), comprising angry shouts, screams, and cheers collected from a recording experiment at Ritsumeikan University. The process of creating of our corpus started with defining a list of possible sentences of shouted speech. Then, we asked experiment participants to utter each sentence while imagining a situation for which the sentence would be suitable. Finally, based on listening experiments using a crowdsourcing service, each shouted speech sample was assigned shout intensity ratings as crucial information for training emergency detectors. This paper also considers how to predict the speech type or shout intensity for a given speech sample. In recent years, deep learning has become a mainstream approach to shouted speech detection. For example, several methods have been proposed to model the relationship between the temporal variations of speech features and the speech status using convolutional neural networks (CNNs) or recurrent neural networks (RNNs) <cit.>. Most conventional studies have used traditional, manually designed low-dimensional features as the input to these networks. Typical features of this type include the mel-frequency cepstral coefficients (MFCCs) <cit.> and the mel spectrogram <cit.>. Here, we focus on the fact that other recent speech processing tasks have shown the effectiveness of automatic feature extraction from high-dimensional information. For example, temporal waveforms <cit.> and spectrograms <cit.> have been shown to improve the speech recognition performance of deep learning compared with traditional low-dimensional features. This trend can also be seen for speaker identification, for which deep models can automatically extract effective features from input raw speech <cit.>. Following these works, our recent study <cit.> has presented a novel speech classification method based on spectrogram and cepstrogram features obtained by arranging the spectra and cepstra, respectively, as time series. In this paper, we present a comprehensive performance comparison between the conventional methods and our deep spectral–cepstral approach <cit.> based on RISC for not only classification but also regression. The main contributions of this paper are summarized as follows: * We have constructed a novel corpus with various shout types and shout intensity ratings, which can support new recognition challenges in shouted speech detection research. 
This paper describes the details of its construction pipeline, including in-laboratory speech recording and crowdsourcing-based verification. Furthermore, we have released our corpus on the web[<https://t-fukumori.net/corpus/RISC/en.html>]. * Using the constructed corpus, we present comprehensive results for classification and regression obtained with conventional methods and our deep spectral–cepstral approach. The remainder of this paper is organized as follows. Section <ref> introduces conventional deep approaches for shout detection and the existing speech corpora used for model training. Section <ref> describes the procedure used to construct our corpus. In particular, we explain how we recorded the shouted speech samples and obtained the intensity ratings for each sample. Section <ref> describes the acoustic features and the structures of the deep approaches for detecting shouted speech. In Section <ref>, we present the results of experiments on shouted vs. normal speech classification, shout type classification, and shout intensity prediction based on RISC. Finally, Section <ref> concludes the paper and suggests some possible directions for future work. § RELATED WORKS §.§ Deep approaches for shouted speech detection Deep neural networks (DNNs) have dramatically improved the performance of speech analysis technology in recent years. Additionally, for shouted speech detection, DNNs have been shown to outperform conventional classifiers, such as Gaussian mixture models and hidden Markov models <cit.>. Therefore, a recent focus of related research has been the design of features that are suitable as inputs to DNNs. For example, Laffitte et al. <cit.> used the MFCCs and energy components to train a deep architecture consisting of restricted Boltzmann machines and deep belief networks for shouted speech detection. Baghel et al. <cit.> also calculated the MFCCs and their second derivatives for use as inputs to a DNN. Gaviria et al. <cit.> used the MFCCs and the mel spectrogram in their deep learning model. A recent method presented by Baghel et al. <cit.> relied on calculating an integrated linear prediction residual <cit.>, representing the period information of vocal fold vibration. The network architecture used in <cit.> consisted of an autoencoder, an attention mechanism, and bidirectional gated recurrent units (GRUs). Shouted speech is sometimes considered as one of the target classification categories for environmental sound recognition <cit.>. For example, Mun et al. <cit.> used the MFCCs to train DNNs in experiments on a home surveillance environment database. Valenti et al. <cit.> also detected acoustic sound events using DNNs and RNNs; their features were the log-mel spectrogram and its first derivative. These previous studies also used traditional and handcraft speech features. Our recent work <cit.> presented a novel approach based on learning descriptive features from the spectral and cepstral domains for shouted speech detection. Specifically, we used two types of high-dimensional features, spectrograms and cepstrograms, as inputs to a deep architecture. This feature learning approach showed superior performance over conventional low-dimensional features. As the major difference between the present paper and the previous conference version <cit.>, this paper presents a comprehensive performance comparison among different deep approaches on our new corpus. 
Furthermore, whereas previous studies, including our recent work <cit.>, have addressed only the classification task, this paper reports the performance for not only classification but also shout intensity prediction. §.§ Existing corpora for shouted speech detection A training corpus is essential for developing a shouted-speech detector, and many research groups have constructed their own corpora for this purpose. Table <ref> presents comparisons between existing corpora and our new corpus. Nandwana et al. <cit.> recorded screams and neutral speech samples for binary classification. The corpus used in <cit.> contained shouted speech collected in subway trains, with each speech sample labeled as “scream,” “shout,” “conversation,” or “noise.” Mesbahi et al. <cit.> collected 91 shouted speech samples from various web sources, including screams, expressions of panic, baby cries, cries of pain, etc., for speech characteristic analysis. The corpus constructed in <cit.> comprised Indian English utterances with binary labels (i.e., normal and shouted). In <cit.>, the authors subsequently presented a speech dataset taken from news debates in Indian English, with categories of normal speech, shouted speech, and noise. Notably, in these works, the situation in which a person is shouting is usually assumed to be unknown or a hazardous one only. An exception is a Finnish corpus presented by Pohjalainen et al. <cit.>, in which the speakers uttered and shouted general sentences that could be encountered in both hazardous and nonhazardous. However, they used no situation labels for classifier training, and there was no discussion of shout type classification. Some corpora targeting environmental sound or emotional speech recognition also contain shouted speech as one of the classification categories. For example, an environmental sound corpus <cit.> that was used in the DCASE 2016 Challenge Task 3 <cit.> includes samples of children's shouts in a residential area. P. Foggia et al. <cit.> collected four kinds of audio clips, namely, screams, glass breaking, gunshots, and background noise, as the main categories for audio surveillance. Hsu et al. <cit.> assigned labels of either “laughter,” “breathing,” “shout,” or “background” to a subset of the speech samples in the emotional speech corpus named NNIME <cit.>. The speech samples in the above corpora have categorical labels but no continuous values; consequently, they cannot support regressor training. Furthermore, the corpus developers sometimes intentionally screened speech samples during their recording experiments, although their guidelines or reasons are mostly unknown. For example, raised voices were not admitted as shouting in the corpus construction process of <cit.>. In <cit.>, when the sound pressure level difference between a speaker's normal and shouted speech instances did not exceed a predefined threshold, the speaker's utterances were rerecorded. A recording engineer also instructed the speaker to repeat the utterance until they clearly recognized that the speaker was shouting. In our study, we avoided the influence of any single person's subjectivity by asking crowdsourcing workers to rate the intensity of each shouted speech sample, and we determined the shout types based on the speaker's intentions. Furthermore, where most conventional corpora remain undisclosed, we have made our corpus available on the web. § RITSUMEIKAN SHOUT CORPUS This section describes the details of the creation of our corpus. 
We first listed a set of sentences to be uttered (see Section <ref>). Then, speakers uttered each sentence under our instruction (see Section <ref>). Finally, each utterance was evaluated for the shout intensity via crowdsourcing (see Section <ref>). §.§ Listing sentences to be uttered Corpus developers usually provide speakers with “scripts” to smoothly conduct recording experiments. Similarly, we prepared a set of sentences to be uttered as follows. We first asked a group of five graduate students (four male and one female) who were engaged in spoken language research to list 55 possible sentences that people could shout in general. These candidate sentences were then presented to five male undergraduate students who did not have specialized knowledge of spoken language. They were asked whether each sentence is appropriate for shouting, and most students disagreed on only two sentences. We eliminated these two sentences from the candidate list, leaving 53 sentences. Shouts can occur in various situations, and it is desirable for surveillance systems to be able to discriminate between screams and cheers. Therefore, we asked the five undergraduate students to evaluate whether each sentence among the candidates could be uttered in a hazardous situation. Specifically, for each of the 53 sentences, they were asked to select one impression from among three classes: “sentence specific to highly hazardous situations (hereinafter, the H class),” “sentence specific to less hazardous situations (hereinafter, the L class),” and “sentence that is difficult to classify into hazardous or less hazardous (hereinafter, the H/L class).” Based on these responses, we labeled each sentence with the class that received the largest number of votes among the three classes. If two classes tied for first place, the sentence was classified as belonging to the H/L class. For each class, the 53 sentences were sorted in descending order based on the number of votes, and only the top-ranked sentences were extracted; specifically, we selected 20 sentences for the H class (e.g., “help,” “shut up,” etc., in Japanese), 20 sentences for the L class (e.g., “go,” “yes,” etc., in Japanese), and five sentences for the H/L class (e.g., “really?,” “hey,” etc., in Japanese). Finally, five vowels were added to the H/L-class sentences, resulting in a list of 50 sentences for utterance. The RISC webpage lists all the sentences. §.§ Speech recording We recruited 50 graduate and undergraduate students (29 male and 21 female) and asked them to utter the 50 sentences in two different utterance styles: normal and shouted. After explaining the purpose and use of the recordings to the participants, we obtained informed consent from each participant. All speech samples were recorded in a studio with the characteristics described in Table <ref>-A. The speaker position was 0.5 m away from the microphone installed at the center of the studio, and the microphone's vertical position was the same as that of the speaker's mouth. Table <ref>-B lists the recording equipment and conditions. We fixed the input level during recording so as to prevent clipping even at 110 dBA. Before the main recording, each participant conducted a 10-minute test recording to practice their utterances. During the main recording, each participant first uttered the 50 sentences as normal speech and then shouted the same sentences. We provided a 3-minute break, including rehydration, every 25 sentences during the normal speech recordings. 
A similar intermission was given every ten sentences during the shouted speech recordings to allow the participants to rest their throats. We instructed the speakers to imagine the situations in which they were shouting when shouting the sentences in the H and L classes. However, we gave no special instructions for the H/L-class sentences; the speakers were allowed to shout freely. We also provided no example or objective criteria for including emotion in the utterances, and the speakers were allowed to act out shouting situations that they considered suitable for a given type of utterance in the H, L, and H/L classes. During the recording experiments, speech was rerecorded only when the speaker wished it, the speaker misspoke a sentence, or a recording accident occurred. A total of 5,000 speech samples were collected, including 2,500 instances each of normal and shouted speech. Furthermore, we divided all speech samples in RISC into the following four classes: (i) Normal: 2,500 normal speech samples, consisting of utterances of all 50 sentences; (ii) Shout-H: 1,000 shouted speech samples, consisting of utterances of the 20 sentences in the H class; (iii) Shout-L: 1,000 shouted speech samples, consisting of utterances of the 20 sentences in the L class; and (iv) Shout-H/L: 500 shouted speech samples, consisting of utterances of the five vowels and five sentences in the H/L class. These can be used for shout type classification. §.§ Scoring the shout intensity of each speech sample We crowdsourced a listening experiment to add shout intensity ratings to the shouted speech samples. First, we randomly shuffled the 2,500 shouted speech samples in the dataset and divided them into 125 subsets, each containing 20 speech samples. The maximum amplitude of each speech sample was normalized to 30,000 to avoid sound-pressure-level-based judgment. There was a 200 ms nonspeech interval before and after each speech sample. To guard against the participation of insincere, unreliable workers who would affect the labeling quality, one dummy speech sample was mixed in the 20 shouted speech samples in each subset. This dummy sample was a normal speech utterance of “hello” in Japanese by a male speaker. Thus, a single crowdsourcing task comprised 21 speech ratings, in which one of the speech samples was a dummy. Workers who judged the dummy sample to be a shout were considered spammers. Each worker was allowed to participate in the experiment no more than three times, and a different subset was assigned for evaluation each time. The number of unique workers who participated in this listening experiment was 693. We asked the workers to wear headphones or earphones and to rate the shout intensity of each speech sample on a seven-point scale from 1 (not a shout at all) to 7 (very shout-like). The following procedure was applied to collect ten high-quality ratings per speech sample. We first assigned 12 workers to a single task. A worker who scored two or higher for the dummy speech sample in each task was considered a spammer, and all responses from that person were deleted. For subsets that received 11 or more ratings, we randomly selected ten of those ratings. Each speech sample can thus be assigned a single shout intensity score by averaging the ten selected ratings. In summary, RISC contains 2,500 shouted speech samples that have shout intensity ratings ranging from 1 to 7 in addition to 2,500 normal speech samples. 
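The spammer filtering and rating aggregation described above can be expressed in a few lines of pandas. This is only an illustrative sketch: the file name and column names are hypothetical, not the actual format in which the crowdsourcing responses were stored.

import pandas as pd

# Hypothetical table of raw responses: one row per (worker, sample) rating.
ratings = pd.read_csv("crowd_ratings.csv")  # columns: worker_id, sample_id, is_dummy, score

# Workers who rate the normal-speech dummy as 2 or higher are treated as spammers.
spammers = ratings.loc[(ratings.is_dummy == 1) & (ratings.score >= 2), "worker_id"].unique()
clean = ratings[~ratings.worker_id.isin(spammers) & (ratings.is_dummy == 0)]

# Randomly keep ten ratings per shouted sample and average them into one intensity score.
intensity = clean.groupby("sample_id")["score"].apply(
    lambda s: s.sample(n=min(len(s), 10), random_state=0).mean())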
Figure <ref> shows scatter plots of the shout intensity ratings obtained in the above listening experiment. The dots in the figure represent the average of ten workers' ratings for (a) each speaker and (b) each sentence. Specifically, Figure <ref> (a) shows the scores of the 50 speech samples uttered by each speaker, where `f' and `m' in the speaker indexes on the horizontal axis represent female and male speakers, respectively. On the other hand, Figure <ref> (b) shows the scores of the 50 speakers for each sentence. The horizontal axis is the sentence index, with 01–05 representing vowel sentences and 06–10, 11–30, and 31–50 representing sentences in the H/L class, L class, and H class, respectively. The speaker-specific results in Figure <ref> (a) show that speakers f1, m8, and m27 received low scores overall, while speakers f2, f21, and m6 obtained scores higher than the others. This indicates that the listeners' perception of the intensity of the shouts varied greatly depending on the speaker. In the results by sentence in Figure <ref> (b), although the intensities of all sentences tended to vary uniformly, the scores for sentences in the H class tended to be higher than those for the other sentences. This could be because the linguistic information of these sentences influenced either the listeners or the speakers who uttered the sentences while imagining being in a highly-hazardous situation. § SHOUT RECOGNITION This paper aims to provide not only a corpus but also comprehensive benchmarks on that corpus. To this end, this section describes a deep approach for shouted vs. normal speech classification and shout intensity prediction. First, we explain the speech features in the spectral and cepstral domains that are used in both conventional methods and the proposed method (see Section <ref>). Next, we provide the details of DNN architectures whose inputs are single features (see Section <ref>). Finally, we describe our method, in which the outputs of single-feature DNNs are concatenated to yield classification and intensity prediction results (see Section <ref>). §.§ Speech feature extraction For a given audio segment, we partitioned it into successive frames using a Hamming window with a length of 1,024 points (i.e., 64 ms) and a hop length of 512 points (i.e., 32 ms); subsequently, we obtained the features for every 20 frames. Figure <ref> summarizes the extraction of these speech features. It should be noted that neither conventional methods nor our method use the sound pressure level of the input speech as a speech feature for shout recognition, as this feature is highly dependent on the positional relationship between the speaker and the microphone. Below we provide the details of the MFCCs and the mel spectrogram, which are used in conventional methods <cit.> (hereinafter, conventional low-level features). Time series of MFCCs (tMFCCs): The MFCCs are typical cepstral features. Most conventional methods of shouted speech detection used MFCCs with dimensions ranging between 8 and 60 <cit.>. Following <cit.>, we extracted 30-dimensional MFCCs from each frame and concatenated the vectors over 20 frames, resulting in a 600-dimensional cepstral feature vector. Mel spectrogram: The mel spectrogram belongs to the spectral domain and has been used in recent studies pertaining to sound event detection <cit.>, with dimensions ranging between 25 and 40. We extracted a 30-dimensional mel spectrogram, whose number of dimensions is the same as that of the MFCCs. 
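For concreteness, the two conventional low-level features can be extracted with librosa roughly as follows. This is a sketch with an assumed input file, using the frame settings stated above (1,024-point Hamming window, 512-point hop, 30 dimensions) on 16 kHz audio.

import librosa

y, sr = librosa.load("shout.wav", sr=16000)       # assumed input file, resampled to 16 kHz
n_fft, hop = 1024, 512                            # 64 ms window, 32 ms hop at 16 kHz

# 30-dimensional MFCCs per frame; 20 consecutive frames give a 600-dimensional tMFCC vector.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=30, n_fft=n_fft,
                            hop_length=hop, window="hamming")
tmfcc = mfcc[:, :20].reshape(-1)                  # shape (600,)

# 30-dimensional mel spectrogram (often used on a log scale).
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=30, n_fft=n_fft,
                                     hop_length=hop, window="hamming")
log_mel = librosa.power_to_db(mel)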
Herein, we propose learning features that are suitable for shouted vs. normal speech classification instead of using the conventional features extracted as described above. The features used in this study (hereinafter, high-level features) are described below. Spectrogram: A spectrogram represents the temporal variation of a spectrum. Specifically, applying the short-time Fourier transform to a speech signal yields a 512-dimensional vector of the power spectrum for each frame, and concatenating the vectors of 20 frames results in a 10,240-dimensional spectrogram vector. Recent studies on sound event detection have used spectrograms as inputs to DNNs and demonstrated their descriptiveness for such target tasks <cit.>. Hence, we used this high-dimensional spectrogram to learn effective spectral features. Cepstrogram: Applying the inverse discrete Fourier transform to the log power spectrum yields a cepstrum, and the concatenation of the cepstra of multiple frames yields a cepstrogram. The cepstrogram represents the temporal variations in the vocal tract and vocal cords. We set the dimensionality of each cepstrum equal to that of the spectrogram, i.e., 512, resulting in a 10,240-dimensional cepstrogram vector. The performance of each feature was investigated experimentally. §.§ Network architecture We used CNN, GRU, and CNN–GRU models to analyze the acoustic and speech features. We trained these networks as classifiers and regressors using single features. Figure <ref> shows the architecture of each type of network, whose hyperparameters depend on the number of feature dimensions. The detailed settings are as follows: Each single-feature CNN model comprised three sets of convolutional and pooling layers followed by two fully connected (FC) layers, as shown in Figure <ref> (a). Each of these models treated a set of features collected over 20 frames as an image. Each convolutional layer contained a 5× 5 kernel with a stride of 1, a padding of 2, and 16 channels. The max pooling layers each contained a 5×1 kernel for our high-dimensional features or a 3×1 kernel for the conventional low-dimensional features. The layer parameters d_1, d_2, d_3, d_4, and d_5 in the figure were set to 512, 102, 20, 4, and 64 respectively, for high-dimensional input features and 30, 10, 3, 1, and 16 respectively, for low-dimensional input features. Each single-feature GRU model comprised a bidirectional GRU (BiGRU) layer and two FC layers, as shown in Figure <ref> (b). The input to each of these models was a time series of features from 20 frames. The layer parameters d_1 and d_2 in the figure were set to 1,024 and 64, respectively, for high-dimensional input features and 60 and 16, respectively, for low-dimensional input features. Each single-feature CNN–GRU model comprised three sets of convolutional and pooling layers followed by a BiGRU layer and two FC layers, as shown in Figure <ref> (c). Each of these models took feature images as inputs, and the output of the third max pooling layer was passed to the BiGRU layer as a time series of frame features. We set the parameters of the convolutional and pooling layers (i.e., d_1 to d_5 in the figure) to the same values as those in the single-feature CNNs. The remaining parameter, d_6, was set to 64 or 16 for high-dimensional or conventional low-dimensional features, respectively. Each network used rectified linear units (ReLUs) as activation functions in each layer. 
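For illustration, a single-feature CNN consistent with the hyperparameters listed above can be sketched in PyTorch as follows. This is only a rough reconstruction, not the authors' exact implementation: the placement of the activations and the task-specific output head are assumptions.

import torch
import torch.nn as nn

def conv_block(c_in):
    # 5x5 convolution (stride 1, padding 2, 16 channels) followed by 5x1 max pooling
    return nn.Sequential(nn.Conv2d(c_in, 16, kernel_size=5, stride=1, padding=2),
                         nn.ReLU(),
                         nn.MaxPool2d(kernel_size=(5, 1)))

class SingleFeatureCNN(nn.Module):
    """Sketch of the single-feature CNN for a high-dimensional input, e.g. a
    512 x 20 spectrogram or cepstrogram treated as a one-channel image."""
    def __init__(self, feat_dim=512, n_frames=20, n_out=1):
        super().__init__()
        self.features = nn.Sequential(conv_block(1), conv_block(16), conv_block(16))
        pooled = feat_dim // 5 // 5 // 5          # 512 -> 102 -> 20 -> 4
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(16 * pooled * n_frames, 64), nn.ReLU(),
                                  nn.Linear(64, n_out))   # task-specific last layer

    def forward(self, x):                          # x: (batch, 1, feat_dim, n_frames)
        return self.head(self.features(x))

out = SingleFeatureCNN()(torch.randn(8, 1, 512, 20))
print(out.shape)                                   # torch.Size([8, 1])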
The structure of the last layer in Figure <ref> and the loss function both differed between the speech type classification task and the shout intensity prediction task. §.§ Spectral–cepstral fusion for classification and regression Our deep spectral–cepstral fusion approach uses features from both domains. Figure <ref> shows our DNN architecture, comprising two single-feature networks as described in Section <ref> and an FC layer. First, we pretrained the single-feature-based networks using either spectral or cepstral features. Subsequently, we concatenated the outputs from the last ReLU layers of these two single-feature networks and input them into the FC layer. The number of dimensions of the concatenated features, d, was 128 for high-dimensional features and 32 for low-dimensional ones. The concatenated features were then passed to the last layer to obtain the final classification-or-prediction result. We fine-tuned the entire network using a training dataset, resulting in a feature extractor specific to either shouted vs. normal speech classification or shout intensity prediction. § RECOGNITION RESULTS We conducted three experiments using RISC. In Experiment 1, the test speech samples were classified into two classes: normal and shouted speech. In Experiment 2, the input speech samples were classified into four categories: normal speech (Normal) and three types of shouted speech (Shout-H, Shout-L, and Shout-H/L). In Experiment 3, we predicted the shout intensity shown in Figure <ref>. Experiment 1 focused on the general task of conventional shouted speech detection problems, while Experiments 2 and 3 focused on the detection of urgent and critical situations considering the demands of practical surveillance systems. §.§ Common settings Throughout the experiments, each speech sample in the corpus was downsampled from a sampling frequency of 48 kHz to 16 kHz. To consider different noise conditions in the tests, we used NOISE-X92 <cit.> to add factory noise at the following eight signal-to-noise ratios (SNRs): ∞, 20, 10, 5, 0, -5, -10, and -20 dB. We implemented the networks shown in Figures <ref> and <ref> using PyTorch. All networks were trained using the Adam optimizer <cit.> with an initial learning rate of 0.0001 and momentum parameters of 0.9 and 0.999 on two NVIDIA RTX A6000 GPUs. The batch size was 256, and 100 epochs were used for training. Fivefold cross-validation was performed by partitioning the corpus into 40 training-verification speakers and ten test speakers. In addition to the features described in Section <ref>, we also tested MFCCs and their second derivatives, MFCCs_ΔΔ, as used in <cit.> and the original network architecture of the cited work. As performance measures, we used the F1-score, the weighted F1-score, and the root mean square error (RMSE) for the binary classification, the four-class classification, and the intensity prediction tasks, respectively. §.§ Experiment 1: Binary classification We regarded the 2,500 normal speech samples and the 2,500 shouted speech samples as negative and positive examples, respectively. The last layer in Figures <ref> and <ref> was designed as a combination of an FC layer, FC (1), and a sigmoid function, Sigmoid (1); this layer classified the input speech as shouted speech if the output from the sigmoid function was greater than 0.5 and as normal speech otherwise. The mean squared error (MSE) was used as the loss function to train the network. 
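As an illustration of the fusion scheme and of the Experiment 1 setup, a minimal PyTorch sketch is given below. The two branch networks are placeholders standing in for the pretrained spectral and cepstral feature extractors, and the training procedure is reduced to a single fine-tuning step.

import torch
import torch.nn as nn

class SpectralCepstralFusion(nn.Module):
    """Concatenate the 64-dim penultimate activations of two single-feature
    networks and feed them to the final fully connected layer."""
    def __init__(self, spec_net, ceps_net, feat_per_branch=64):
        super().__init__()
        self.spec_net, self.ceps_net = spec_net, ceps_net
        self.head = nn.Sequential(nn.Linear(2 * feat_per_branch, 1), nn.Sigmoid())

    def forward(self, spec_x, ceps_x):
        z = torch.cat([self.spec_net(spec_x), self.ceps_net(ceps_x)], dim=1)  # (batch, 128)
        return self.head(z)

# Placeholder branches; in practice these are the pretrained CNN/GRU feature extractors.
branch = lambda: nn.Sequential(nn.Flatten(), nn.Linear(512 * 20, 64), nn.ReLU())
model = SpectralCepstralFusion(branch(), branch())

# One fine-tuning step for the binary shouted-vs-normal task with the MSE loss.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
spec, ceps = torch.randn(8, 512 * 20), torch.randn(8, 512 * 20)
labels = torch.randint(0, 2, (8, 1)).float()
loss = nn.MSELoss()(model(spec, ceps), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
is_shout = (model(spec, ceps) > 0.5).int()        # threshold at 0.5 as in Experiment 1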
Table <ref> shows the comprehensive evaluation results obtained with the different types of features and network architectures under the eight SNR conditions, and the average F1-scores are provided as well. The symbol “+” in the table represents the use of the two corresponding features in a fusion network of the form shown in Figure <ref>. Among the network architectures, the CNNs achieved higher F1-score than the other architectures. Focusing on the performance of the single features, we find that the high-level features (i.e., spectrogram or cepstrogram features) achieved better F1-score than the conventional low-level features in the same domain (i.e., the mel spectrogram or MFCCs). In particular, with a decrease in the SNR, the conventional low-level features suffered a sharp decrease in the F1-score, more so than our high-level features. §.§ Experiment 2: Four-class classification Next, we experimented with a four-class classifier using the following labels: Normal, Shout-H, Shout-L, and Shout-H/L. The last layer of the networks consisted of an FC layer, FC (4), and a softmax function Softmax (4). The largest output from the softmax function indicated the classification result. Cross-entropy was used as the loss function. Table <ref> summarizes the weighted F1-scores of the classification results for normal speech and three types of shouted speech. Experiment 2 addressed four-class classification, which is more difficult than the task addressed in Experiment 1, and the weighted F1-score decreased overall. However, Table <ref> shows a clear tendency for our high-dimensional features to show superior performance compared to the low-dimensional features. Furthermore, the combination of spectral and cepstral features (i.e., “Spectrogram + Cepstrogram”) performed the best except for an SNR of -20 dB. Regarding the classification network, higher weighted F1-score were achieved at SNRs above 10 dB and below 5 dB by GRU and CNN–GRU models, respectively. To analyze the classification results in more detail, Figure <ref> shows the confusion matrix for the CNN–GRU model with an SNR of 0 dB and Spectrogram + Cepstrogram features. We can see in this matrix that the normal speech (Normal) could be separated from the shouted speech (Shout-H, Shout-L, and Shout-H/L) with more than 80% accuracy. On the other hand, the levels of discrimination accuracy for shouted speech in the three shout classes, i.e., Shout-H, Shout-L, and Shout-H/L, were 52.6%, 47.5%, and 46.6%, respectively. These findings reflect the difficulty of classifying shout types based only on acoustic features. In future work, we should investigate whether the linguistic information that can be obtained through automatic speech recognition can improve the shout type classification performance. §.§ Experiment 3: Shout intensity prediction Finally, we trained a regressor using the 2,500 shouted speech samples collected for all 50 sentences and their shout intensities. The last layer of each network consisted of only an FC layer, FC (1). Since the ratings ranged from 1 to 7, we bounded the outputs of FC (1) to this range. We used the MSE as the loss function. Table <ref> reports the RMSEs between the actual and predicted shout intensity values. We can see from the results that the prediction error with the high-dimensional features was reduced compared with that obtained with the conventional low-dimensional features. Among all network architectures, the CNNs achieved the lowest RMSEs under most SNR conditions. 
The CNN model with Spectrogram + Cepstrogram features achieved the best performance averaged over all SNRs. Figure <ref> presents a scatter plot produced for the CNN model with Spectrogram + Cepstrogram features that shows the relationship between the actual and predicted values. Although a positive correlation is evident, there is still room for improvement in the prediction accuracy even when we apply feature learning based on the spectral and cepstral domains. These results demonstrate that our new corpus RISC presents a challenging task for research on shout detection. § CONCLUSION This paper has presented RISC, a new corpus comprising diverse shouted speech samples, such as angry shouts, screams, and cheers, along with their shout type and intensity information. We have described a detailed pipeline for corpus construction, which has mostly not been specified to date in the literature on shouted speech detection. To provide a comprehensive performance comparison between deep approaches as a benchmark, we performed experiments focusing on two speech type classification tasks and an intensity prediction task. From the results achieved using various combinations of network architectures and speech features, we observed that feature learning based on spectrograms and cepstrograms achieved high performance on all three tasks, no matter which network architecture is used. We also found that shout type classification and intensity prediction, which have not been addressed in previous studies, are still challenging even for the high-dimensional feature learning approach. In future work, we should improve the performance on these tasks by developing effective deep architectures. Another possible strategy is to introduce linguistic information obtained through automatic speech recognition. Thus, toward the construction of sophisticated audio surveillance systems, research on shouted speech detection needs to be integrated with natural language processing. § ACKNOWLEDGMENTS This work was supported by JSPS KAKENHI Grant Number JP21K14381. This study was approved by the research ethics committee of Ritsumeikan University (permission number: BKC-LSMH-2021-081). IEEEtran
http://arxiv.org/abs/2306.10977v1
20230619143652
Prediction model for rare events in longitudinal follow-up and resampling methods
[ "Pierre Druilhet", "Mathieu Berthe", "Stéphanie Léger" ]
stat.ME
[ "stat.ME", "cs.LG", "stat.AP", "stat.ML", "62P10" ]
Prediction model for rare events in longitudinal follow-up and resampling methods Mathieu Berthe ^1 & Pierre Druilhet ^2 & Stéphanie Léger ^3 ^1,2,3 Université Clermont Auvergne Laboratoire de Mathématiques Blaise Pascal UMR 6620 - CNRS Campus des Cézeaux 3, Place Vasarely TSA 60026 - CS 60026 63178 Aubière Cedex ^1 [email protected] ^2 [email protected] ^3 [email protected] We consider the problem of model building for rare-event prediction in longitudinal follow-up studies. In this paper, we compare several resampling methods to improve standard regression models on a real-life example. We evaluate the effect of the sampling rate on the predictive performances of the models. To evaluate the predictive performance of a longitudinal model, we consider a validation technique that takes time into account and corresponds to the actual use in real life. Keywords: rare events, longitudinal follow-up, oversampling, undersampling, SMOTE, ensemble-based methods, logistic regression. § INTRODUCTION Prediction models for rare events appear in many research fields such as economics <cit.>, politics <cit.>, fraud detection <cit.> or bank regulation <cit.>. Modeling and predicting binary rare events present several difficulties. A strong imbalance between events and non-events induces biased estimates and poor predictive performance, usually underestimating the probability of event occurrences. In recent years, several strategies have been proposed to reduce misclassification. For example, <cit.> propose an explanatory logistic regression model with bias correction in a case-control study. <cit.> have developed a new regression model based on extreme value theory. More recently, <cit.> improve the learning function in SVM by a low-cost post-processing strategy. Another way to improve the predictive performance of a model with rare events is to artificially rebalance the dataset by resampling methods. For example, oversampling methods artificially create new observations in the minority class, whereas undersampling methods delete observations in the majority class. Hybrid methods combine both oversampling and undersampling. The choice of the resampling rate, that is, the final ratio between events and non-events, is a crucial point for improving the predictive performance of the model. It is known that the optimal rate is highly dependent on the dataset <cit.>. Furthermore, resampling methods induce additional randomness in the dataset. The most common way to reduce this extra variability is to use aggregation methods <cit.>. Other strategies to improve classifiers with rare events have been considered, such as weighting training instances <cit.> or using different misclassification costs for minority and majority events <cit.>. The aim of the paper is to compare several resampling and aggregation methods on a real-life longitudinal follow-up study. We discuss the way to evaluate predictive performance in the case of longitudinal studies and then choose the optimal sampling rate adapted to our dataset. In Section <ref>, we review resampling and ensemble-based methods. We also discuss the way to evaluate predictive performance adapted to longitudinal follow-up. In Section <ref>, we compare several strategies applied to a real-life example: we followed a soccer team for one year and aimed to evaluate the risk of muscle injury before each match. We discuss the crucial choice of the sampling rate and the effect of aggregation methods.
We also show that SMOTE methods <cit.> applied to our dataset perform poorly. § PREDICTION MODELS AND SAMPLING METHODS In this section, we present several resampling methods combined with aggregation to improve the predictive performance of a logistic regression. §.§ Standard logistic regression Here, we recall the basics of logistic regression. For an individual i (i=1,...,n), let x_i be the (k+1)-vector of the k explanatory variables plus the constant, and let y_i ∈{ 0,1} be the binary response, which follows a Bernoulli distribution with parameter π_i=P(y_i=1 | x_i). In the standard logistic regression, it is assumed that π_i=1/(1+e^-x_i' β) where x' is the transpose of x and β'=(β_0,β_1,...,β_k) is the vector of unknown parameters, usually estimated by maximum likelihood (see e.g. <cit.>). The asymptotic variance of β̂ is V(β̂)=[∑_i=1^nπ_i(1-π_i) 𝐱_i𝐱_i ^']^-1. For a new individual with covariate vector x, the probability of the event y=1 is predicted by π(x)=1/(1+e^-x'β)=ℙ(y=1 | x). When the dataset contains few events, say less than 5%, it is known that logistic regression underestimates the probability of events, leading to poor predictive performance (see <cit.>). §.§ Balancing an unbalanced dataset To overcome the drawbacks induced by unbalanced datasets, several sampling methods can be used to artificially rebalance the dataset. Several resampling methods on real data are compared in <cit.>. <cit.> show that oversampling is better than undersampling, and <cit.> that random oversampling or undersampling methods substantially improve the predictive performance of the models, so that more sophisticated oversampling or down-sizing approaches appear unnecessary. All these studies show that the best resampling method is highly dependent on the dataset. In this section, we review the most common sampling methods, which can be used alone or combined. §.§.§ Undersampling methods The first way to rebalance an unbalanced dataset is to reduce the number of observations in the majority class (non-events). A random undersampling with rate r, 0<r<1, creates a new dataset by removing at random from the initial dataset a proportion r of observations from the majority class. If r=0, then all the observations of the majority class are kept. If r=0.7, then 70% of the observations of the majority class are removed. In the case of very rare events, <cit.> propose to use case-control designs <cit.>. This strategy is equivalent to randomly selecting one non-event for every event, resulting in a completely balanced dataset. In that case, if the proportion of events is p, then the rate of the undersampling is r=(1-2p)/(1-p). Another, more sophisticated strategy has been proposed in <cit.>: for each event, the idea is to remove a non-event that forms a Tomek link. <cit.> considers situations where Tomek link methods do not guarantee a performance gain. The main drawback of undersampling methods is the loss of information when the number of removed observations is large. In Section <ref>, we consider aggregated methods that limit this loss of information. §.§.§ Oversampling methods In contrast to undersampling methods, oversampling methods artificially increase the number of observations in the minority class (events). A random oversampling with rate (a:b) creates new observations by duplicating at random observations in the minority class until there are a non-events for b events in the new dataset. An oversampling (1:1) results in a completely balanced dataset.
An oversampling (2:1) results in a dataset with 2 non-events for 1 event. SMOTE <cit.> is a more sophisticated method that creates synthetic observations in the minority class as follows: for each event observation, choose at random one of its k nearest neighbors that belong to the minority class, with k fixed. The new synthetic observation is chosen at random between these two observations. It is also possible to reiterate the process to increase the oversampling rate. Figure <ref> shows the effect of SMOTE with k=2 and with one synthetic observation generated per event. §.§.§ Hybrid sampling It is known that these methods have some drawbacks. Random undersampling can discard potentially useful data, whereas random oversampling creates exact copies of existing instances, which may induce overfitting. To mitigate these drawbacks, a solution is to mix undersampling and oversampling methods. For example, a random undersampling method with rate c combined with a (a:b)-oversampling method consists in removing at random a proportion c of the non-events and then performing an oversampling to obtain a non-events for b events. As a remark, random over- and undersampling methods can be seen as weighted logistic regressions <cit.> where the weights are random. For the resampled dataset, the log-likelihood of the logistic regression can be written: ln L_w(β | 𝐲) = ∑_i: y_i=1 w_i ln(π_i) + ∑_i: y_i=0 w_i ln(1-π_i) where the weight w_i is the number of replications of x_i for y_i=1 in the random oversampling process, and w_i ∈{0,1} for y_i=0 in the random undersampling process. §.§ Ensemble-based methods Each sampling method described above introduces additional randomness in the dataset and therefore more variability in the predictions. Ensemble-based methods are the most common way to reduce this variability. The idea is to create K datasets from the same resampling scheme and to aggregate the predictors. Therefore, for a new individual i with covariate x_i, the predicted probability of an event π_i is given by π_i =1/K∑_k=1^Kπ_i^[k] , where π_i^[k] is the predictor obtained from the k^th dataset. The choice of K will be discussed in Section <ref>. As a variant, when using a pure oversampling method, the non-events may be replaced by K bootstrap samples, similarly to Bagging <cit.>. In the same way, when using a pure undersampling method, the events may be replaced by a bootstrap sample of them. The effects of this bootstrap variant on the aggregated predictors are displayed in Table <ref>. §.§ Predictive performance evaluation in longitudinal follow-up To evaluate the predictive performance of a model, training and test datasets should be chosen carefully. In longitudinal follow-up studies, events are highly dependent on the past and change the future. In this context, it is impossible to use standard validation strategies such as cross-validation or a random split of the dataset into learning and test datasets. Indeed, with such strategies, the risk is to confuse causes and consequences and to overestimate predictive performance. Therefore, it is more natural to use a longitudinal strategy (see Fig. <ref>) that corresponds to the way the models are used in real life: at time t, we only use previous information to predict the risk π_ti of an event for individual i, and then we compare our prediction with the real observation y_ti. At the end, we have a collection of (π̂_ti,y_ti), t=1,...,T and i=1,...,I.
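A schematic implementation of this longitudinal validation is given below. It is only a sketch: the file name and column names are hypothetical, the model is a plain scikit-learn logistic regression, and in practice the resampling and aggregation steps described above would be applied at each refit.

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical long-format data: one row per (time, individual) with covariates and outcome y.
df = pd.read_csv("follow_up.csv").sort_values("time")
covariates = [c for c in df.columns if c not in ("time", "individual", "y")]

records = []
for t in sorted(df.time.unique()):
    train, test = df[df.time < t], df[df.time == t]
    if train.y.nunique() < 2:                 # need both events and non-events to fit the model
        continue
    model = LogisticRegression(max_iter=1000).fit(train[covariates], train.y)
    pi_hat = model.predict_proba(test[covariates])[:, 1]
    records.append(pd.DataFrame({"time": t, "individual": test.individual.values,
                                 "pi_hat": pi_hat, "y": test.y.values}))

preds = pd.concat(records)                    # the collection of (pi_hat_ti, y_ti) pairs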
The usual way to compare the ability of several models to predict a binary response is to compare their ROC curves, AUCs, or Peirce indices. We recall that a ROC curve is a parametric curve defined as follows: for a given threshold 0≤γ≤1, we predict y_ti by ŷ_ti=0 if π_ti<γ and ŷ_ti=1 if π_ti≥γ. Then, we compare the predicted response ŷ_ti with the real outcome y_ti. The sensitivity and the specificity, which depend on γ, are defined by sensitivity(γ)=TP/(TP+FN) and specificity(γ)=TN/(TN+FP), where TP, TN, FP, and FN are the numbers of true positives, true negatives, false positives, and false negatives. For example, TP=#{y_ti=1 , ŷ_ti=1}. The ROC curve is therefore the parametric curve {(sensitivity(γ), 1-specificity(γ)) ; 0≤γ≤ 1}. As shown in <cit.>, the choice of an evaluation metric plays an important role in learning on unbalanced data. From the ROC curve, we can derive two global metrics: the area under the ROC curve (AUC) and the Peirce index (PI), defined by PI=max_{γ∈[0,1]}{sensitivity(γ) + specificity(γ) - 1}, which is particularly adapted to rare events. The Peirce index represents a good compromise between sensitivity and specificity. It can be shown that PI=1-d^*, where d^* is the Manhattan distance between the point (0,1) and its closest point on the ROC curve. It is also, up to a factor √(2), the Euclidean distance between the diagonal and the point of the ROC curve furthest from it. The model with the highest AUC or PI will be considered as the best predictive model. § COMPARISON OF RESAMPLING METHODS IN A REAL-LIFE LONGITUDINAL FOLLOW-UP In this section, we apply and compare the methods described in Section <ref> in a real-life situation. We followed a soccer team of the French Ligue 1 Championship during the 2018-2019 season. We aim to build a model that evaluates the individual risk of non-contact muscle injury for each player before each match. To build the model, we use the data collected during the 2015-2018 seasons and the data of the 2018-2019 season collected up to the match. A review of football player injury prediction methods can be found in <cit.>. Several predictive methods are compared in <cit.>. From <cit.>, the average incidence of muscle injuries for a player during a match is about 4%. In our dataset we observe a similar rate, so that non-contact muscle injuries can be considered as rare events. To evaluate the predictive performance of the model, we use the longitudinal validation described in Section <ref>. Before each match, we predict the risk of muscle injury for each player i based on all preceding observations. Then, we compare the prediction with the real outcome, that is, muscle injury or not of player i during the match. Of course, players that do not play the match are not considered. During the season, 50 matches were played and 16 non-contact muscle injuries were observed. To train the model before the first match, we use the data collected during the seasons 2015-2018. Then, iteratively, we use the data collected until the day before each match of the season 2018-2019 to predict the probability of injury for the next match. §.§ The dataset The dataset includes 42 soccer players, on which data are collected daily and during matches. After each match, the response variable is observed: y=1 if an injury is observed and y=0 otherwise. For each player, we have the following covariates, which are considered in the literature as risk factors. - Cumulative workload during training and matches over 21 days. - Cumulative playing time over 21 days.
- Recovery time: number of days since the last match. - Risk of relapse: ratio between the number of days of disability due to injury and the average number of days of disability in the team. It aims to quantify the risk of relapse after an injury. - Acceleration ratio: ratio between the number of accelerations performed over the 7 days preceding the match and the number of accelerations performed over the 21 days preceding the match. - Deceleration ratio: ratio between the number of decelerations performed over the 7 days preceding the match and the number of decelerations performed over the 21 days preceding the match. - Speed ratio: ratio between the average speed over the 7 days preceding the match and the average speed over the 21 days preceding the match. - Player ID: player identifier. Workload, cumulative playing time, and recovery time quantify the player's activity. The acceleration, deceleration, and speed ratios are used to assess the player's sport performance before the match. Another important covariate is the player ID. In usual longitudinal studies, the aim is to extrapolate the model to other individuals; therefore, individuals (here, players) are considered as random effects. In our case, we want to predict future observations on the individuals included in the study. Therefore, players are considered as fixed effects, which allows us to personalize the risk of injury. We do not consider interactions between factors since, in preliminary studies, they did not improve the predictive ability of the models, mainly due to overfitting. §.§ Comparison of resampling methods In this section, we compare the predictive performance of several resampling strategies applied to logistic regression. The performance metrics are evaluated on the 50 matches played during the 2018-2019 season, using the longitudinal validation described in Section <ref>. Several resampling methods are evaluated: undersampling alone, undersampling + bootstrap on the events, oversampling alone, oversampling + bootstrap on the events, and combined oversampling and undersampling. When several sampling strategies are combined, we first use undersampling, then oversampling or SMOTE. §.§.§ Effect of sampling rates on predictive performances Here, we evaluate the effect of the balancing rate on the AUC for random oversampling, SMOTE, and random undersampling methods applied to logistic regression. The results are displayed in Fig. <ref>. Each method is run 15 times. Then, we compute the average AUC over the runs, together with the corresponding standard deviation. Note that the initial dataset imbalance is (25:1), that is, 25 non-events for 1 event. For random oversampling (Fig. <ref>.a, red line), the average AUC increases from 0.72 to 0.77 when the sampling rate goes from (25:1) to (5:3). Then, the AUC decreases, probably due to overfitting on the events. For SMOTE (Fig. <ref>.a, blue line), the effect on the AUC is always negative. This is mainly due to events that are isolated in the covariate space, which create synthetic events in the middle of non-events: for example, in Fig. <ref>, two isolated events on the left induce two synthetic events in the middle of a cluster of non-events. In Fig. <ref>.b, we can see that undersampling methods slightly improve the average AUC for an undersampling rate between 0.2 and 0.3, with an AUC gain of about 0.03.
When the sampling rate is too large, say greater than 0.7 for our dataset, the predictive performance of the model worsens since too many non-event individuals are removed. §.§.§ Comparison of several pure and hybrid resampling methods Here, we compare the random oversampling and undersampling methods studied in Section <ref> with hybrid methods, SMOTE, and the plain logistic regression. Again, for each strategy, we run the model 15 times. We thus obtain an average AUC and Peirce index with the related standard deviations. Sensitivities and specificities are calculated for the run whose Peirce index is the closest to the average. The results are displayed in Table <ref> and, for some models, the ROC curves are displayed in Fig. <ref>. The plain logistic regression, i.e. without any additional resampling method, has an AUC equal to 0.72 and a Peirce index equal to 0.510, with a sensitivity of 0.75 and a specificity of 0.76. Random oversampling improves the predictive performance for a large range of sampling rates. For example, an oversampling rate of (5:3) gives an average AUC and Peirce index equal to 0.78 and 0.56; the sensitivity increases to 0.75, whereas the specificity slightly decreases to 0.75. As already seen in Section <ref>, SMOTE methods give poor results, and undersampling should be used with caution, only with a small removal rate. In conclusion, for our dataset, the resampling methods with the highest AUC and Peirce index are pure random oversampling (5:3), followed by hybrid undersampling 0.3 / oversampling (5:3). Note that the second method has a slightly lower average Peirce index (0.547) for the same average AUC. §.§.§ Ensemble-based methods Resampling methods add randomness to the output. Ensemble-based methods, described in Section <ref>, aim to stabilize the model and, in some situations, to improve the predictive performance, similarly to Bagging methods. There is no consensus about the right number of aggregations, which is usually between 20 and 100 for Bagging methods <cit.>, depending on the dataset. To evaluate the number of aggregations needed to stabilize the prediction for our dataset, we display, in Fig. <ref>, the AUC and Peirce index against the number of aggregations for two of the best models obtained in Section <ref>: the first one is a hybrid undersampling with r=0.3 and oversampling (5:3), and the second one is an undersampling with r=0.5 combined with a bootstrap sampling of the events. For the two models, the AUC is stabilized after 20 iterations (Fig. <ref>.a and <ref>.b), whereas the Peirce index needs more iterations to be stabilized (Fig. <ref>.c and <ref>.d). To save computing time, we now compare ensemble-based methods with 20 iterations for the models used in Section <ref>. The results are displayed in Table <ref>, lines 1-6, where the means and standard deviations of the AUC and Peirce index, as well as the sensitivity and specificity, are obtained in the same way as in Table <ref>. We omit SMOTE methods, which have shown poor results. It can be observed that the main effect of aggregation methods is to reduce the variability of the AUC and Peirce index. For example, for random undersampling with rate 0.3, the standard deviation of the AUC decreases from 0.063 to 0.005. For random oversampling (5:3), it decreases from 0.007 to 0.002. The effects of aggregation methods on the mean AUC and Peirce index depend on the resampling method. For undersampling, aggregation methods slightly improve the average AUC and Peirce index, whereas there is no significant effect for oversampling.
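The AUC and Peirce index values reported in this section are computed from the pairs (π̂_ti, y_ti) collected match after match by the longitudinal validation. A minimal sketch of this computation, assuming scikit-learn is available (the function name is ours):

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def evaluate_longitudinal(y_true, y_prob):
    # AUC and Peirce index computed from the (predicted risk, observed outcome) pairs
    # collected match after match by the longitudinal validation.
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    auc = roc_auc_score(y_true, y_prob)
    fpr, tpr, _ = roc_curve(y_true, y_prob)
    # Peirce index: max over thresholds of sensitivity + specificity - 1, i.e. the maximum of tpr - fpr.
    peirce = float(np.max(tpr - fpr))
    return auc, peirce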
In Table <ref>, lines 8-11, we have considered a bootstrap of the events when an undersampling method is used and a bootstrap sample of the non-events when an oversampling method is used. It is seen that the bootstrap has no significant effect on the mean AUC and Peirce index, but it increases their variability. In line 7 of the same table, we have performed a stratified bootstrap on the events and non-events. The predictive performance is better than that of the plain logistic regression but lower than that of over/undersampling or hybrid methods with optimized rates. Among all the models considered here, the best predictive models are the hybrid models undersampling 0.5 with oversampling (1:1), and undersampling 0.3 with oversampling (5:3). §.§.§ Longitudinal validation vs cross-validation The longitudinal validation described in Section <ref> corresponds to the way the model is used in practice. It is therefore the most relevant method to evaluate the predictive performance. Usual cross-validation methods, such as leave-one-out cross-validation (LOOCV), use future information to predict the outcome. For example, for our dataset analyzed with the plain logistic regression, i.e. without resampling methods, the AUC and Peirce indices obtained by LOOCV are equal to 0.84 and 0.665, whereas they are equal to 0.72 and 0.51 for the longitudinal validation. For the best strategy found in Section 3.3.2, i.e. 0.5 undersampling followed by oversampling (5:3) with aggregation, the AUC and Peirce index are equal to 0.85 and 0.672 for LOOCV and to 0.78 and 0.566 for the longitudinal validation. So, we can see that LOOCV overestimates the true predictive performance of the models. Another validation strategy consists in using the dataset based on the seasons 2015-2018 to train the model and the dataset of the season 2018-2019 to test the model <cit.>. This approach is relevant if it is not possible to update the model with fresh data or if we want to use the model for other individuals or players. However, in the case of individual follow-up, the model loses information from the near past. For example, with this validation approach, the AUC and Peirce index are equal to 0.650 and 0.25 for the plain logistic regression, and to 0.681 and 0.31 for the hybrid undersampling 0.5 / oversampling (5:3). We can see that this validation strategy tends to underestimate the predictive performance of the model as it is used in practice. § CONCLUSION We have shown how resampling methods can substantially improve predictive models for rare events. The best resampling method and the optimal sampling rate are specific to each dataset. Most often, they are calibrated by cross-validation. However, in the case of longitudinal follow-up, usual cross-validation methods tend to overestimate the predictive quality of the model. Therefore, it is important to use a validation method adapted to longitudinal follow-up. Pure random oversampling or hybrid under/oversampling with optimized sampling rates appear to be the most effective methods to improve a logistic regression for rare events. SMOTE was ineffective for our dataset structure, mainly due to isolated events in the space of explanatory variables. Moreover, ensemble-based methods and predictor aggregation reduce the effects of the variability of the resampling methods on the predictors. §.§ Acknowledgment The authors are grateful to Olivier Brachet (Innovation Performance Analytics) for having provided the dataset. §.§ Funding This research has been supported by the European Regional Development Fund and the Region Auvergne-Rhone-Alpes.
http://arxiv.org/abs/2306.04521v1
20230607153046
On large regular (1,1,k)-mixed graphs
[ "C. Dalfó", "G. Erskine", "G. Exoo", "M. A. Fiol", "N. López", "A. Messegué", "J. Tuite" ]
math.CO
[ "math.CO", "05C50, 05C20, 15A18, 20C30" ]
An (r,z,k)-mixed graph G has every vertex with undirected degree r, directed in- and out-degree z, and diameter k. In this paper, we study the case r=z=1, proposing some new constructions of (1,1,k)-mixed graphs with a large number of vertices N. Our study is based on computer techniques for small values of k and on the use of graphs on alphabets for general k. In the former case, the constructions are either Cayley or lift graphs. In the latter case, some infinite families of (1,1,k)-mixed graphs are proposed with diameter of the order of 2log_2 N. Keywords: Mixed graph, Moore bound, Cayley graph, Lift graph. Mathematics Subject Classification: 05C50, 05C20, 15A18, 20C30. § INTRODUCTION The relationship between vertices or nodes in interconnection networks can be undirected or directed, depending on whether the communication between nodes is two-way or only one-way. Mixed graphs arise in this case and in many other practical situations where both kinds of connections are needed. Urban street networks are perhaps the most popular ones. Thus, a mixed graph G=(V,E,A) has a set V=V(G)={u_1,u_2,…} of vertices, a set E=E(G) of edges, or unordered pairs of vertices {u,v}, for u,v∈ V, and a set A=A(G) of arcs, directed edges, or ordered pairs of vertices uv≡(u,v). For a given vertex u, its undirected degree r(u) is the number of edges incident to u. Moreover, its out-degree z^+(u) is the number of arcs emanating from u, whereas its in-degree z^-(u) is the number of arcs going to u. If z^+(u)=z^-(u)=z and r(u)=r for all u ∈ V, then G is said to be a totally regular (r,z)-mixed graph with whole degree d=r+z. The distance from vertex u to vertex v is denoted by dist(u,v). Notice that, when the out-degree z is not zero, the distance dist(u,v) is not necessarily equal to the distance dist(v,u). If the mixed graph G has diameter k, its distance matrices A_i, for i=0,1,…,k, have entries (A_i)_{uv}=1 if dist(u,v)=i, and (A_i)_{uv}=0 otherwise. So, A_0=I (the identity matrix) and A_1=A (the adjacency matrix of G). Mixed graphs were first considered in the context of the degree/diameter problem by Bosák <cit.>. The degree/diameter problem for mixed graphs reads as follows: Given three natural numbers r, z, and k, find the largest possible number of vertices N(r,z,k) in a mixed graph G with maximum undirected degree r, maximum directed out-degree z, and diameter k. For mixed graphs, an upper bound for N(r,z,k), known as a Moore(-like) bound M(r,z,k), was obtained by Buset, El Amiri, Erskine, Miller, and Pérez-Rosés <cit.> (also by Dalfó, Fiol, and López <cit.> with an alternative computation). The Moore bound for an (r,z)-mixed graph with diameter k is M(r,z,k)=A(u_1^{k+1}-1)/(u_1-1)+B(u_2^{k+1}-1)/(u_2-1), where u_1=(z+r-1-√(v))/2, u_2=(z+r-1+√(v))/2, A=(√(v)-(z+r+1))/(2√(v)), B=(√(v)+(z+r+1))/(2√(v)), and v=(z+r)^2+2(z-r)+1. This bound applies whether or not G is totally regular, but it is elementary to show that a Moore mixed graph must be totally regular. Thus, a Moore (r,z,k)-mixed graph is a graph with diameter k, maximum undirected degree r≥ 1, maximum out-degree z≥ 1, and order given by M(r,z,k). An example of a Moore (3,1,2)-mixed graph is the Bosák graph <cit.>, see Figure <ref>. Bosák <cit.> gave a necessary condition for the existence of a mixed Moore graph with diameter k=2.
Such graphs have the property that for any ordered pair (u,v) of vertices, there is a unique walk of length at most 2 between them. In general, there are infinitely many pairs (r,z) satisfying Bosák necessary condition for which the existence of a mixed Moore graph is not known yet. Nguyen, Miller, and Gimbert <cit.> proved the existence and unicity of some Moore mixed graphs of diameter 2. López, Miret, and Fernández, <cit.> proved that there is no Moore (r,z,2)-mixed graph when the pair (r,z) equals (3,3), (3,4), or (7,2). For diameter k ≥ 3, it was proved that mixed Moore graphs do not exist, see Nguyen, Miller, and Gimbert <cit.>. In the case of total regularity, this result also follows from the improved bound in Dalfó, Fiol, and López <cit.>, where it was shown that the order N of an (r, z)-regular mixed graph G with diameter k≥ 3 satisfies N≤ M(r,z,k)-r, where M(r,z,k) is given by (<ref>). In general, a mixed graph with maximum undirected degree r, maximum directed out-degree z, diameter k, and order N=M(r,z,k)-δ is said to have defect δ. A mixed graph with defect one is called an almost mixed Moore graph. Thus, the result in (<ref>) can be rephrased by saying that r is a lower bound for the defect of the mixed graph. In the case r=z=1, such a result was drastically improved by Tuite and Erskine <cit.> by showing that a lower bound δ(k) for the defect of a (1,1)-regular mixed graph with diameter k≥ 1 satisfies the recurrence δ(k+6)=δ(k)+f_k-1+f_k+4, where the initial values of δ(k), for k=1,…,6, are 0,1,1,2,3,5, and f_k are the Fibonacci numbers starting from f_0=f_1=1, namely, 1,1,2,3,5,8,13,21,… Alternatively, starting from δ(1)=0 and δ(2)=1, we have δ(k+2)=δ(k+1)+δ(k) if k+2≢1,2 (mod 6), and δ(k+2)=δ(k+1)+δ(k)+1, otherwise. For more results on degree/diameter problem for graphs, digraphs, and mixed graphs, see the comprehensive survey by Miller and Širáň <cit.>. For more results on mixed graphs, see Buset, López, and Miret <cit.>, Dalfó <cit.>, Dalfó, Fiol, and López <cit.>, Erskine <cit.>, Jørgensen <cit.>, López, Pérez-Rosés, and Pujolàs <cit.>, Nguyen, Miller, and Gimbert <cit.>, and Tuite and Erskine <cit.>. In this paper, we deal with (1,1,k)-mixed graphs, that is, mixed graphs with undirected degree r=1, directed out-degree z=1, and with diameter k. Our study is based on computer techniques for small values of k, and the use of graphs on alphabets for general k. In the former case, the constructions are either Cayley or lift graphs. In the latter case, some infinite families of (1,1,k)-mixed graphs are proposed with N vertices and diameter k of the order of 2log_2 N. Most of the proposed constructions are closely related to line digraphs. Given a digraph G, its line digraph LG has vertices representing the arcs of G, and vertex x_1x_2 is adjacent to vertex y_1y_2 in LG if the arc (x_1,x_2) is adjacent to the arc (y_1,y_2) in G, that is, if y_1=x_2. The k-iterated line digraph is defined recursively as L^kG=L^k-1(LG). Let K_d^+ be the complete symmetric digraph with d vertices with loops, and K_d+1 the complete symmetric digraph on d+1 vertices (in these complete graphs each edge is seen as a digon, or pair of opposite arcs). Then, two well know families of iterated line digraphs are the De Bruijn digraphs B(d,k)=L^k(K_d^+), and the Kautz digraphs K(d,k)=L^k(K_d+1). Both B(d,k) and K(d,k) have diameter k but De Bruijn digraphs have d^k vertices, whereas Kautz digraphs have d^k+d^k-1 vertices. See, for instance, Fiol, Yebra, and Alegre <cit.>, and Miller and Širáň <cit.>. 
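Since most of the constructions proposed below are closely related to line digraphs, the following small sketch may help fix ideas. It iterates the line-digraph operation starting from the complete symmetric digraph K_{d+1}, represented as a dictionary of out-neighbour lists (a representation of our choosing, not the paper's), and checks by breadth-first search that after j iterations one obtains a digraph with d^{j+1}+d^{j} vertices and diameter j+1, i.e. a Kautz digraph of the corresponding order.

from collections import deque

def line_digraph(G):
    # Vertices of LG are the arcs (x, y) of G; there is an arc (x, y) -> (y, z) whenever z is an out-neighbour of y.
    return {(x, y): [(y, z) for z in G[y]] for x in G for y in G[x]}

def diameter(G):
    # Diameter by breadth-first search from every vertex (assumes G is strongly connected).
    diam = 0
    for s in G:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in G[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        diam = max(diam, max(dist.values()))
    return diam

d, iterations = 2, 2
G = {u: [v for v in range(d + 1) if v != u] for u in range(d + 1)}   # K_{d+1}, each edge seen as a digon
for _ in range(iterations):
    G = line_digraph(G)
print(len(G), diameter(G))   # 12 vertices = 2^3 + 2^2, diameter 3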
§ SOME INFINITE FAMILIES OF (1,1,K)-MIXED GRAPHS In this section, we propose some infinite families of (1,1,k)-mixed graphs with exponential order. All of them have vertices with out-degree z=1. When, moreover, all the vertices have in-degree 1, we refer to them as (1,1,k)-regular mixed graphs. If we denote by f(r,z,k) the order of a largest (r,z,k)-mixed graph, which is upper bounded by the (exponential) Moore bound M(r,z,k), all the described graphs provide exponential lower bounds for f(1,1,k). Let us first give some basic properties of (1,1,k)-mixed graphs. It is readily seen that the Moore bound satisfies the Fibonacci-type recurrence M(1,1,k)=M(1,1,k-1)+M(1,1,k-2)+2, starting from M(1,1,0)=1 and M(1,1,1)=3. From this, or just applying (<ref>), we obtain that the corresponding Moore bound is M(1,1,k)=(1-2/√(5))((1-√(5))/2)^{k+1}+(1+2/√(5))((1+√(5))/2)^{k+1}-2. The obtained values for k=2,…,16 are shown in Table <ref>. Then, for large values of k, the Moore bound M(1,1,k) is of the order of M(1,1,k) ∼ (1+2/√(5))((1+√(5))/2)^{k+1} ≈ 1.8944·1.6180^{k+1}. §.§ The mixed graphs E(n) The first construction is the simplest one. Given n≥ 2, the graph E(n) is defined as follows. As before, label the Fibonacci numbers so that f_0 = f_1 = 1. Consider a Moore tree of radius n with base vertex u_0. The set of vertices at distance i from u_0 is referred to as the vertices at level i. There are f_{i+1} vertices at level i. We can partition these vertices into two sets: V_i contains the f_i vertices at level i incident through an arc from level i-1, and W_i contains the f_{i-1} vertices at level i incident through an edge from level i-1. To complete the graph, we must consider two cases, depending on whether f_n is even or odd. If f_n is even, then we add a matching among the vertices of V_n and add an arc from each vertex in level n to u_0. In this case, the diameter is 2n. Note that the maximum distance occurs from a level-1 vertex to a level-n vertex on the opposite edge (where the two edges are based on the two level-1 vertices). If f_n is odd, then we must modify the construction slightly. In this case, when we add a matching among the vertices of V_n, there is one vertex v_1 of V_n missed by the matching. So, we must add another vertex v_2, join this vertex to v_1 by an edge, and then add an arc from v_2 to the base vertex u_0. All other vertices at level n have arcs directly to u_0. In this case, the diameter is 2n+1, where the maximum distance occurs from a level-1 vertex (in the edge not containing v_1) to v_2. So, the graph E(n) has diameter 2n or 2n+1, and order M(1,1,n) or M(1,1,n)+1. This bound is very weak for small diameters, but at least it gives a first explicit construction yielding an exponential lower bound. In the following subsections, we show that we can do better. §.§ The mixed graphs F(n) Given n≥ 2, the (1,1,k)-mixed graph F(n) has vertices labeled a|x_1…x_n, where a∈{+1,-1}, x_i∈ℤ_3, and x_{i+1}≠ x_i for i=1,…,n-1. The adjacencies are as follows: (i) a|x_1x_2…x_n ∼ -a|x_1x_2…x_n (edges); (ii) a|x_1x_2…x_n → a|x_2x_3…x_n(x_n+a) (arcs). Thus, F(n) has 3·2^n vertices; it is an out-regular graph but not in-regular, since the vertices a|x_1x_2…x_n and a|x'_1x_2…x_n are both adjacent to a|x_2…x_n(x_n+a). The mixed graph F(3) is shown in Figure <ref>. It is easily checked that the mapping a|x_1x_2…x_n ↦ -a|x̄_1x̄_2…x̄_n, where 0̄=1, 1̄=0, and 2̄=2, is an automorphism of F(n). This is because x̄_n-a is the image of x_n+a under the bar operation. The diameter of the mixed graph F(n) is k=2n.
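Before turning to the proof, the statement can be checked computationally for small n. The sketch below (the tuple encoding is ours) builds F(n) directly from adjacencies (i)-(ii), treating each edge as a pair of opposite arcs for distance computations, and evaluates the diameter by breadth-first search.

from collections import deque
from itertools import product

def build_F(n):
    # Vertices (a, x) with a in {+1,-1} and x a word of length n over Z_3 with no two equal consecutive digits.
    vertices = [(a, x) for a in (+1, -1)
                for x in product(range(3), repeat=n)
                if all(x[i] != x[i + 1] for i in range(n - 1))]
    adj = {}
    for a, x in vertices:
        edge_neighbour = (-a, x)                           # adjacency (i): change the sign a
        arc_neighbour = (a, x[1:] + ((x[-1] + a) % 3,))    # adjacency (ii): shift and append x_n + a (mod 3)
        adj[(a, x)] = [edge_neighbour, arc_neighbour]
    return adj

def diameter(adj):
    # Same breadth-first search as in the previous sketch.
    diam = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        diam = max(diam, max(dist.values()))
    return diam

for n in (2, 3, 4):
    adj = build_F(n)
    print(n, len(adj), diameter(adj))   # the proposition above predicts 3*2^n vertices and diameter 2n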
Let us see that there is a path of length at most 2n from vertex =a|x_1… x_n to vertex =b|y_1… y_n. Taking into account the automorphism in (<ref>), we can assume that a=+1. Notice that, with at most two steps, depending on the values of a, x_n, and y_1 (at the beginning) or a, y_i, and y_i+1 (in the sequel), we can add a new digit of . Thus, in principle, we would need at most 2n steps but, possibly, one last step to fix the first digit to the one of (for example, b). However, in what follows, we show that the first two digits y_1 and y_2 can be `placed', so reaching a vertex of the form a'|… y_1y_2, with at most 3 steps. * If y_1=x_n, the first step is not necessary. * If y_1=x_n+1, go through the arc +1|x_1… x_n→ +1|x_2… x_ny_1. * If y_1=x_n-1 and y_2=y_1-1, go through the edge and two arcs +1|x_1… x_n∼ -1|x_1x_2… x_n→ -1|x_2… x_ny_1 → -1|x_3… x_ny_1y_2. * If y_1=x_n-1(=x_n+2) and y_2=y_1+1, go through the three arcs +1|x_1… x_n → +1|x_2… x_nx_n+1→ +1|x_3… x_n+1x_n+2 =+1|x_3… x_n+1y_1 → +1|x_4… x_ny_1y_2. Thus, to reach , we need at most 3+2(n-2)+1=2n steps. Finally, it is not difficult to find vertices that are at distance 2n. For instance, for n odd, go from =+1|0101… 0 to =-1|2020… 2; and, for n even, go from =+1|1010… 0 to =+1|2020… 0, §.§.§ A numeric construction An alternative presentation F[n] of F(n) is as follows: Given n≥ 1, let N'=3·2^n-1 so that the number of vertices of F[n] is 2N'. The vertices of F[n] are labeled as α|i, where α∈{1,2}, and i∈ℤ_N'. Let 1=2 and 2=1. Then, the adjacencies of F[n] defining the same mixed graph as those in (i) and (ii) are: α|i ∼ α|i α|i → α|-2i+α To show that both constructions give the same mixed graph, F[n]≅ F(n), define first the mapping π from the two digits x_1x_2 to ℤ_6 as follows π(01)=0, π(10)=1, π(12)=2, π(21)=3, π(20)=4, π(02)=5. Then, it is easy to check that, for n=2, the mapping ψ from the vertices of F(2) to the vertices of F[2] defined as ψ(a|x_1x_2)=α(a)|π(x_1x_2), where α(a)=a+3/2, is an isomorphism from F(2) to F[2]. (Note that α(-1)=1 and α(+1)=2). From this, we can use induction. First, let us assume that ψ' is an isomorphism from F(n-1) to F[n-1] of the form ψ'(a|x_1x_2… x_n-1)=α(a)|π'(x_1x_2… x_n-1), where the linear mapping α is defined as above, and π' is a mapping from the sequences x_1x_2… x_n-1 to the elements of ℤ_N', with N'=3· 2^n-1. Then, we claim that the mapping ψ from the vertices of F(n) to the vertices of F[n] defined as ψ(a|x_1x_2… x_n)=α(a)|-2·π'(x_1x_2… x_n-1)+α(x_n-x_n-1) (mod N), where N=3· 2^n, is an isomorphism from F(n) to F[n]. Indeed, since ψ' is an isomorphism from F(n-1) to F[n], we have that ψ'Γ=Γψ' and ψ'Γ^+=Γ^+ψ', where Γ and Γ^+ denote undirected and directed adjacency, respectively. Thus, from ψ'Γ(a|x1… x_n-1) =ψ'(-a|x_1… x_n-1)=α(-a)|π'(x_1… x_n-1), Γψ'(a|x_1… x_n-1) =Γ(α(a)|π'(x_1… x_n-1)=α(a)|π'(x_1… x_n-1), and ψ'Γ^+(a|x_1… x_n-1) =ψ'(a|x_2… x_n-1x_n-1+a) =α(a)|π'(x_2… x_n-1x_n-1+a), Γ^+ψ'(a|x_1… x_n-1) =Γ^+(α(a)|π'(x_1… x_n-1)) =α(a)|-2·π'(x_1… x_n-1)+α(a), we conclude that α(-a)=α(a) for every a∈{+1,-1} (as it is immediate to check), and π'(x_2… x_n-1(x_n-1+a))=-2·π'(x_1… x_n-1)+α(a). Now, we can assume that a=+1 (because of the automorphism (<ref>)), and let a'=x_n-x_n-1. Then, since clearly ψΓ=Γψ, edges map to edges, we focus on proving that the same holds for the arcs, that is, ψ^+Γ=Γψ^+. 
With this aim, we need to prove that the following two calculations, where we use (<ref>), give the same result: ψΓ^+(+1|x_1… x_n) =ψ(+1|x_2… x_nx_n+1) =2|-2·π'(x_2… x_n)+2, Γ^+ψ(+1|x_1… x_n) =Γ^+(-2·π'(x_1… x_n-1)+α(a') =2|4·π'(x_1… x_n-1)-2α(a')+2. The required equality follows since, from (<ref>) with a' instead of a, we have -2·π'(x_2… x_n) =2·π'(x_2… x_n-1(x_n-1+a'))=-2[-2·π'(x_1… x_n-1)+α(a')] =4·π'(x_1… x_n-1)-2α(a'). In Figure <ref>, every vertex has been labeled according to both presentations. Using this presentation, we extend (and again prove) Proposition <ref>. The diameter of F(n) is k=2n. More precisely, there is a path of length n or n-1 between any pair of edges α|i-α|i and α'|i'-α'|i'. Moreover, there is a path of length between n-1 and 2n between any pair of vertices. Let us consider a tree rooted at a pair of vertices of an edge, _1=1|i and _2=2|i, and suppose the n=2r+1 is odd (the case of even n is similar). Then, * The vertices at distances 1,2 of _1 or _2 are α|-2i+1, α|-2i+2 with α=1,2. * The vertices at distances 3,4 of _1 or _2 are α|4i, α|4i-1, α|4i-2 and α|4i-3 with α=1,2. * The vertices at distances 5,6 of _1 or _2 are α|-8i+1, α|-8i+2, …, α|-8i+8 with α=1,2. ⋮ * The vertices at distances 2n-3,2n-2 of _1 or _2 are α|2^n-1+r with r=0,-1,…,-2^n-1+1 and α=1,2. * The vertices at distances 2n-1,2n of _1 or _2 are α|-2^n+r with r=1,2,…,2^n and α=1,2. See Figure <ref> for the case of F(3), which has 24 vertices. Note that, from the pair of vertices 1|i and 2|i, the 3-rd and 4-th columns contain all the `consecutive' vertices of F(3) from α|4i-3 to a|4i+8, with α=1,2. More precisely, from vertex 2|i (we can fix α because of the automorphism), we reach all of such vertices with at most 6 steps, except 2|4i+1 (in boldface, on the top of the 4-th column), which would require the 7 adjacencies `-→-→-→-'. But this vertex is reached following the path `→→→-→-' (in boldface, in the 5-th column). In general, using the notation f(α|i)=α|-2i+α and g(α|i)=α|i, we have the following: Let N'=3·2^n-1. Then, * If n is even, then the exception vertex is g(fg)^n(2|i) (mod N) =1|2^n i (2n+1 steps) but (gf)^n-1f^2(2|i) (mod N) =1|2^n i (2n steps). * If n is odd, then the exception vertex is g(fg)^n(2|i)(mod N) =2|2^n-1 i+1 (2n+1 steps) but (gf)^n-1f^2(2|i) (mod N) =2|2^n-1 i+1 (2n steps). §.§ The mixed graphs F^*(n) A variation of the mixed graphs F(n) allows us to obtain (1,1,k)-regular mixed graphs that we denote F^*(n). Given n≥ 2, the (1,1,k)-regular mixed graph F^*(n) has vertices labeled as those of F(n). That is, a|x_1… x_n, where a∈{+1,-1} and x_i∈ℤ_3. Now the adjacencies are as follows: a|x_1x_2… x_n ∼ -a|x_1x_2… x_n a|x_1x_2… x_n → a|x_2x_3… x_n(x_n+a(x_2-x_1)) , where, when computed modulo 3, we take x_2-x_1∈{+1,-1}. Hence, the vertices a|x_1x_2… x_n and a|x'_1x_2… x_n, with x_1'≠ x_1, are adjacent to different vertices of the form a|x_2… (x_n± 1). For example, the mixed graph F^*(3) is shown in Figure <ref>. §.§.§ An alternative presentation To study some properties of F^*(n), it is useful to work with the following equivalent presentation: The vertices are now labeled as a|b:a_1… a_n-1, where a,a_i∈{+1,-1} for i=1,…,n-1, and b∈ℤ_3. Then, the adjacencies (<ref>) and (<ref>) become a|b:a_1a_2… a_n-1 ∼ -a|b:a_1a_2… a_n-1 a|b:a_1a_2… a_n-1 → a|b+a_1:a_2a_3… a_n-1 aa_1 Notice that a vertex a|x_1x_2… x_n with the old presentation is now labeled as a|b:a_1… a_n-1 with b=x_1 and a_i=x_i+1-x_i for i=1,…,n-1. 
From this, it is readily checked that the `new' adjacencies are as mentioned. The group of automorphisms of F^*(n) is isomorphic to the dihedral group D_3. Using the new notation, let us first show that the following mappings, Φ and Ψ, are automorphisms of F^*(n): Φ(a|b:a_1a_2… a_n-1) =a|ϕ(b):a_1 a_2…a_n-1; Ψ(a|b:a_1a_2… a_n-1) =a|b+1:a_1a_2… a_n-1, where ϕ(0)=1, ϕ(1)=0, ϕ(2)=2, and a_i=-a_i for i=1,…,n-1. To prove that Φ is an automorphism of F^*(n), observe that the vertex in (<ref>) is adjacent, through an edge, to a|ϕ(b):a_1 a_2…a_n-1 =Φ(a|b:a_1a_2… a_n-1), and, through an arc, to a|ϕ(b)+a_1:a_2…a_n-1 aa_1 =Φ( a|b+a_1:a_2a_3… a_n-1 aa_1), where the last equality holds since ϕ(b+a_1)=ϕ(b)+a_1, and aa_1=aa_1 for every b∈ℤ_3 and a,a_1∈{+1,-1}. Similarly, we can prove that Ψ is also an automorphism of F^*(n). Clearly, Φ is involutive, and Ψ has order three. Moreover, (ΦΨ)^2= 𝕀 (the identity). Then, the automorphism group (F^*(n)) must contain the subgroup ⟨Φ,Ψ⟩=D_3. It is easy to see that the graph F^*(n) has exactly three digons between pairs of vertices of the form -1:xyxy… xy and -1:yxyx… yx when n is even, or +1:xyxy… x and +1:yxyx… y when n is odd; see again Figure <ref>. Thus, any automorphism of F^*(n) must interchange these digons; hence, the automorphism group has at most 3!=6 elements. Consequently, (F^*(n))≅ D_3≅ S_3, as claimed. Before giving the diameter of F^*(n), we show that, for every vertex , there is only a possible vertex at distance 2n+1 from . Suppose first that n is even (the case of odd n is similar). It is clear that, excepting possibly one case, from vertex =a|b:a_1a_2… a_n-1 to vertex =a'|b':y_1y_2… y_n-1, there is a path with at most 2n steps of the form - → - →⋯ - →, where `-' stands for `∼' (edge) or `∅' (nothing), and `→' represents an arc. The exception occurs when all the edges of the path are necessary. That is: * If b'=b+Σ +a (where Σ=∑_i=1^n-1 a_i, and so that b'≠ b+Σ), then the first two steps are a|b:a_1a_2… a_n-1 ∼ a|b:a_1a_2… a_n-1 → a|b+a_1:a_2a_3… a_n-1 aa_1. * If y_1=aa_2, then the next two steps are a|b+a_1:a_2a_3… a_n-1 aa_1 ∼ a|b+a_1:a_2a_3… a_n-1 aa_1 → a|b+a_1+a_2:a_3… a_n-1 aa_1 aa_2. * If y_1=aa_3, then the next two steps are a|b+a_1+a_2:a_3… a_n-1 aa_1 aa_2 ∼ a|b+a_1+a_2:a_3… a_n-1 aa_1 aa_2 → a|b+a_1+a_2+a_3:a_4… a_n-1 aa_1 aa_2 aa_3. ⋮ * If y_n-1=aaa_1=-a_1, then the last two steps are a|b+Σ:aa_1 aa_2 aa_3…aa_n-1 ∼ a|b+Σ:aa_1 aa_2…aa_n-1 → a|b+Σ+aa_1:aa_2…aa_n-1 a_1 = a|b':y_1y_2… y_n-1. Thus, if a≠ a' (a'=a), the vertex =a|b+Σ+aa_1:aa_2 aa_3…aa_n-1 a_1 is not reached from in this way. Similarly, if n is odd, the exception is the vertex =a|b+Σ+aa_1:aa_2 aa_3… aa_n-1 a_1. §.§ The mixed graphs F'(n) If necessary, the three digons of F^*(n) can be removed and replaced by three new edges of the form +1:xyxy… xy ∼ +1:yxyx… yx , -1:xyxy… yx ∼ -1:yxyx… xy . So, we obtain the new mixed graph F'(n), with N=3· 2^n-6 vertices and diameter k≤ 2n. More precisely, F'(2) is isomorphic to the Kautz digraph K(2,2) with N=6 vertices and diameter k=2; and when n∈{3,4}, the mixed graph F'(n) has diameter k=2n-1. For instance, the mixed graph F'(4), with N=42 vertices and diameter k=7, is shown in Figure <ref>. In all the other cases, when n≥ 5, computational results seem to show that the diameter of F'(n) is always k=2n. §.§ The mixed graphs G(n) We define a (1,1,k)-regular mixed graph G(n), for n≥ 2, as follows: the vertices are of the form x_0|x_1… x_n, where x_i∈ℤ_2 for i=0,1,…,n. 
More precisely, the vertices are: ∘ For any n: 1|00…0 and 1|11…1; ∘ For odd n: 0|0101…0 and 0|1010…1; ∘ For even n: 1|0101…01 and 1|1010…10; ∘ For the other vertices, 0|x_1… x_n and 1|x_1… x_n, with x_i∈ℤ_2. So, the number of vertices of G(n) is 2^n+1-4. The adjacencies (with arithmetic modulo 2) through edges are: (i) For any n: 1|00…0 ∼ 1|11…1; (ii) For odd n: 1|0101…0 ∼ 1|1010…1; (iii) For even n: 0|0101…01 ∼ 0|1010…10; (iv) For the other vertices, x_0|x_1… x_n ∼ (x_0+1)|x_1… x_n. The adjacencies through arcs are: (v) x_0|x_1… x_n → x_0|x_2… x_n (x_1+x_0). The graph G(n) is an in- and out-regular mixed graph with r=z=1. Its only nontrivial automorphism is the one that sends x_0|=x_0|x_1 x_2 x_3 … to x_0|=x_0 |x_1 x_2 x_3…, where x_i=x_i+1 for i=1,2,3,… In Figure <ref>, we show the mixed graph G(3). Looking at the results for n≤ 12 obtained by computer, we are led to conjecture that the diameter of G(n) is k=2n-1. At first sight, the proof of this result seems to be involved, although we managed to prove the following. The diameter of G(n) is at most 2n. Consider the digraph G^+(n) defined by considering all 2^n+1 vertices of the form 0|x_1… x_n and 1|x_1… x_n, with x_i∈ℤ_2, with undirected adjacencies as in (iv), and directed adjacencies as in (v). Then, G^+(n) has the self-loops at vertices 0|00… 0 and 0|11… 1 and one digon (or two opposite arcs) between 0|0101… 01 and 0|1010… 10 for even n, and 0|0101… 0 and 0|1010… 1 for odd n. In fact, if every edge of G^+(n) is `contracted' to a vertex, what remains is the De Bruijn digraph B(2,n), with 2^n vertices and diameter n. Moreover, notice that G(n) is obtained by removing the above four vertices and adding the edges in (i), (ii), and (iii). By way of examples, Figure <ref> shows the graph G^+(2), whereas Figure <ref> shows the mixed graph G^+(3) `hanging' from a vertex with eccentricity 2n=6. Consequently, since the diameter of G(n) is upper bounded by the diameter of G^+(n), we concentrate on proving that the diameter of G^+(n) is 2n for n>1 (G(1) has diameter 3). The proof is constructive because we show a walk of length at most 2n between any pair of vertices. To this end, we take the following steps: * There is a walk of length at most 2n from vertex x_0|=x_0|x_1x_2… x_n to vertex (x_n+y_n)|y_1y_2… y_n. Indeed, as x_i+x_i=0 for any value of x_i, we get x_0|x_1x_2x_3… x_n ∼ (x_1+y_1)|x_1x_2x_3… x_n → (x_1+y_1)|x_2x_3… x_n y_1 ∼ (x_2+y_2)|x_2x_3… x_ny_1 → (x_2+y_2)|x_3… x_n y_1 y_2 ⋮ ∼ (x_n+y_n)|x_ny_1y_2y_3… y_n-1→ (x_n+y_n)|y_1y_2… y_n. Thus, the initial vertex x_0|x and the step pattern `∼→∼→(2n)⋯⋯∼→' uniquely determine the destiny vertex. * Clearly, some of the steps in (<ref>) are not necessary if some of the following situations occur: * The `intersection' of the sequences =x_1x_2… x_n and =y_1y_2… y_n (that is, the maximum length of the last subsequence of that coincides with a first subsequence of ), denoted |∩|, is greater than zero. (For instance, for =0… 010 and =100… 0, we get |∩|=2.) In this case, the first ℓ=|∩| step pairs `∼→' of the walk in (<ref>) are useless and can be avoided. Then, we say that we save 2ℓ steps. * Some of the following equalities hold: x_0=x_1+y_1, or x_i+y_i=x_i+1+y_i+1 for some i=1,…,n-1. In this case, some steps `∼' are absent. More precisely, if either both equalities x_i=y_i and x_i+1=y_i+1 (or both inequalities x_i≠ y_i and x_i+1≠ y_i+1) hold, then the step `∼' through an edge leading to (x_i+1+y_i+1)|x_i+1… x_ny_1… y_i… is absent. So, we save 1 step. 
Thus, if we can save some steps, one last step (x_n+y_n)|∼ (x_n+y_n)| assures a walk of length at most 2n from x_0| to y_0| for any y_0∈{0,1}. * In the `worst case', the walk in (<ref>) consists of exactly 2n steps (vertices at maximum distance) if |∩|=0 and none of the equalities in (b) holds. Assuming first that x_0=0 (the case x_0=1 is similar), the latter occurs when x_1+y_1=1⇒ y_1=x_1, x_2+y_2=0⇒ y_2=x_2, x_3+y_3=1⇒ y_3=x_3, and so on. Consequently, starting from 0|=0|x_1x_2x_3… x_n, we only need to test the destiny vertices of the form 1|=1|x_1 x_2 x_3… x_n (n even), and 0|= 0|x_1 x_2 x_3…x_n (n odd), with the additional constraints |∩|=|∩|=0. * For these cases, the strategy is to put first the last digit of destiny. Namely, if n is even, 0|x_1x_2x_3… x_n → 0|x_2x_3… x_n x_1 ∼ (x_2+x_1)|x_2x_3… x_n x_1 → (x_2+x_1)|x_3x_4… x_n x_1 x_1 ∼ (x_3+x_2)|x_3x_4… x_n x_1 x_1 → (x_3+x_2)|x_4x_5… x_n x_1 x_1 x_2 ∼ (x_4+x_3)|x_4 x_5… x_n x_1 x_1 x_2 → (x_4+x_3)|x_5… x_n x_1 x_1 x_2 x_3 ⋮ ∼ (x_n+x_n-1)|x_n x_1x_1x_2…x_n-1 x_n-2 → (x_n+x_n-1)| x_1x_1x_2… x_n-2x_n-1 ∼ (x_1+x_n)|x_1x_1x_2… x_n-2x_n-1 → (x_1+x_n)|x_1x_2x_3… x_n ∼ 1|(x_1+x_n)|x_1x_2x_3… x_n. This walk can have 2n+2 steps whenever all steps `∼' through edges are present. This is the case when x_2+x_1≠ 0, x_3+x_2≠ x_2+x_1, x_4+x_3≠ x_3+x_2,…, x_1+x_n≠ x_n+x_n-1, and x_1+x_n≠ 1. In turn, this implies the n+1 equalities x_1 =x_3, x_3=x_5, …, x_n-3=x_n-1, x_n-1=x_1, x_1 =x_2, x_2=x_4, x_4=x_6, …, x_n-2=x_n, x_n=x_1. Note that these sequences of equalities form two cycles (with odd and even subscripts) rooted at x_1. Thus, the number of inequalities, if any, must be at least 2. In this case, at least 2 steps `∼' are absent in (<ref>), and we have a walk of length at most 2n between the vertices considered. Otherwise, if all the equalities (<ref>)–(<ref>) hold, the initial vertex must be 0|000(n)… 00 (the first digit x_1 can be fixed to 0 since the mixed graph has an automorphism that sends x_0|x_1x_2… x_n to x_0|x_1 x_2 …x_n), and the destiny vertex is 0|1010(n)… 10. The same reasoning for n odd leads that, in the worst case (walk in (<ref>) of length 2n+2), the initial vertex is 0|000… 0 and the final vertex 0|1010… 1. In such cases, we have a particular walk of the desired length. * There is a walk of length 2n from 0|000… 0 to 1|1010… 10 (n even) or to 0|1010… 01 (n odd) by using the following step pattern ∼ → →∼ → ∼ → (2n)⋯⋯∼ → →. For instance, for n=6, we get 0|000000 ∼ 1|000000→ 1|000001 → 1|000011 ∼ 0|000011 → 0|000110 ∼ 1|000110 → 1|001101 ∼ 0|001101 → 0|011010 ∼ 1|011010 → 1|110101 → 1|101010, and, for n=7, 0|0000000 ∼ 1|0000000 → 1|0000001 → 1|0000011 ∼ 0|0000011 → 0|0000110 ∼ 1|0000110 → 1|0001101 ∼ 0|0001101 → 0|0011010 ∼ 1|0011010 → 1|0110101 ∼ 0|0110101 → 0|1101010 → 0|1010101. * The case x=1 is similar, and we only mention the main facts. Now, the `worst case' (2n steps) in the walk in (<ref>) (2n steps) occurs when, starting from 1|=1|x_1x_2x_3… x_n, we want to reach the destiny vertices of the form 0|=0|x_1 x_2 x_3x_4…x_n (n even), or 1|=1|x_1 x_2 x_3x_4… x_n (n odd), with the additional constraints |∩|=|∩|=0. Now, following the same strategy as in step 4 above, it turns out that for the case of 2n+2 steps, the following conditions must hold (assuming n odd, the even case is similar): x_1 =x_2, x_2=x_3, …, x_n-1=x_n, x_n≠ x_1, which are clearly incompatible, and at least there must be another inequality (the last one in (<ref>) is forced since the final vertex has x_0=1). 
Again, at least 2 steps `∼' are absent in (<ref>), and we have a walk of length at most 2n between the vertices considered. For example, for n=5, and assuming that x_4≠ x_5 and x_1=0, the walk of 10 steps from 1|00001 to 1|01011 is: 1|00001 → 1|00011 ∼ 0|00011 → 0|00110 ∼ 1|00110 → 1|01101 ∼ 0|01101 → 0|11010 → 0|10101 → 0|01011 ∼ 1|01011. This completes the proof. In fact, we implicitly proved the following. For every n>1, the mixed graph G^+(n) satisfies the following. (i) The vertices 0|00… 0 and 0|11… 1 have maximum eccentricity 2n. (ii) The vertices 1|00… 0 and 1|11… 1 have eccentricity 2n-1. (iii) If n≥ 5, the vertices 1|00… 01 and 1|11… 10 have eccentricity 2n-2. (i) and (ii) follow from the previous reasoning. To prove (iii), we only need to check the distance from 1|00… 01 to 0|00… 0. A shortest path between these two vertices is 1|00… 01∼ 0|00… 01→ 0|0… 010→⋯→ 0|10… 00 ∼ 1|10… 00→ 1|00… 00∼ 0|00… 0 of length n+3≤ 2n-2 if n≥ 5. Let Ψ_0 and Ψ_1 be the functions that map a vertex x| to its adjacent vertex from an edge or an arc, respectively. That is, Ψ_0(x_0|x_1x_2… x_n) =x_0|x_1x_2… x_n, Ψ_1(x_0|x_1x_2… x_n) =x_0|x_1x_2… (x_0+x_1). Let Φ=(ϕ_1,ϕ_2,…,ϕ_n) be the function that maps every x_i to either x_i or x_i, for i=1,2,…,n. For any fixed functions Ψ_j and Φ, and first digit x_0=0,1, we have Ψ_j(x_0|Φ())=Φ(Ψ_j(x_0|)), where Φ only acts on the digits x_1,x_2,…,x_n. Ψ_0(x_0|Φ()) =Ψ_0(x_0|ϕ_1(x_1)ϕ_2(x_2)…ϕ_n(x_n))=x_0|ϕ_1(x_1)ϕ_2(x_2)…ϕ_n(x_n) =Φ(Ψ_0(x_0|)). Ψ_1(x_0|Φ()) =Ψ_1(x_0|ϕ_1(x_1)ϕ_2(x_2)…ϕ_n(x_n))=x_0|ϕ_2(x_2)…ϕ_n(x_n)(x_0ϕ_1(x_1)) =Φ(Ψ_1(x_0|)). Another property of the mixed graph G^+(n) for n>1 is that from every pair of (not necessarily distinct) vertices u and v, there is at least a walk of length 2n from u to v. For instance, for n=2, fixing as before x_1=0 and setting y=x_0+x_2, we have the following walks of length 4 from x_0|0 x_2 to every vertex of G^+(2). x_0|0 x_2 ∼ x_0|0x_2 ∼ x_0|0x_2 → x_0|x_2x_0 → x_0|x_0 (x_0+x_2) = x_0|x_0 y → x_0|x_2 x_0 ∼ x_0|x_2 x_0 → x_0|x_0(x_0+x_2)∼ x_0|x_0(x_0+x_2) = x_0|x_0y ∼ x_0|0 x_2 → x_0|x_2 x_0 ∼ x_0|x_2 x_0 → x_0|x_0 (x_0+x_2) = x_0|x_0y ∼ x_0|0x_2 → x_0|x_2x_0 → x_0|x_0(x_0+x_2) ∼ x_0|x_0(x_0+x_2) = x_0|x_0y → x_0|x_2 x_0 → x_0|x_0 (x_0+x_2) → x_0|(x_0+x_2) 0 ∼ x_0|(x_0+x_2) 0 = x_0|y0 → x_0|x_2 x_0 → x_0|x_0 (x_0+x_2) ∼ x_0|(x_0+x_2) → x_0|(x_0+x_2) 1 = x_0|y1 ∼ x_0|0 x_2 → x_0|x_2 x_0 → x_0|x_0(x_0+x_2) → x_0|(x_0+x_2) 0 = x_0|y0 → x_0|x_2 x_0 ∼ x_0|x_2x_0 → x_0|x_0(x_0+x_2) 0 → x_0|(x_0+x_2) 1 = x_0|y1. Working with the adjacency matrix of G^+(2) (indexed according to Figure <ref>), the above property is apparent when we look at the power ^4. = ( [ 1 1 0 0 0 0 0 0; 1 0 1 0 0 0 0 0; 0 0 0 1 0 0 1 0; 0 0 1 0 1 0 0 0; 0 0 0 1 0 1 0 0; 0 1 0 0 1 0 0 0; 0 0 0 0 0 1 0 1; 0 0 0 0 0 0 1 1 ]), ^4 = ( [ 5 3 3 1 1 1 1 1; 3 3 1 3 1 1 3 1; 1 1 3 1 3 3 1 3; 1 1 1 5 1 3 3 1; 1 3 3 1 5 1 1 1; 3 1 3 3 1 3 1 1; 1 3 1 1 3 1 3 3; 1 1 1 1 1 3 3 5; ]). §.§ The n-line mixed graphs Let G=(V,A) be a 2-regular digraph with a given 1-factorization, that is, containing two arc-disjoint spanning 1-regular digraphs H_1 and H_2. Assuming that the arcs of H_1 have color blue and the arcs of H_2 have color red, we can also think about a (proper) arc-coloring γ of G. Then, if xy represents an arc of G, we denote its color as γ(xy). Given an integer n≥ 3, the vertices of the n-line mixed graph H(n)=H_n(G) are the set of n-walks in G, x_1x_2… x_n-1x_n, with x_i∈ V and x_ix_i+1∈ A, for i=1,…,n-1. 
The adjacencies of H(n) are as follows: x_1x_2… x_n-1x_n ∼ y_1x_2… x_n-1x_n where γ(y_1x_2)≠γ(x_1x_2); and x_1x_2… x_n-1x_n → x_2… x_n-1x_ny_n+1 where γ(x_ny_n+1)=red if γ(x_1x_2)=γ(x_n-1x_n), and γ(x_ny_n+1)=blue if γ(x_1x_2)≠γ(x_n-1x_n). The reason for the name of H_n(G) is because when we contract all its edges, so identifying the vertices in (<ref>), the resulting digraph is the (n-1)-iterated line digraph L^n-1(G) of G, see Fiol, Yebra, and Alegre <cit.>. Indeed, under such an operation, each pair of vertices in (<ref>) becomes a vertex that can be represented by the sequence x_2x_3… x_n, which, according to (<ref>), is adjacent to the two vertices x_3… x_ny_n+1 with y_n+1∈Γ^+ (x_n) in G. In the following result, we describe other basic properties of H_n(G). Let G=(V,A) be a digraph with r vertices and diameter s, having a 1-factorization. For a given n≥ 3, the following holds. (i) The mixed graph H_n=H_n(G) has N=r· 2^n-1 vertices, and it is totally (1,1)-regular with no digons. (ii) The diameter of H_n satisfies k≤ 2(s+n)-3. (i) Every vertex x_1… x_n of H_n corresponds to a walk of G with first vertex x_1, which gives r possibilities and, since G is 2-regular, for every other x_i, i=2,…,n, we have 2 possible options. This provides the value of N. To show total (1,1) regularity, it is enough to prove that H_n is 1-in-regular. Indeed, any vertex adjacent to x_1x_2… x_n, with γ(x_n-1x_n)=blue (respectively, γ(x_n-1x_n)= red) must be of the form yx_1… x_n-2x_n-1 with γ(yx_1)≠γ(x_n-2x_n-1) (respectively, with γ(yx_1)= γ(x_n-2x_n-1)). But, in both cases, there is only one possible choice for vertex y. With respect to the absence of digons, notice that a vertex =x_1x_2… x_n-1x_n belongs to a digon if, after two steps, we come back to , which means that x_1x_2… x_n-1x_n=x_3x_4… x_ny_n+1y_n+2 and, hence, x_i=x_3=⋯ and x_2=x_4=⋯. In other words, vertex $̆ must be of the formxyxy⋯xy(neven) orxyxy⋯x(nodd), andGitself must have a digon between verticesxandy. Assuming thatnis even andγ(xy)=blue (the other cases are similar), the digon should be=xyxy… xy → =yxyx… yx → .But the last adjacency is not possible since both the first and last arcs ofwould have colorγ(yx)=red and, hence, so should be the color ofxy, a contradiction.(ii)Given both verticesx_1x_2…x_n-1x_nandy_1y_2…y_n-1y_n, let us consider a shortest path inGof length at mostsfromx_ntoy_2. Then, using both types of adjacencies, we can go fromx_1x_2…x_n-1x_nto a vertex of the formz_1…y_2. From this vertex, we now reach the vertexyy_2…y_nin at most2(n-2)steps. Finally, if necessary, we can changeybyy_1. In total, we usek≤2s+2(n-2)+1=2(s+n)-3steps, as claimed. For example, ifGis the complete symmetric digraphK_3(edges seen as digons) with vertices inℤ_3, blue arcsi→i+1and red arcsi→i-1fori=0,1,2, the adjacencies ofH_n(K_3), with3·2^n-1vertices, are x_1x_2… x_n-1x_n ∼ y_1x_2… x_n-1x_n, y_1≠ x_1,x_2, x_1x_2… x_n-1x_n → x_2x_3… x_n y_n+1, y_n+1=x_n-(x_2-x_1)(x_n-x_n-1). Thus, the(1,1)-regular mixed graphsH_3(K_3)andH_4(K_3), with diameterk=5andk=6, respectively, are shown in Figure <ref>. In this case, when we contract all the edges ofH_n(K_3), we obtain the(n-1)-iterated line digraph ofK_3, which, as commented in Introduction, is isomorphic to the Kautz digraphK(2,n-1). § A FIRST COMPUTATIONAL APPROACH: THE (1,1,K)-MIXED GRAPHS WITH DIAMETER AT MOST 6 The Moore boundM(1,1,k)coincides with the number of binary words of lengthℓ≤kwithout consecutive zeroes. 
In this sense, the corresponding Moore tree can be rooted to a vertex labeled with the empty word. Every vertex labeled with a wordω(of lengthℓ) with the last symbol different from 0 is joined by an edge to a vertex labeledω0(of lengthℓ+1), for all0 ≤ℓ≤k-1. Moreover, the arcs are defined byω→ω1(see an example in Figure <ref>). This new description of the Moore tree is very useful for performing an exhaustive computational search of the largest mixed graphs for some small values of the diameterk. Leta(ℓ)be the number of vertices at distanceℓfrom the root in the Moore tree. Using the above-mentioned labeling, it is easy to see thata(ℓ)satisfies the recurrence equation a(ℓ)=a(ℓ-1)+a(ℓ-2), with initial conditionsa(0)=1anda(1)=2. Indeed,a(ℓ)is the number of words of lengthℓ(whose symbols are in the alphabetΣ={0,1}) without consecutive zeroes. The words of lengthℓnon-ending with 0 are constructed by a word of lengthℓ-1by adding0. This givesa(ℓ-1). Moreover, the words of lengthℓending with0are constructed by adding1. This givesa(ℓ-2)=b(ℓ), whereb(ℓ)is the number of vertices at distanceℓfrom the root joined by an edge to a vertex at distanceℓ-1. Sob(ℓ)satisfies the same recurrence relation asa(ℓ)but with initial conditionsb(0)=0andb(1)=1. Finally, letc(ℓ)=a(ℓ)-b(ℓ)=a(ℓ-1), that is, the number of vertices at distanceℓfrom the root pointed by an arc from a vertex at distanceℓ-1. Again,c(ℓ)satisfies the same type of recurrence relation but with initial conditionsc(0)=1andc(1)=1. Thus,a(ℓ),b(ℓ), andc(ℓ)are all Fibonnaci-like numbers. For instance,a(ℓ)equals the following closed formulaa(ℓ)=5 + 3√(5)/10(1+√(5)/2)^ℓ + 5 - 3√(5)/10(1-√(5)/2)^ℓ.Note that the sequence obtained froma(ℓ)corresponds to the Fibonacci numbers starting witha(0)=1anda(1)=2(see the sequence A000045 in <cit.>). Similar formulas can be obtained forb(ℓ)andc(ℓ). Now, we can perform an algorithmic exhaustive search to find all the largest(1,1,k)-mixed graphs with order close to the Moore bound. For instance, in the case of almost mixed Moore graphs (with diameterkand orderM(1,1,k)-1), the number of different cases of mixed graphs to analyze is bounded byN(k), whereN(k)is computed next. * We remove a vertex in the Moore tree at distance k from the empty word. Notice there are a(k) different choices for this vertex. * Now, we count the number N_1 of possibilities to complete the undirected part of the mixed graph. We recall that the number of perfect matchings in a complete graph of even order n is (n-1)!! This number N_1 depends on what vertex has been removed in the previous step. If the removed vertex has a label ending with 0, that is, it is a vertex hanging from an edge, then there are c(k)+1 vertices in the graph without an incident edge. So, N_1=c(k)!! Otherwise, there are c(k)-1 vertices in the graph without an incident edge, so N_1=(c(k)-2)!! * The number of possibilities to complete the directed part of the graph is upper bounded by the number of mappings from the set of words of length k without fixing points. This is precisely the number of derangements D_a(k). Notice that mappings, including assignations from a word of length k to its predecessor, are not valid. Putting all together,N(k)<D_a(k)(b(k)c(k)!!+c(k)(c(k)-2)!!). Of course,N(k)grows very fast withkbut the number of cases to analyze fork≤4is reasonable (see Table <ref>). As a consequence, computing the diameter of the889980putative almost Moore(1,1)-mixed graphs with diameterk=4, we have the following result. 
(In fact, this calculation is easily done by using the result by Tuite and Erskhine <cit.> that such graphs are not totally regular.) There is no almost (1,1,4)-mixed Moore graph. A similar method can be implemented to perform an exhaustive search for ordersM(1,1,k)-δfor smallδ. In these cases, the removal ofδdifferent vertices of the Moore tree (step 1) has many more choices but the number of operations in steps 2 and 3 sometimes is reduced. This is precisely what we do forn=M(1,1,4)-3=16, where there are two cases to take into account: * The removal of three distinct words ω_1,ω_2,ω_3 of length 4 (corresponding to three distinct vertices at distance 4 from the root of the Moore tree). * Given any word ω of length 3, the deletion of either the set of words {ω,ω1,ω'} (when ω ends in 0) or {ω,ω0,ω1} (when ω ends in 1), where ω'≠ω1 is any word of length 4. It remains to add the corresponding edges and arcs in the pruned Moore tree. The computational exhaustive search shows there is no(1,1)-mixed Moore graph of diameter4and order16. Now the maximum order becomesn=14for a mixed graph with parametersr=z=1andk=4. There are many more possibilities to prune the Moore tree, so we decide to implement a direct method to perform an exhaustive search in this case: taking the perfect matching with a set of verticesV={0,1,…,13}and wherei ∼i+1for all eveni, we add the three arcs(0,2),(1,5)and(5,7). Looking at vertex0as the root of the Moore tree, the existence of these three arcs in the mixed graph is given becauseδ=5in this case. Now we proceed with the exhaustive search by adding the remaining arcs in the graph. There are11!possibilities but excluding avoided permutations (those permutations with elements of order at most2or including edges of the perfect matching) significantly reduces the number of cases to analyze. After computing the diameter of all these mixed graphs and keeping those non-isomorphic mixed graphs with diameterk=4, we have the following result. The maximum order for a (1,1)-mixed regular graph of diameter k=4 is 14. There are 27 of such mixed graphs (see Table <ref>), and only one of them is a Cayley graph. Namely, that of the dihedral group D_7 with generators r and s, and presentation ⟨ r,s | r^7=s^2=(rs)^2=1 ⟩, also obtained as the line digraph of C_7, see the mixed graph at the top left in Figure <ref>. The spectra of all27mixed graphs with the largest order can be described with the help of the (complex) rootsα_ijof the irreducible polynomialsp_i(x) ∈ℚ(x)given below: [ p_1(x)=x^4 + x^3 - 2x^2 - x + 2,; p_2(x)=x^3 + x^2 - 2x - 1,; p_3(x)=x^3-x+1,; p_4(x)=x^3 + 2x^2 - x - 3,; p_5(x)= x^4 + x^3 - x^2 - x + 1,; p_6(x)=x^6 + x^5 - 3x^4 - x^3 + 5x^2 - 4,; p_7(x)=x^9 + 3x^8 - 6x^6 + 2x^5 + 11x^4 - 3x^3 - 9x^2 + 3x + 3,; p_8(x)=x^6 + x^5 - 3x^4 - 2x^3 + 5x^2 + 2x - 3,; p_9(x)=x^6 + x^5 - x^4 + 3x^2 - 1. ] § A SECOND COMPUTATIONAL APPROACH: CAYLEY OR LIFT (1,1,K)-MIXED GRAPHS WITH SMALL DIAMETER To obtain the results of this section, we followed a different strategy. We mainly concentrate our search on looking at large(1,1,k)-mixed graphs that are either Cayley or lift graphs. Let us first recall these two classes of graphs. Given a finite groupΩwith generating setS⊆Ω, the Cayley graph(Ω,S)has vertices representing the elements ofΩ, and arcs fromωtoωsfor everyω∈Ωands∈S. Notice that ifs,s^-1∈S, then we have an edge (as two opposite arcs) betweenωandωs. 
Thus, ifS=S_1∪S_2whereS_1=S_1^-1andS_2∩S_2^-1=∅, the Cayley graph(Ω,S)is an(r,z)-mixed graph with undirected degreer=|S_1|and directed degreez=|S_2|. Given a digraphG, or base graph, and a finite groupΩwith generating setS, a voltage assignment α is a mappingα:E→S, that is, a labeling of the arcs with the elements ofS. Then, the lift digraphG^αhas vertex setV(G^α)=V×Ωand arc setE(G^α)=E×S, where there is an arc from vertex(u,g)to vertex(v,gα(uv))if and only ifuv∈E. In particular, the Cayley digraphCay(Ω,S)withS={g_1,…,g_r}can be seen as the lifted digraphG^α, whereG=K_1^r(a singleton withV={u}andE={e_1,…,e_r}arerloops) and voltage assignment α(e_i)=g_ifori=1,…,r. An example of a lift digraph is shown in Figure <ref>. The results obtained by computer search are shown in Table <ref>, see next section. In what follows, we comment upon some of the cases. Notice that for diameterk=2,3,4, the known(1,1,k)-mixed graphs have the maximum possible order. The mixed graph of diameterk=2is the Kautz digraphK(2,2). The graph withk=3is isomorphic to the line digraph of the cycleC_5. Some of the maximal graphs with diameterk=4were already shown in Figure <ref>. Two maximal graphs of diameterk=5are shown in Figure <ref>. The graph of order72listed in the table fork=8is a lift graph using the dihedral groupD_18of order18. This group consists of the18symmetries of the nonagon. To describe our graph, we consider a regular nonagon whose vertices are labeled0to8in clockwise order. Label the elements ofD_18as follows. There are nine counter-clockwise rotations, each through an angle2 πk/9and denoted Rot(k), for0 ≤k < 9. Finally, there are the nine reflections Ref(k) about the line through vertexkand the midpoint of the opposite side. This notation is used to specify the voltages on the edges and arcs of the base graph shown in Figure <ref>. The graph of order544fork=13is a lift of the base graph shown in Figure <ref> with voltages in the groupℤ_17:ℤ_8. The remaining graphs are partially identified as notes following Table <ref>. Where a graph is identified as a lift using a voltage group of order half the order of the graph, the base graph is an undirected edge together with a directed loop at each vertex. A complete description of such larger graphs, especially those that use unfamiliar groups, would take a lot of pages. The interested reader can address the third author to request more information. § TABLE OF LARGE (1,1,K)-MIXED GRAPHS A summary of the results for a(1,1)-regular mixed graphs with diameterkat most16is shown in Table <ref>, where the lower bounds come from the mentioned constructions. Moreover, the upper bounds follow by Proposition <ref>(k=4), a computer exploration(k=5), and the numbersM(1,1,k)-δ(k)withδ(k)given in (<ref>) and adjusted even parity (sincer=1, the graph contains a perfect matching and, so, it must have even order), see Tuite and Erskine <cit.>. * Cayley graph on SmallGroup(54,6): ℤ_9 : ℤ_6. * Lift group is the dihedral group of order 18. * Lift group is AGL(1,8)=(_2^3):_7. * Cayley graph on SmallGroup(144,182). * Lift group is A_5×ℤ_2. * Cayley graph on PSL(2,7):ℤ_2. * Lift group is ℤ_17 : ℤ_8. * Cayley graph on SmallGroup(800,1191). * Lift group is SmallGroup(512,1727). * Lift group is SmallGroup(800,1191). § STATEMENTS & DECLARATIONS §.§ Funding The research of C. Dalfó, M. A. Fiol, N. López, and A. Messegué has been supported by AGAUR from the Catalan Government under project 2021SGR00434 and MICINN from the Spanish Government under project PID2020-115442RB-I00. 
The research of M. A. Fiol was also supported by a grant from the Universitat Politècnica de Catalunya with references AGRUPS-2022 and AGRUPS-2023. J. Tuite was supported by EPSRC grant EP/W522338/1. §.§ Competing Interests The authors have no relevant financial or non-financial interests to disclose. §.§ Author Contributions All authors contributed to the study's conception and design. Material preparation, data collection, and analysis were performed by all the authors, after much work was done. All authors contributed to the first draft of the manuscript, which was improved by all of them. All authors read and approved the final manuscript. §.§ Data availability The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request. 99bmp98 E. Baskoro, M. Miller, and J. Plesník, On the structure of digraphs with order close to the Moore bound, Graphs Combin. 14 (1998), no. 2, 109–119. b79 J. Bosák, Partially directed Moore graphs, Math. Slovaca 29 (1979) 181–196. baemp15 D. Buset, M. El Amiri, G. Erskine, M. Miller, and H. Pérez-Rosés, A revised Moore bound for mixed graphs, Discrete Math. 339 (2016), no. 8, 2066–2069. blm17 D. Buset, N. López, and J. M. Miret, The unique mixed almost Moore graph with parameters k = 2, r = 2 and z = 1, J. Intercon. Networks 17 (2017) 1741005. Da19 C. Dalfó, A new general family of mixed graphs, Discrete Appl. Math269 (2019) 99–106. DaFi16 C. Dalfó and M. A. Fiol, Cospectral digraphs from locally line digraphs, Linear Algebra Appl.500 (2016) 52–62. dfl17 C. Dalfó, M. A. Fiol, and N. López, Sequence mixed graphs, Discrete Applied Math. 219 (2017) 110–116. dfl18 C. Dalfó, M. A. Fiol, and N. López, An improved upper bound for the order of mixed graphs, Discrete Math. 341 (2018), no. 10, 2872–2877. dfl18bis C. Dalfó, M. A. Fiol, and N. López, On bipartite-mixed graphs, J. Graph Theory 89 (2018) 386–394. dr09 T. Dobravec and B. Robič, Restricted shortest paths in 2-circulant graphs, Comput. Commun. 32 (2009), no. 4, 685–690. Dw H. Dweighter, Elementary problems and solutions, problem E2569, Amer. Math. Monthly82 (1975), no. 10, 1010. efh80 P. Erdős, S. Fajtlowicz, and A. J. Hoffman, Maximum degree in graphs of diameter 2, Networks10 (1980) 87–90. e17 G. Erskine, Mixed Moore Cayley graphs, J. Intercon. Networks 17, no. 03n04, (2017) 1741010. FaMoCh V. Faber, J. W. Moore, and W. Y. C. Chen, Cycle prefix digraphs for symmetric interconnection networks, Networks23 (1993) 641–649. FiYeAl84 M. A. Fiol, J. L. A. Yebra, and I. Alegre, Line digraph iterations and the (d,k) digraph problem, IEEE Trans. Comput. C-33 (1984) 400–403. g01 J. Gimbert, Enumeration of almost Moore digraphs of diameter two, Discrete Math. 231 (2001) 177–190. HoMc65 A. J. Hoffman and M. H. McAndrew, The polynomial of a directed graph, Proc. Amer. Math. Soc.16 (1965) 303–309. J15 L. K. Jørgensen, New mixed Moore graphs and directed strongly regular graphs, Discrete Math. 338 (6) (2015) 1011–1016. lm16 N. López and J. M. Miret, On mixed almost Moore graphs of diameter two, Electron. J. Combin. 23(2) (2016) 1–14. lmf15 N. López, J. M. Miret, and C. Fernández, Non existence of some mixed Moore graphs of diameter 2 using SAT, Discrete Math. 339(2) (2016) 589–596. lpp14 N. López, H. Pérez-Rosés, and J. Pujolàs, Mixed Moore Cayley graphs, Electron. Notes Discrete Math. 46 (2014) 193–200. nauty B. D. McKay and A. Piperno, Nauty and Traces User's Guide (Version 2.27). 
Technical Report, Computer Science Department, Australian National University (2021). ms13 M. Miller and J. Širáň, Moore graphs and beyond: A survey of the degree/diameter problem, Electron. J. Combin. 20(2) (2013) #DS14v2. nmg07 M. H. Nguyen, M. Miller, and J. Gimbert, On mixed Moore graphs, Discrete Math. 307 (2007) 964–970. OIES OEIS Foundation Inc. (2022), The On-Line Encyclopedia of Integer Sequences, Published electronically at . tg19 J. Tuite and G. Erskine, On total regularity of mixed graphs with order close to the Moore bound, Graphs Combin. 35 (2019), no. 6, 1253–1272. te22 J. Tuite, and G. Erskine, On networks with order close to the Moore bound, Graphs Combin. 38 (2022), no. 5, 143.
http://arxiv.org/abs/2306.01416v1
20230602100520
Algorithmic realization of the solution to the sign conflict problem for hanging nodes on hp-hexahedral Nédélec elements
[ "Sebastian Kinnewig", "Thomas Wick", "Sven Beuchler" ]
math.NA
[ "math.NA", "cs.NA" ]
1,2]S. Kinnewig 1,2]T. Wick 1,2]S. Beuchler [1] Leibniz University Hannover, Institute of Applied Mathematics, Welfengarten 1, 30167 Hannover, Germany [2] Cluster of Excellence PhoenixD (Photonics, Optics, and Engineering - Innovation Across Disciplines), Leibniz University Hannover, Germany Algorithmic realization of the solution to the sign conflict problem for hanging nodes on hp-hexahedral Nédélec elements [ ============================================================================================================================== While working with Nédélec elements on adaptively refined meshes with hanging nodes, the orientation of the hanging edges and faces must be taken into account. Indeed, for non-orientable meshes, there was no solution and implementation available to date. The problem statement and corresponding algorithms are described in great detail. As a model problem, the time-harmonic Maxwell's equations are adopted because Nédélec elements constitute their natural discretization. The implementation is performed within the finite element library deal.II. The algorithms and implementation are demonstrated through four numerical examples on different uniformly and adaptively refined meshes. § INTRODUCTION The system of Maxwell's equations <cit.> are fundamental to many fields of research and have numerous practical applications, from Magnetic Induction Tomography (MIT) in medicine <cit.>, geoelectromagnetic modeling in geophysics <cit.> to quantum computing <cit.>, and quantum communication <cit.> in optics. As this work is part of the cluster of excellence PhoenixD[<https://www.phoenixd.uni-hannover.de/en/>], we consider applications from the area of photonics and optics. As the designing process of optical components can be challenging, simulations are necessary for support. This involves the simulation of electromagnetic waves within the components, which is done by solving Maxwell's problem for which Nédélec elements form the natural basis. As we consider the time harmonic indefinite Maxwell's problem in this work, specialized techniques are required to solve these kinds of problems. In the literature, several solution techniques are proposed. There are overlapping domain decomposition, see, e.g. the recent publication <cit.> and the references therein, and nonoverlapping domain decomposition methods, <cit.>, or ℋ-matrices <cit.> which are designed for the time-harmonic case. Note that the system is highly indefinite. Therefore, it becomes very challenging to develop an efficient solver, <cit.>. Alternatives in the positive definite case are multigrid techniques, <cit.>, <cit.>, or FETI-DP-like algorithms, <cit.>, <cit.>. Even with these methods, it remains computationally expensive to solve Maxwell's problems. Therefore adaptive strategies, such as local grid refinement, that can keep computational costs reasonable, while increasing the accuracy are highly desirable. This can be achieved with heuristic error indicators, geometry-oriented refinement, residual-based error control, or goal-oriented error control. With this, adaptive grid refinement is one key component in numerical simulations that enables us to handle more complex problems, for example, multi-scale problems for the simulations of integrated optical components. Our choice for a suitable programming platform is motivated by modern available FEM libraries that include support for high-order Nédélec elements. 
Various open-source finite element libraries allow the use of Nédélec elements of polynomial degree p≥2. The library <cit.> can handle unstructured grids with a maximum of p=2, while <cit.> can support a maximum of p=3. <cit.> utilized the basis functions introduced by Schöberl and Zaglmayr <cit.> to implement high polynomial functions on unstructured grids. <cit.> implements the Nédélec functions based on the hierarchical polynomial basis from Demkowicz <cit.>. Also, the following libraries implement high polynomial Nédélec elements, <cit.> (unstructured), <cit.>, and <cit.> (unstructured). And <cit.>, an extension of that implements optimized Schwarz domain decomposition methods, which is a well-established method for solving ill-posed Maxwell's problems. We select <cit.> as it offers high-polynomial Nédélec basis functions based on Schöberl and Zaglmayr's basis function sets for the complete De-Rham sequence <cit.>. Also, is well established with a large user basis and good accessibility thanks to its comprehensive documentation, which are essential for sustainable software development and uses tensor product elements. Additionally, it is designed with adaptive mesh refinement in mind, providing a range of functionalities for the computation of error estimators. Due to the use of quadrilateral and hexahedral elements, the local mesh refinement requires the usage of hanging nodes. As a starting point for our implementation of hanging nodes, we use the work of Ledger and Kynch <cit.> in two dimensions. The key objective of this work is to address a long-standing open problem that concerns the design of algorithms and corresponding implementation in three dimensions on non-orientable meshes of the Nédélec basis functions on locally refined grids. As previously mentioned, the authors <cit.> considered high-polynomial Nédélec basis functions to capture skin effects that appear in the MIT problem. Therefore they described a procedure to overcome the sign conflict on hp-Nédélec elements. In , prior work already utilized hanging nodes for Nédélec elements, for example, the work of Bürg <cit.>. But there, the old implementation was used, which can only be applied to oriented grids. In this work, we extend the class , which can also be applied to non-orientable grids. The extension to three dimensions is non-trivial, as we shall see, and particularly an open problem in . The main work here relies upon the high number of possible configurations we have to cope with. To overcome the sign conflict in the case of hanging edges and faces, we need to adapt the associated constraint matrix that restricts the additional Degrees of Freedom (DoFs) introduced by the hanging edges and faces accordingly. In the three-dimensional case, we have to consider hanging faces. One face has 2^3 possible orientations and is refined into four child faces. Consequently, we have to deal with 2^15 possible configurations. As dealing with every case individually would be even more cumbersome, we perform intelligent grid modifications to reduce the number of cases beforehand significantly. Our goal is to resolve sign conflicts regardless of the polynomial degree involved. To achieve this, we need to comprehend the structure of the constraint matrix so we can develop algorithms that can deal with any given polynomial degree. As one of our aims is to make these results accessible, we provide the most crucial steps as pseudo-code. 
These accomplishments are exemplarily applied to the time-harmonic Maxwell's equations, which are solved for four different configurations. Therein, our primary purpose is to show that our algorithms work and our implementation is correct. This is demonstrated through qualitative comparisons and some quantitative results in terms of a computational error analysis. The outline of this work is as follows. To start our discussion, we will describe the basic operators and polynomials required for the Nédélec basis. Then, we will introduce the 𝐇_curl conforming basis functions for both two-dimensional and three-dimensional cases, i.e., the Nédélec elements. In section <ref>, we explain the sign conflict that arises for the Nédélec elements in detail and explain how to overcome the sign conflict, along with some pseudo-code examples. In section <ref>, we start with the motivation for using non-uniform grids and present the sign conflict that arises in the context of non-uniform grids for Nédélec elements. We also provide a detailed explanation of how to overcome this sign conflict, with some examples of pseudo-code. Section <ref> discusses the time-harmonic Maxwell's equations. Section <ref> showcases our implementation by presenting the numerical results of some benchmark problems. § 𝐇_CURL-CONFORMING ELEMENT SPACE We start our problem discussion by comprehensively describing the underlying mathematical spaces to describe the sign conflict. For the discretization of 𝐇_curl, one must ensure tangential continuity. The De-Rham cohomology (see figure <ref>) <cit.> tells us that the space with the corresponding properties is the Nédélec space, <cit.>, which is introduced in the following. Our discussion begins with the introduction of the fundamental operators. After that, we make a definition by case, one for the two-dimensional Nédélec elements and one for the three-dimensional Nédélec elements. We like to point the reader to <cit.> for more details. §.§ Fundamental operators For a comprehensive description of the mathematical spaces, we start our discussion by introducing the necessary operators to describe 𝐇_curl. Therefore, let us assume a scalar ψ:ℝ→ℝ and a⃗, b⃗, c⃗, v⃗∈ℝ^d, d ∈{2,3} to be d-dimensional vectors. Then the gradient of ψ is given by ∇ψ = ( ∂ψ/∂ x_1, …, ∂ψ/∂ x_d), and the divergence of v⃗ is given by div(v⃗) ∇·v⃗∑_i = 1^d∂ v_i /∂ x_i. Next, a⃗·b⃗∑_i=1^d a_i b_i denotes the scalar product. For the description of the cross-product, we need to perform a case analysis, one for the two-dimensional case and one for the three-dimensional case. [ d = 2: d = 3:; ([ a_1; a_2 ]) ×([ b_1; b_2 ]) = a_1 b_2 - a_2 b_1. ([ a_1; a_2; a_3 ]) ×([ b_1; b_2; b_3 ]) = ( [ a_2 b_3 - a_3 b_1; a_3 b_1 - a_1 b_3; a_1 b_2 - a_2 b_1 ]) ] with this, we can furthermore write down the description of the curl operator [ d = 2: d = 3:; curl(v⃗) = ∇×v⃗ = ∂ v_2/∂ x_1 - ∂ b_1/∂ x_2 curl(v⃗) = ∇×v⃗ = ( [ ∂ v_3/∂ x_2 - ∂ v_2/∂ x_3; ∂ v_1/∂ x_3 - ∂ v_3/∂ x_1; ∂ v_2/∂ x_1 - ∂ v_1/∂ x_2 ]). ] The double cross-product between three vectors is the next operator we need to describe 𝐇_curl later. The Graßmann identity gives the cross-product between these three vectors a⃗× ( b⃗×c⃗ ) = ( a⃗·c⃗ ) b⃗ - (a⃗·b⃗ ) c⃗. The Graßmann identity is essential for defining the cross-product between three vectors in the two-dimensional case. Based on this definition, we extend the definition of the curl operator to apply to scalar functions in the two-dimensional case curl(ψ) = ( [ ∂ψ/∂ x_2; - ∂ψ/∂ x_1 ]). 
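As a small illustration of these operators, the following SymPy/NumPy sketch (only a check of the formulas, not part of any finite element code; the example fields are our own choice) evaluates the three-dimensional curl symbolically and verifies the Graßmann identity numerically. The fact that gradient fields are curl-free is exactly the De-Rham property that the gradient-type shape functions of the next sections rely on:

import numpy as np
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def grad(psi):
    return sp.Matrix([sp.diff(psi, x1), sp.diff(psi, x2), sp.diff(psi, x3)])

def curl3(v):
    # componentwise definition of the three-dimensional curl from above
    return sp.Matrix([sp.diff(v[2], x2) - sp.diff(v[1], x3),
                      sp.diff(v[0], x3) - sp.diff(v[2], x1),
                      sp.diff(v[1], x1) - sp.diff(v[0], x2)])

def curl2_scalar(psi):
    # two-dimensional curl of a scalar function
    return sp.Matrix([sp.diff(psi, x2), -sp.diff(psi, x1)])

psi = x1**2 * x2 + sp.sin(x3)                  # arbitrary smooth example field
print(sp.simplify(curl3(grad(psi))))           # zero vector: gradients are curl-free

# numerical check of the Grassmann identity a x (b x c) = (a.c) b - (a.b) c
a, b, c = np.random.default_rng(1).standard_normal((3, 3))
print(np.allclose(np.cross(a, np.cross(b, c)),
                  np.dot(a, c) * b - np.dot(a, b) * c))   # True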
§.§ Legendre and Integrated Legendre Polynomials We aim to construct linear independent curl-conforming shape functions with tensor products from one-dimensional orthogonal polynomials. For the polynomial basis, we choose Legendre <cit.> and integrated Legendre polynomials <cit.> as they will provide good sparsity in the involved element matrices <cit.>[Chapter 5.2.1]. In the following, we denote the Legendre polynomials by l_i(x)=1/2^i i!d^i/dx^i (x^2-1)^i ∈ℒ_2(-1,1), where i ∈{0,…,p} stands for the polynomial degree. With the following recursive formula, an efficient point evaluation of the Legendre polynomials is possible. For x ∈ [-1,1] let [ l_0(x) = 1; l_1(x) = x; (n + 1) l_n+1(x) = (2n + 1) l_n x - n l_n-1, for n ≥ 1. ] These polynomials span P^p([-1,1]), particularly because they fulfill the orthogonality property ∫_-1^1 l_i(x) l_j(x) x = 2/2 i + 1δ_ij. From the Legendre polynomials, we can define the integrated Legendre polynomials by L_n(x) ∫_-1^x l_n-1(ξ) ξ for x∈[-1,1]. Similar to before, we can define the integrated Legendre polynomials with a recursive formula, which allows for an efficient point evaluation. [ L_1(x) = x; L_2(x) = 1/2( x^2 - 1 ); (n + 1) L_n+1(x) = (2n - 1)x L_n(x) - (n - 2) L_n-1(x), for n ≥ 2 ] For the recursion formula to work, we included L_1 even though L_1(x) ≠∫_-1^x l_0(ξ) ξ. Above, we have gathered all the necessary tools to construct the Nédélec space. As the curl operator behaves quite differently between the two- and the three-dimensional case, we continue with a definition by cases. §.§ Two-dimensional Nédélec elements Based on the De-Rham cohomology, we must choose our basis functions out of the Nédélec space V_h. Therefore, we want to introduce the definition of the space V_h in the following. The concept to employ integrated Legendre polynomials as polynomial basis was introduced in <cit.>, for the notation we follow the work of S. Zaglmayr <cit.>. The enumeration of vertices and edges is based on the implementation in  <cit.>. We define the quadrilateral reference element as 𝒞^2 = [0, 1] × [0, 1] with the following parametrization of Figure <ref>. We continue by defining the set of all edges ℰ = { E_m }_0 ≤ m < 4 with local edge-ordering E_m = { v_i, v_j } where (i,j) ∈{(0,2), (1,3), (0,1), (2,3)}. We denote the cell itself with local vertex-ordering C = {v_0, v_1, v_2, v_3}. The polynomial order is given by p⃗ = ( { p_E }_E ∈ℰ, p_C ). Based on this, we construct the 𝐇_curl conforming basis function, where we choose a definition that will provide a good sparsity pattern for the resulting element matrices. 2|c| 𝐇_curl conforming basis function 2|l|Vertex-based shape functions 2|l|There are no DoFs on the vertices. 2|l|Edge-based shape functions 2|l|for 0 ≤ i < p_E, E ∈ℰ, where λ_α and σ_α, α∈{0,1,2,3} as defined in figure <ref> Lowest order φ_E_m^𝒩_0 = 1/2∇( σ_e_2 - σ_e_1) ( λ_e_1 + λ_e_2) Higher-order φ_i^E_m = ∇( L_i+2( σ_e_2 - σ_e_1) ( λ_e_1 + λ_e_2) ) 2|l|Cell-based functions 2|l|0 ≤ i,j < p_C, where e⃗_x and e⃗_y are the unit vectors in x- and y-direction correspondingly Type 1: φ_(i,j)^C,1 = ∇ ( L_i+2(ξ_F) L_j+2(η_F) ) Type 2: φ_(i,j)^C,2 = ∇( L_i+2(ξ_F) L_j+2(η_F) )   where ∇(a b) (a'   b - a   b') is the anti-gradient Type 3: φ_(0,j)^C,3 = L_j+2(2y-1) e⃗_x φ_(i,0)^C,3 = L_i+2(2x-1) e⃗_y With the help of the constructed 𝐇_curl conforming shape functions, we can define a local basis for the two-dimensional Nédélec space on the reference element. 
V_h(𝒞^2) V^𝒩_0_h(𝒞^2) ⊕_E ∈ℰ V^E_h(𝒞^2) ⊕ V^C_h(𝒞^2), with V^𝒩_0_h(𝒞^2) span{φ^𝒩_0_E : E ∈ℰ} V^E_h(𝒞^2) span{φ^E_i : 1 ≤ i ≤ p_E,  E ∈ℰ} V^C_h(𝒞^2) span{φ^C,t_(i,j) : 0 ≤ i,j < p_C,  1 ≤ t ≤ 2 } ⊕span{φ^C,3_(0,j) : 0 ≤ j < p_C }⊕span{φ^C,3_(i,0) : 0 ≤ i < p_C } where V^𝒩_0_h is the space of the lowest-order Nédélec function, V^E_h is the space of the edge bubbles, and V^C_h is the space of the cell bubbles. Visualizations of some edge-based basis functions are presented in Figure <ref>. For a discussion that focuses more on the two-dimensional case and provides additional visualizations of the two-dimensional base functions, we refer the reader to <cit.>. §.§ Three-dimensional Nédélec elements Similar to the previous case, our goal is to construct a basis for the three-dimensional Nédélec space that will lead to a good sparsity pattern of the resulting element matrices. We begin by defining the hexahedral reference element as 𝒞^3 = [0,1] × [0,1] × [0,1]. The enumeration of vertices, edges and faces is based on the implementation in <cit.>. The parameterization is defined as in Figure  <ref>. We continue by defining the set of all edges ℰ = { E_m }_0 ≤ m < 12 with local edge-ordering E_m = { v_i, v_j } as shown in figure <ref>. The local face order is given by [ ℱ = { F_m }_0 ≤ m < 6 = { {v_0, v_2, v_4, v_6 }, {v_1, v_3, v_5, v_7 }, {v_0, v_1, v_4, v_5 },; {v_2, v_3, v_6, v_7 }, {v_0, v_1, v_2, v_3 }, {v_4, v_5, v_6, v_7 } }. ] A more detailed description of the cell is given in the documentation of [<https://www.dealii.org/current/doxygen/deal.II/structGeometryInfo.html>]. The polynomial order is given by p⃗ = ( { p_E }_E ∈ℰ, { p_F }_F ∈ℱ, p_C ). 2|c| 𝐇_curl conforming basis function 2|l|Vertex-based shape functions 2|l|There are no DoFs on the vertices. 2|l|Edge-based shape functions 2|l| for 0 ≤ i < p_E, E ∈ℰ, where λ_α and σ_α, α∈{0, …, 7} as defined in figure <ref> Lowest order: φ_E^𝒩_0 = 1/2∇ ( σ_e_1 - σ_e_2 ) ( λ_e_1 + λ_e_2 ) Higher order: φ_i^E = ∇ ( L_i+2 (σ_e_1 - σ_e_2) ( λ_e_1 + λ_e_2 ) ) 2|l|Face-based 2|p13cm| For 0 ≤ i,j < p_F, F ∈ℱ as defined in equation (<ref>) 2|p13cm| we define λ_F ∑^7_α=0λ_f_α and (ξ_F, η_F) (σ_f_1 - σ_f_2, σ_f_1 - σ_f_4). Type 1: φ_(i,j)^F_m,1 = ∇ ( L_i+2(ξ_F) L_j+2(η_F) ) Type 2: φ_(i,j)^F_m,2 = ∇( L_i+2(ξ_F) L_j+2(η_F) )   where ∇(a b) (a'   b - a   b') is the anti-gradient Type 3: φ_(0,j)^F_m,3 = L_j+2(η_F) λ_F ∇ξ_F φ_(i,0)^F_m,3 = L_i+2(ξ_F) λ_F ∇η_F 2|l|Cell-based 2|l| 0 ≤ i,j,k < p_C, where e⃗_x, e⃗_y, e⃗_z are the basis vectors Type 1: φ_(i,j,k)^C,1 = ∇( L_i+2(2x-1) L_j+2(2y-1) L_k(2z-1) ) Type 2: φ_(i,j,k)^C,2 = diag(1,-1,1) φ_(i,j,k)^C,1 φ_(i,j,k)^C,2 = diag(1,-1,-1) φ_(i,j,k)^C,1 Type 3: φ_(0,j,k)^C,3 = L_j+2(2y - 1)L_k+2(2z - 1) e⃗_x φ_(i,0,k)^C,3 = L_i+2(2x - 1)L_k+2(2z - 1) e⃗_y φ_(i,j,0)^C,3 = L_i+2(2x - 1)L_j+2(2y - 1) e⃗_z With the help of the constructed 𝐇_curl conforming basis function, we can define a basis for the three-dimensional Nédélec space. The main difference compared to the two-dimensional is that cell bubbles from the two-dimensional case become the face bubbles in the three-dimensional case. Furthermore, we define an additional space for the cell bubbles. 
V_h(𝒞^3) V^𝒩_0_h(𝒞^3) ⊕⊕_E ∈ℰ V^E_h(𝒞^3) ⊕⊕_F ∈ℱ V^F_h(𝒞^3) ⊕ V^C_h(𝒞^3), with V^𝒩_0_h(𝒞^3) span{φ^𝒩_0_E : E ∈ℰ} V^E_h(𝒞^3) span{φ^E_i : 0 ≤ i ≤ p_E,  E ∈ℰ} V^F_h(𝒞^3) span{φ^F,t_(i,j) : 0 ≤ i,j ≤ p_F,  1≤ t ≤ 2,  F ∈ℱ} ⊕span{φ^F,3_(0,j) : 0 ≤ j ≤ p_F  F ∈ℱ}⊕span{φ^F,3_(i,0) : 0 ≤ i ≤ p_F  F ∈ℱ} V^C_h(𝒞^3) span{φ^C,t_(i,j,k) : 0 ≤ i,j,k ≤ p_C, 1≤ t ≤ 2 }⊕span{φ^C,3_(0,j,k) : 0 ≤ j,k ≤ p_C ⊕} ⊕span{φ^C,3_(i,0,k) : 0 ≤ i,k ≤ p_C }⊕span{φ^C,3_(i,j,0) : 0 ≤ i,j ≤ p_C } where V^𝒩_0_h is the space of the lowest-order Nédélec function, V^E_h is the space of the edge bubbles, V^F_h is the space of the face bubbles and V^C_h is the space of the cell bubbles. Visualizations of some edge-based basis functions are presented in Figure <ref>, and visualization of some face-based basis functions is presented in Figure <ref>. §.§ 𝐇_curl-conforming transformation In order to extend our definition from the reference element to the physical element, we introduce a 𝐇_curl-conforming transformation that maps the vectorial shape functions from the reference element 𝒞^d, d∈{2,3} onto the physical element C^d. The transformation has to preserve the degrees of freedom to be 𝐇_curl-conform. The transformation also has to map gradient fields from the reference element onto gradient fields on the physical element. In <cit.>, the Piola transformation is presented that satisfies these properties. Let us summarize this transformation shortly. Let Φ_C: 𝒞^d → C^d be a continuously differentiable, invertible and surjective map, û⃗∈𝐇_curl(𝒞^d). The transformation u⃗ F_C^-Tû⃗∘Φ_C^-1 implies u⃗∈𝐇_curl(C^d) with [ d = 2: d = 3:; curl_x u⃗ = J^-1_Ccurl_x̂û⃗∘Φ^-1_C curl_x u⃗ = J^-1_C F_Ccurl_x̂û⃗∘Φ^-1_C,; ] with J_C=det F_C. § PRINCIPAL PROBLEM OF THE SIGN-CONFLICT This section aims to construct the elements so that tangential continuity is ensured between elements. §.§ On the continuity requirements To ensure the continuity between two neighboring elements, the resulting polynomials on the edges in two dimensions and on the edges and the faces in three dimensions must match. In the previous section, we have defined local edge and face parametrizations. The parameterization we have chosen is either symmetric for even polynomial degrees or anti-symmetric for odd polynomial degrees, as visualized in Figure <ref>. To ensure that the polynomials between neighboring edges match, we need to ensure that the direction also matches. If the directions do not match, this results in the sign conflict as some polynomials are anti-symmetric see Figure <ref>. This problem does not arise for Lagrange-type elements, as, in that case, the degree of freedom belongs to point evaluations. §.§ Solutions and algorithms for treating the sign conflict The apparent solution for the sign conflict is to choose a particular direction for each edge and each face. For example, one could define each edge to point from left to right or correspondingly from bottom to top, but it is easy to find a counter-example where this approach will fail, and one will encounter the sign conflict. Therefore we consider the Algorithm <ref> and <ref>, which were proposed by Zaglmayr and Schöberl <cit.> and implemented into by Kynch and Ledger <cit.>. Algorithm <ref> is applicable for the two-dimensional case and computes a globally consistent orientation for all edges based on the global numeration of dofs. Algorithm <ref> computes a globally consistent orientation for all faces based on the global numeration of dofs. 
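The essence of both algorithms is that every cell derives the orientation of a shared edge or face from the global vertex numbering alone, so that all cells agree on a parametrization without any communication. The following Python sketch illustrates this idea; it is our own simplified rendering, not the deal.II implementation, and the face routine assumes that the four corner indices are given in cyclic order along the face boundary:

def edge_orientation(v0, v1):
    # tangential direction of an edge: from the endpoint with the smaller
    # global vertex index to the one with the larger index
    return (v0, v1) if v0 < v1 else (v1, v0)

def face_orientation(corners):
    # corners: global indices of the four face vertices in cyclic order;
    # the parametrization starts at the corner with the smallest global index,
    # and the first local direction points to its smaller-indexed neighbour
    k = min(range(4), key=lambda i: corners[i])
    prev_c, next_c = corners[(k - 1) % 4], corners[(k + 1) % 4]
    first, second = (next_c, prev_c) if next_c < prev_c else (prev_c, next_c)
    return corners[k], first, second

# two cells that see the same edge/face in different local order still agree
print(edge_orientation(17, 42), edge_orientation(42, 17))
print(face_orientation((42, 17, 8, 23)), face_orientation((23, 8, 17, 42)))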
§ SIGN CONFLICT ON NON-UNIFORM GRIDS §.§ Motivation for the extension to non-uniform grids In the finite element method context, adaptive grid refinement has proven to be a powerful technique, as it allows an adjustment of the resolution of the computational mesh in different regions of the simulation. The goal is to achieve a good balance between accuracy and computational cost by focusing on the more complex parts of the simulation through local grid refinement in these areas. Deciding which parts of the simulation need to be refined can be done either by the user or automatically. For example, automatic, i.e., adaptive, selection can be performed via an error estimator based on the solution's local behavior. The discussion of error estimators is outside the scope of this work, but we refer the reader to <cit.>. When an element is locally refined in an unstructured mesh, the neighboring elements are also refined to eliminate hanging nodes. This approach is unsuitable for structured meshes, since a single local refinement would lead to a uniform global refinement. Therefore, when a structured mesh is locally refined, hanging nodes, edges, and (in the three-dimensional case) faces are introduced. This leads to a mismatch between the number of DoFs of the refined and the coarse elements. §.§ Overview of the implementation of hanging nodes Additional constraints must be implemented to overcome the mismatch between refined and coarse elements. These constraints are necessary to ensure that the resulting linear system can be solved numerically. In the case of Nédélec elements, the constraints for non-conforming meshes require that the tangential components of the basis functions on the hanging edges and faces match those of the corresponding basis functions on the neighboring unrefined element. Constraints containing weights can be developed by considering a reference setting where we match the tangential constraints. These constraints can be applied to more general shapes with the help of an affine coordinate transformation <cit.>. The computation of the weights is not in the scope of this work; we refer the reader to <cit.> for the computation of the weights. The implementation presented here was created using deal.II as a programming platform, which provides the functionality to compute the weights numerically. Therefore, we focus on modifying the given weights to match the grid's orientation, as described in section <ref>. The hanging edge and face constraints depend on the refined element's orientation and its unrefined neighbor's orientation. Therefore, the constraints have to be computed at runtime. In the previous implementation of the Nédélec elements in deal.II [<https://www.dealii.org/current/doxygen/deal.II/classFE__Nedelec.html>], this problem was overcome by assuming pre-assigned edge and face parameterizations, allowing for pre-computed constraints. §.§ Preparation of the mesh To greatly simplify the computation of the constraints at runtime, we extend Algorithm <ref> and Algorithm <ref> so that the exterior edges and faces match those of the parent's neighbors. §.§.§ Preparation of the mesh for the two-dimensional case To gain more insight into Algorithm <ref>, we consider Figure <ref>, which compares the direction of the edges of the unrefined parent element with the direction of the edges of the refined child elements. To visually differentiate between those two, the parent element is depicted in black, while the hanging vertices and edges of the child elements are highlighted in blue. 
The parent element consists of the two vertices v_0 and v_1 and the edge E^P_0 that points from v_0 to v_1. The left child element consists of the vertices v_0 and v_1 and the edge E^C_0 between them, and the right child element consists of the vertices v_2 and v_1 and edge E^C_1. Suppose we apply Algorithm <ref> in to a refined element, the edges will always point to the hanging vertex, the hanging vertex as a higher global dof index as the outer vertices. As long as the neighbor element has the same refinement level, the global orientation stays consistent. However, when the neighbor is coarser, one has to apply the Algorithm <ref>; see Figure <ref>. §.§.§ Preparation of the mesh for the three-dimensional case In Algorithm <ref>, we have focused on the orientation of hanging edges, which applies to two-dimensional cases. However, we also have to deal with hanging faces in the three-dimensional case. Therefore we introduce the Algorithm <ref>. To gain more insight into the Algorithm <ref>, we visualize the orientation of the refined face in Figure <ref>. As before, the parent element is depicted in black, and the direction of its children is in blue. In addition, the face direction is indicated here. §.§.§ Challenging refinement cases In the three-dimensional case, there are specific configurations where the element has an edge that neighbors a coarser element, even though the neighbors of all faces of that element are of the same refinement level as the element itself. For an example of such a configuration, see Figure <ref>. To greatly simplify the computation of the hanging edges and hanging face constraints later on, we provide Algorithm <ref>, which deals with these specific configurations. §.§ Modification of the constraint matrix In the previous section, we introduced several algorithms to prepare the orientation of the grid to make it easier to adapt the hanging node constraints to general grids. Based on that work, we now modify the constraint matrix. Here we like to point out that the considered enumeration of DoFs is based on the ordering of the edges and vertices as in . The basic concept is the same to extend this work to other FEM software, but one must consider the edges and vertices enumeration of that specific software. §.§.§ Constraints for hanging edges The two most prominent approaches to deal with the additional DoFs originating from the hanging edges and faces are the following. First, one can apply suitable projections and use iterative solvers <cit.>. The second method on which we focus in this work is to impose constraints on the additional DoFs of the refined element by expressing them as a linear combination of the coarse's DoFs in the following way: φ_r = [α_i,j]_i,j^n,m·φ_c, where φ_r is the vector of the basis function on the refined element, φ_c is the vector of basis functions on the coarse element, and α_i,j are the weights between the corresponding basis functions. Figure <ref> considers the most simple example. Here one takes into account that the Nédélec functions are edge-based. Therefore we obtain two DoFs on the coarse element and four DoFs on the refinement element. §.§.§ Resolving the sign conflict on hanging edges We have just introduced the constraint matrix for oriented meshes so far. To extend the implementation of Nédélec elements in , for working with non-oriented meshes, we need to modify the constraint matrix accordingly to the orientation of the mesh. 
Therefore we must compare the refined element's vertex order with the coarse neighbor's vertex order, similar to the Algorithm <ref>. If the vertex order between the refined and coarse neighbors does not match, we must adapt the constraint matrix accordingly. Furthermore, we need to consider to which underlying base function each entry of the constraint matrix belongs. We introduced the base function for the Nédélec elements in detail in section <ref>. Here we need to consider if the underlying basis function is symmetric or anti-symmetric. Since entries that map an anti-symmetric shape-function to a symmetric shape-function and vice versa are multiplied by -1. Entries that map from symmetric shape functions to symmetric shape functions do not change. Also, entries that map from anti-symmetric shape functions to anti-symmetric do not change, as both sign changes cancel each other out. This is summarised in the Tabular <ref>. Given this information, we can formulate the following Algorithm to resolve the sign conflict on hanging edges. Again we want to consider the smallest example possible. Here we have to choose polynomial degree p=2 for the underlying base function. As for p=1, the constraint matrix would not change at all. See Figure <ref>. §.§.§ Constraints for hanging faces So far, we have focused solely on the orientation of hanging edges, which applies to two-dimensional cases. For the extension to three dimensions, we need to consider hanging faces. Hanging faces consist of eight external and four internal lines and four faces, as visualized in Figure <ref>. The coarse element consists of four external lines and one face. Therefore the size of the constraint matrix increases accordingly. As the constraint matrix increases significantly in size for hanging faces, particularly in the first non-trivial case where the polynomial degree is p=2, we only visualize the structure of the constraint matrix in Figure <ref>. It is worth noting that for p=1, there are no DoFs on the faces, rendering this case unsuitable for our study on the structure of hanging faces. §.§.§ Resolving the sign conflict on hanging faces Due to the complexity of the structure of the constraint matrix, we consider the different sub-constraint matrices, i.e., the C_(i,j) in Figure <ref>, independently as this is the natural decomposition of the problem. Thereby we consider each hanging edge and face independently. Based on our prior study of the problem, we can determine which coarse edge and face directions we must consider depending on the current hanging edge or face we want to constrain. This information is also shown in Figure <ref>. Constraints for the outer edges We begin by adapting the signs of sub-constraint matrices that describe the map edges on the coarse element to outer edges, i.e., edges E_4,…, E_11 in Figure <ref>, on the refined element. This is most straightforward as it is analogous to the two-dimensional case discussed in section <ref>. Based on the vertex order, we determine the direction of the edges and then adapt the signs of the corresponding entries in the constraint matrix. Constraints for the faces Next, we discuss how to adapt the constraint matrix for that map to the refined faces F_0, …, F_3. For an edge, there are only two possible configurations (pointing from the left to the right or vice versa). However, in the three-dimensional case, we must consider the x-direction and the y-direction and which direction is prioritized. This results in 2^3=8 possible orientations. 
These are visualized in Figure <ref>. The diagonal arrow denotes whether the x or the y-direction is prioritized. If the diagonal arrow points to the upper left vertex, the x-direction is prioritized. If the diagonal arrow points to the lower right vertex, the y-direction is prioritized. We must modify the constraint matrix accordingly based on the given configuration of the coarse and refined faces. We can geometrically interpret the necessary operations as x-axis inversion, y-axis inversion, and x- and y-axis exchange. These operations are visualized in Figure <ref>. 0.45 0.45 Because of the more complex nature of these operations, we provide them here as high-level pseudo-code. In Algorithm <ref>, we present how to perform an x-inversion on the constraint matrix for a given cell 𝒦. The y-inversion is analogous to the x-inversion. The Algorithm <ref> explains the x- and y-axis exchange. Given a particular configuration of the coarse face, we can now create a look-up table, which of these operations have to be applied to the sub-constraint matrix to map to a particular refined face correctly. For an example of such a look-up table, see Tabular <ref>. Constraints for the inner edges At last, we describe the process of adapting the constraint matrix for the inner edges E_0, …, E_3. We treat this case last, as this configuration is the most complex, requiring extensive modifications of the constraint matrix. As shown in Figure <ref>, the refined interior edges are constrained by all four coarse edges and the coarse face. For the sub-constraint matrices that map from the coarse edges parallel to the refined edge, we are considering. We employ the same approach for the outer edges. Next, we need to apply a similar approach as for the faces, taking into account the direction of the internal edge, which can be either in the x- or y-direction. We must apply the corresponding axis inversion as described above according to the orientation of the internal edge we are currently considering. However, we must deal with one additional case for the inner edges: the sub-constraint matrix mapping from the coarse edges orthogonal to the refined internal edge. This works again similarly to the case of the outer edges. The corresponding sub-constraint matrices must also be adapted according to the direction of the coarse edges parallel to the refined edge. This is shown in the Algorithm <ref>. § MODEL PROBLEM: TIME-HARMONIC MAXWELL'S EQUATIONS AND NUMERICAL SOLUTION Let Ω⊂ℝ^d, d∈{2,3} be a bounded modelling domain with sufficiently smooth boundary Γ = Γ^inc∪Γ^∞, where Γ^∞ is an absorbing boundary condition and Γ^inc is the boundary condition for some given incident electric field. Find the electric field u⃗∈𝐇_curl(Ω){v⃗∈ℒ^2(Ω),  curl(v⃗) ∈ℒ^2(Ω) } such that, {[ curl( μ^-1curl(u⃗) ) - εω^2 u⃗ = 0⃗ in Ω; μ^-1γ^t ( curl( u⃗) ) - i κωγ^T ( u⃗) = 0⃗ on Γ^∞; μ^-1γ^t ( curl( u⃗) ) - i κωγ^T ( u⃗) = 2 i ωγ^T ( u⃗^inc) on Γ^inc, ]. where u⃗^inc:ℝ^d→ℂ^d, d∈{2,3} is some given incident electric field, μ∈ℝ^+ is the relative magnetic permeability, κ = √(ε), ε∈ℂ relative permittivity, ω = 2 π/λ is the wavenumber and λ∈ℝ^+ is the wavelength. 
For the traces we define the space of well-defined surface divergence fields 𝐇_div^-1/2(Γ) {v⃗∈𝐇^-1/2(Γ) : v⃗·n⃗=0,  div_Γv⃗∈𝐇^-1/2(Γ) } and the space of well-defined surface curls 𝐇_curl^-1/2(Γ) {v⃗∈𝐇^-1/2 : v⃗·n⃗=0, curl_Γv⃗∈𝐇^-1/2(Γ) }, then the traces are given by [ γ^t: 𝐇_curl(Ω) →𝐇_div^-1/2(Γ), γ^t(v⃗) = n⃗×v⃗ and; γ^T: 𝐇_curl(Ω) →𝐇_curl^-1/2(Γ), γ^T(v⃗) = n⃗× (v⃗×n⃗) ] where n⃗ denotes the outward normal to Ω. System (<ref>) is called time-harmonic, because the time dependence can be expressed by e^i ωτ, where τ≥ 0 denotes the time. For the derivation of the time-harmonic Maxwell's equations we refer the reader to <cit.>. Before we derive the weak formulation let us recapitulate, that with integration by parts, we can reformulate an integral in the following way ∫_Ωcurl(v⃗) w⃗  x = ∫_Ωv⃗curl( w⃗)   x + ∫_∂Ω( v⃗×w⃗) w⃗  s. Next, we want to derive the weak formulation of equation (<ref>). ∫_Ωcurl(μ^-1curl( u⃗) ) φ⃗  x - εω^2 ∫_Ωu⃗φ⃗  x = 0⃗ (<ref>)⇒ ∫_Ωμ^-1curl( E⃗) curl( φ⃗)   x - εω^2 ∫_ΩE⃗φ⃗  x + ∫_∂Ωμ^-1γ^t ( curl( E⃗) ) φ⃗  s = 0⃗. Finally we apply the definition of the boundaries Γ^∞ and Γ^inc and obtain the weak form, which is given by: Find u⃗∈𝐇_curl(Ω) such that for all φ⃗∈𝐇_curl(Ω) ∫_Ω( μ^-1curl( u⃗) curl( φ⃗) - εω^2 u⃗φ⃗)   x + i κω∫_Γ^∞γ^T ( u⃗) γ^T ( φ⃗)   s = ∫_Γ^incγ^T ( u⃗^inc) γ^T ( φ⃗)   s. Notice that we have chosen the plane wave injection for the incident field <cit.>. The numerical solution of the resulting linear system is rather challenging, as it is ill-posed. So specialized methods have to be employed. A well-known approach to address the time-harmonic Maxwell's equation is based on combining direct solvers and domain decomposition methods <cit.>. Here the basic idea is to divide the problem into small enough sub-problem so that a direct solver can handle each sub-problem. Another approach is to find suitable preconditioners for iterative solvers, for example, with the help of H-matrices. As the computation of such preconditioners is quite challenging, these methods often have to be combined with a domain decomposition method <cit.>. § NUMERICAL TESTS In this section, we present some numerical examples, to demonstrate our implementation of hanging nodes for Nédélec elements especially on non-orientable geometries. Therefore, we consider four examples. §.§ Minimal test case: simple cube As a first proof of concept, we want to compare the results of the new implementation of hanging nodes for Nédélec elements with the existing implementation of hanging nodes for Nédélec elements. We have to consider that the existing implementation only works for orientable grids. We use a simple cube refined once globally as a minimal test case. So it consists of eight cells. Then, one of these cells is refined adaptively. The resulting grid is orientable. Therefore we can use this test case as a first proof of the concept and compare our results with the existing implementation of hanging nodes in . Comparing the results of the existing implementation and our new implementation of hanging nodes for Nédélec elements show no difference. It holds |E_existing - E_new|_∞ < 1e-16. Therefore, we continue with more complicated applications in the following. §.§ Quantitative computational analysis on a simple cube To further validate our new implementation of the hanging nodes, we consider different goal functionals on different (adaptive-)refinement levels, where we use the finest level with 2 080 944 DoFs as numerical reference. 
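For orientation, the discrete problem arising from this weak form has the following structure. The sketch below (SciPy, purely illustrative) assumes that an assembly routine has already produced the real sparse curl-curl matrix K, the mass matrix M, the boundary mass matrix B on Γ^∞, and the right-hand side b_inc from the incident wave on Γ^inc; these names are placeholders of our own, not an existing API. It only shows how the complex indefinite system is formed and handed to a sparse direct solver, which is precisely the step that becomes infeasible for large 3D problems and motivates the domain decomposition approaches mentioned above:

import numpy as np
import scipy.sparse.linalg as spla

def solve_time_harmonic(K, M, B, b_inc, wavelength, eps=1.0, mu=1.0):
    omega = 2.0 * np.pi / wavelength                   # wavenumber
    kappa = np.sqrt(eps)
    # A = mu^-1 K - eps omega^2 M + i kappa omega B   (complex, indefinite)
    A = (K / mu - eps * omega**2 * M + 1j * kappa * omega * B).tocsc()
    # robust for moderate sizes; large problems need DDM or preconditioning
    return spla.splu(A).solve(np.asarray(b_inc, dtype=complex))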
As a benchmark problem, we consider a cylindrical fiber made from SiO_2 with a refractive index of n_SiO_2=2.0257 surrounded by air n_air=1.0000 and an incident wave with a wavelength of λ=375 nm. We evaluate the following three goal functionals: the point value J_P(u) = u(x), the face integral J_F(u) = ∫_f u(s) s, and the domain integral J_D(u) = ∫_ω u(x) x. The results are presented in Table <ref>. In this test, we employ the polynomial degree of the underlying base functions high enough so that all features of the base functions are tested. Therefore, we choose a polynomial of degree p=3. The errors resulting from the sign conflict are visible in the intensity plot. Consequently, we compare in Figure <ref> the intensity plots resulting from the existing implementation of the Nédélec elements in . The intensity plot is computed in the first column with the FE_Nedelec[<https://www.dealii.org/current/doxygen/deal.II/classFE__Nedelec.html>] class, which does not support non-oriented meshes. The resulting intensity distribution differs from the correct solution. The results computed with the existing implementation of the FE_NedelecSZ[<https://www.dealii.org/current/doxygen/deal.II/classFE__NedelecSZ.html>] class are presented in the second column. Here the solution on the uniform refined grid is correct, but on the isotropic refined grid, the solution differs from the correct solution. The result from the here presented extension of the FE_NedelecSZ class is shown in the third column. Here is also the solution on the isotropic refined grid correct. §.§ Silver ball in vacuum To validate our implementation for non-orientable grids, we compute the scattering of a planar electromagnetic wave on a silver ball in a vacuum once via FEM and once with Mie's theory <cit.>, a well-established method for calculating the scattering of electromagnetic waves by spherical particles. For our simulation, we assume a silver ball with a radius of 100nm and a complex refractive index of r_Ag=0.0+4.0i that is hit by an incident planar wave with a wavelength of λ=500 nm and linear polarisation in the x-direction. To compute the scattering of the electric field in three dimensions via FEM, we use Nédélec elements with polynomial order p=2 as basis functions and solve the time-harmonic Maxwell's equations, as presented in chapter <ref>. Also, we employ a domain decomposition of the computational grid by decomposing the grid into four concentric shells. Each shell is further divided into two half-shells, resulting in eight subdomains. Here, using hanging nodes allows us to use adaptive mesh refinement around the silver ball, where the electric field is expected to vary significantly. Thereby we can increase the accuracy of our simulation without adding too many additional DoFs. For the computation of the electric field via Mie's theory, we employed the library scattnlay <cit.>. In Figure <ref>, we are comparing the result obtained by Mie's theory, and once obtained with FEM, we obtain a generally good agreement between the results. However, the computation of the scattered electric field of the silver ball with FEM proves quite challenging. Here we can observe some numerical artifacts at the north and south positions of the nano particle. The computation of the scattering field from nano particles provides a challenging benchmark for FEM, as the results can be validated by comparison with the results from the well-established Mie Theory. Therefore further studies of the nano particle are of interest. 
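As a side note on how such a study can be post-processed, the short sketch below (illustrative only; the input numbers would come from Table <ref> and are placeholders here) computes, for a sequence of refinement levels, the goal-functional errors against the finest-level reference and the observed convergence order between consecutive levels, using the effective mesh size h ~ N_DoF^(-1/3) in three dimensions:

import numpy as np

def convergence_table(j_values, n_dofs):
    # j_values: goal-functional values per refinement level (finest last);
    # n_dofs: corresponding numbers of degrees of freedom
    j_ref = j_values[-1]                               # finest level = reference
    errors = np.abs(np.asarray(j_values[:-1]) - j_ref)
    h = np.asarray(n_dofs[:-1], dtype=float) ** (-1.0 / 3.0)
    orders = np.log(errors[1:] / errors[:-1]) / np.log(h[1:] / h[:-1])
    return errors, orders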
§.§ Laser-written waveguide To test our implementation of hanging nodes for Nédélec elements on non-orientable grids in a practical application in optics simulations, we consider a waveguide created by writing six modifications into a carrier substrate with a laser, causing the substrate to compress in the center. To simulate the behavior of a laser in that waveguide, we use the FEM method, as discussed above again. The geometry is quite complex, so we employ domain decomposition and adaptive mesh refinement. For the simulation, we assume the carrier material to have a refractive index of r_cladding=1.48995 and the compressed center to have a refractive index of r_center=1.4906. The modifications have a distance of 3mum, and the incident laser light has a wavelength of λ=660 nm and is linearly polarised in the x-direction. § CONCLUSION In this work, we considered the sign conflict, specifically in scenarios where hanging nodes are present. We provide a comprehensive guide in terms of mathematical derivations and algorithmic designs for resolving this sign conflict. These concepts can be applied to any software package that supports Nédélec elements and locally refined meshes on quadrilaterals or hexahedra with hanging nodes. Our choice is as a programming platform that is highly accessible and user-friendly. The new implementation was demonstrated for four numerical experiments that include qualitative comparisons in two and three spatial dimensions as well as brief computational convergence studies. Finally, a current practical example from optics simulations showing a laser-written wave-guide is presented. § ACKNOWLEDGMENTS This work is funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany’s Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453). Furthermore, we would like to thank Tim Haubold and Philipp König for many fruitful discussions and Clemens Pechstein for tips on how to find mistakes in the implementation.
http://arxiv.org/abs/2306.12244v1
20230621130416
Discovering Intrinsic Spatial-Temporal Logic Rules to Explain Human Actions
[ "Chengzhi Cao", "Chao Yang", "Shuang Li" ]
cs.CV
[ "cs.CV" ]
We propose a logic-informed, knowledge-driven modeling framework for human movements by analyzing their trajectories. Our approach is inspired by the fact that human actions are usually driven by their intentions or desires, and are influenced by environmental factors such as the spatial relationships with surrounding objects. In this paper, we introduce a set of spatial-temporal logic rules as knowledge to explain human actions. These rules will be automatically discovered from observational data. To learn the model parameters and the rule content, we design an expectation-maximization (EM) algorithm, which treats the rule content as latent variables. The EM algorithm alternates between the E-step and M-step: in the E-step, the posterior distribution over the latent rule content is evaluated; in the M-step, the rule generator and model parameters are jointly optimized by maximizing the current expected log-likelihood. Our model may have a wide range of applications in areas such as sports analytics, robotics, and autonomous cars, where understanding human movements is essential. We demonstrate the model's superior interpretability and prediction performance on pedestrian and NBA basketball player datasets, both achieving promising results. § INTRODUCTION For a human, although the exhibited movements can be complex, the logic behind the actions is usually simple, clear, and can be generalized. Logic rules present a compact and high-level knowledge representation, defining what actions tend to be executed under what conditions. There has been great interest, and business value, in unveiling the logic behind human actions from observed movements <cit.>. We provide two motivating examples below. In sports analytics, understanding each player's behavior preferences or tendencies under various scenarios will provide coaches with valuable information <cit.>. Usually, coaches need to watch game or training videos for hundreds of hours before they can summarize the discoveries into compact principles. Could we design an algorithm to synthesize these principles from the raw action data automatically? One can imagine that such a tool would significantly reduce the workload of coaches, providing more granular insight into each player's capabilities and strategies, and aiding in personalized training and match strategy design <cit.>. In self-driving cars, it is essential to enable the cars to "read the human's mind" like humans do. This requires the self-driving cars to automatically understand human intentions and reasoning when they are running on the same roads as human drivers <cit.>. If self-driving cars can automatically distill logic rules from the observed low-level, noisy human actions and movement trajectories, it will increase the technical reliability and accelerate the widespread use of self-driving cars. For human actions, many of the governing rules concern the spatial-temporal relation with the surrounding environment and the agent's intentions <cit.>. For example, when a basketball player with a ball is within the scoring range, his/her action, such as shoot, pass, or triple threat, is influenced by historical and current surrounding factors, such as the current locations of the player and the defenders, the elapsed time of the game, the success shooting rate of the player in today's game, and so on. 
The quick decision made by the player is actually reflecting a composition of all these factors, which can be described as a collection of spatial-temporal logic rules in our model. Formally, in our introduced spatial-temporal logic rules, the logic variable (i.e., predicate) set will include spatial-temporal relation predicates, in addition to the commonly defined object property and relation predicates. The rule content will capture the spatial relation of the object with surrounding objects, as well as the temporal ordering constraints of the events. Our methods have the following distinct features: From the modeling perspective: (i) Our human action model is a rule-based probabilistic model, which treats each hidden rule as a “soft" constraint. We assume each rule will be executed by humans with probabilities, and this tolerates the uncertainties in data. (ii) Our model directly uses low-level, fine-grained, and (may) irregularly-spaced action times and locations (i.e., original 3d coordinates) as inputs, as opposed to other rule-based models, where one needs to first extract relational data as inputs. (iii) Our spatial and temporal predicates are also probabilistic. For predicates such as “left of" or “before", we model them as kernel functions with learnable variables. In this way, our introduced spatial-temporal predicates are smooth functions of the input locations and times, which increases model flexibility. From the learning perspective: We propose a tractable and differentiable algorithm that can jointly learn the rule content and model parameters from observational data. The learning framework is designed to maximize the likelihood of the human action trajectories. Specifically, we propose to use a neural rule generator to generate the spatial-temporal logic rule set. Our continuous rule generator parameters will be optimized in a differentiable way. The overall learning procedure is an expectation-maximization (EM) algorithm, where we treat the rule set as latent variables. In the E-step, the posterior distribution over the latent rule set is evaluated. In the M-step, both the rule generator parameters and the model parameters are optimized by maximizing the expected log-likelihood with respect to the current posterior. We demonstrated the promising performance of our model in terms of human action prediction and explanation on two interesting real datasets. § RELATED WORK Logic Rule Learning. Learning logic rules from raw data has been widely studied for various downstream tasks, such as motion inference <cit.> and healthcare analysis <cit.>. Learning rules via an exact search requires enumerating all combinations of the logic predicates and is intractable in most problems. One has to design heuristic searching algorithms by leveraging some structural properties of the problems. For example, Dash et al. <cit.> formulated a convex rule learning problem and proposed a column generation algorithm to expand the rule set gradually. Wang et al. <cit.> designed a Bayesian framework for learning rule classifiers and derived bounds on the support of rules in a MAP solution. Recently, Yang et al. <cit.> proposed an interesting end-to-end differentiable approach (Neural LP) to learn the parameters and structure of logical rules. Qu et al. <cit.>, and Sadeghian et al. <cit.> proposed an efficient logic rule mining algorithm based on the knowledge graph data. 
However, none of these advanced rule mining methods can directly work on spatial-temporal human action data when the inputs are raw event 3d coordinates and types. Spatio-Temporal Dynamics for Event Data. Since the human actions are irregular spatial-temporal event data, we also briefly discuss probabilistic models for such event sequences. Modeling the spatial-temporal dynamics of discrete events is foundational in many scientific fields and applications <cit.>. Shen et al. <cit.> proposed a novel deep learning model for spatial-temporal events such as taxi data and achieved promising prediction accuracy. Zhou et al. <cit.> integrated deep learning methods with spatiotemporal point processes and modeled the intensity function as a latent stochastic process. Chen et al. <cit.> deployed two novel architectures, including jump and attentive continuous-time normalizing flows, to learn the dynamics of the spatiotemporal event data. Repe et al. <cit.> learned canonical spatiotemporal point cloud representation using a latent ODE and continuous normalizing flows to generate shapes continuously in spacetime. However, these spatial-temporal event models are governed by hard-to-interpret dynamic functions and cannot be generalized to model human action events. Could we propose a model with logic-informed dynamic functions to explain the spatial-temporal human action events? § OUR MODEL §.§ Data: Human Actions Recorded as Spatial-Temporal Event Sequences Consider a set of objects, denoted as 𝒞. For the object c ∈𝒞, its trajectories and the key actions observed up to t can be summarized as a sequence of temporally ordered events ℋ^c_t- = { e^c_1=(t^c_1, s^c_1, κ^c_1), …, e^c_n = (t^c_n, s^c_n, κ^c_n) | t^c_n < t}, where t ∈ R^+ is the time, s∈ R^2 is the location, and κ∈𝒦 is the event (i.e., action) type. §.§ Definition of Spatial-Temporal Predicates Static Predicate Given the object set 𝒞, the predicate is defined as the property or relation of objects, which is a logic function as follows X(·):𝒞×𝒞···×𝒞↦{ 0,1 }. For example, Smokes(c) is a property predicate and Friend(c, c') is a relation predicate. Spatial-Temporal Predicate In our paper, we extend the above static predicates to spatial-temporal predicates, which include spatial-temporal property predicates and spatial-temporal relation predicates. Specifically, the spatial-temporal property predicates are defined as X(·): 𝒞×…×𝒞×𝒯×𝒮↦{ 0,1 }. For example, PickupKey (c, t, s) is a spatial-temporal property predicate. Suppose an entity c_1 picked up the key at time t_1 in location s_1, then the predicate will be grounded as True (1) at (c_1, t_1, s_1), i.e., PickupKey(c_1, t_1, s_1)=1; otherwise it is False (0). Given the observational human action data, the grounded predicate {PickupKey (c, t, s) }_t=1,2,… can be modeled as a sequence of discrete events – when the predicate becomes True, an event happens. In general, the grounded spatial-temporal property predicate {X(v_t)}_t =1,2,… is a discrete event sequence, where the event occurrence times and locations are irregular. The spatial-temporal relation predicates are introduced to define the spatial and temporal relations of two entities. Specifically, they are defined as R(·, ·): (𝒞×𝒯×𝒮 ) × (𝒞×𝒯×𝒮) ↦{ 0,1 }. Spatial-temporal relation predicates are logic variables indicating the spatial-temporal relations of two objects, where we further divide them into temporal relation predicates, static spatial relation predicates, and dynamic spatial relation predicates. More details can be found in Appendix. 
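The following sketch grounds the predicates defined above on a toy action history. It is purely illustrative: the event types, entity names, the time tolerance, and the distance-based relation are our own choices for the sketch, not part of the model specification:

from dataclasses import dataclass
import math

@dataclass
class Event:
    entity: str
    t: float                 # time
    s: tuple                 # 2d location
    kind: str                # action/event type kappa

history = [
    Event("player1", 1.0, (0.0, 0.0), "PickUpBall"),
    Event("player2", 1.5, (1.0, 0.5), "Screen"),
    Event("player1", 2.0, (0.5, 0.2), "Shoot"),
]

def property_pred(kind, entity, t, history, tol=1e-6):
    # grounded spatial-temporal property predicate: True iff an event of this
    # type occurred for this entity at (approximately) time t
    return any(e.entity == entity and e.kind == kind and abs(e.t - t) < tol
               for e in history)

def before(t1, t2):
    # temporal relation predicate
    return t1 < t2

def close_to(s1, s2, radius=1.0):
    # a static spatial relation predicate based on Euclidean distance
    return math.dist(s1, s2) <= radius

print(property_pred("PickUpBall", "player1", 1.0, history))   # True
print(before(1.0, 2.0), close_to((0.0, 0.0), (0.5, 0.2)))     # True True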
It is noteworthy that all these boolean predicates can be converted to probabilistic ones. We can soften these logic functions by kernel functions with learnable parameters to tolerate uncertainties in data. §.§ Definition of Spatial-Temporal Logic Rules r0.3 < g r a p h i c s > Illustration of feature construction using a simple logic formula with temporal relation predicate (t_1 < t_2), f:Y ← A B C (A Before B). The rule defines the template to gather combinations of the body predicate history events. Here predicate A has 2 events and predicate B has 1 event, the temporal relation constraint would lead to valid combinations (also called “paths"). This type of feature construction can be extended to spatial-temporal cases, where we count the valid paths as the feature. We will consider spatial-temporal logic rules where the body part contain spatial-temporal predicates as relation constraints. For example, a sensible rule will look like f: Y_TurnAround(c, t, s) ← X_PickUpKey ( c, t, s) ⋀ R_InFront ((c', t, s'), (c, t, s)) ⋀ R_Behind ((c”, t, s”), (c, t, s)) where c∈𝒞_person, c'∈𝒞_block, and c”∈𝒞_key. In general, the spatial-temporal logic rule in our paper is defined as a logical connectives of predicates, including property predicates and spatial-temporal relation predicates, f: Y(v) ←⋀_X_property∈𝒳_f X_property(v') ⋀_R_spatial-temporal∈ℛ_f R_spatial-temporal(v”,v) where Y(v) is the head predicate evaluated at the entity-time-location triplet v, 𝒳_f is the set of property predicates defined in rule f, and ℛ_f denotes the set of spatial-temporal relation predicates defined in rule f. §.§ Logic-Informed Action Event Models We consider a setting where we can fully observe the trajectories of all the moving objects, including their real-time locations and key actions (i.e., events), denoted as ℋ_t. We aim to propose a logic-informed spatial-temporal model to predict and explain the action type given the entity-time-location triplet v = (c, t, s) (i.e., query) and ℋ_t. Logic-informed feature The main idea is to construct the model features using spatial-temporal logic rules, as defined in Eq. (<ref>). Intuitively, given the entire trajectories ℋ_t and the query v=(c, t, s), the body part of the rule defines the evidence to be selectively gathered from history to deduce the event type for query entity v=(c, t, s). Assume that for each possible event type κ∈𝒦, there exist multiple rules such as Eq. (<ref>) to explain its occurrence, with κ being the head predicate. Given an individual rule as Eq. (<ref>), we propose to build the feature that is conditional on history and query as φ_f(κ |v, ℋ_t) = sign(κ∈ f)·∑_path∈{ℋ_t, v} g_f(path), where we introduce a function g_f(·) to check the body conditions of f given a “path". We use a simple example to explain how to compute features, as shown in Figure <ref>. As illustrated, the feature computes the valid total number of “paths" given the data and query. Suppose there is a rule set ℱ_κ, where the event κ is the head predicate. All the rules will play together to reason about the occurrence of κ. For each f ∈ℱ_κ, one can compute the features as above. Given the rule set ℱ_κ, we model the probability of the event κ as a log-linear function of the features, i.e., p(κ | v, ℋ_t) ∝exp( ∑_f ∈ℱ_κ w_f·ϕ_f(κ| v, ℋ_t) ), where w=[w_f]_f ∈ℱ≥ 0 are the learnable weight parameters associated with each rule. All the model parameters can be learned by maximizing the likelihood, which can be computed using the above Eq. (<ref>). 
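The path-counting feature and the log-linear evaluator can be made concrete with a small sketch for the toy rule of the figure (Y ← A ∧ B with A before B). For simplicity only event times are used, the sign(κ∈f) factor is taken as +1, and the helper names are illustrative assumptions.

import itertools, math

def count_valid_paths(events_by_pred, temporal_constraints):
    # Feature for one rule: count combinations ("paths") of body-predicate events
    # that satisfy all pairwise temporal relation constraints.
    # events_by_pred: {"A": [t1, t2, ...], "B": [...]}, grounded event times per predicate
    # temporal_constraints: list of (p, q, relation) with relation in {"before", "after"}
    preds = sorted(events_by_pred)
    count = 0
    for combo in itertools.product(*(events_by_pred[p] for p in preds)):
        times = dict(zip(preds, combo))
        ok = all((times[p] < times[q]) if rel == "before" else (times[p] > times[q])
                 for p, q, rel in temporal_constraints)
        count += ok
    return count

def event_type_probability(features, weights):
    # Log-linear evaluator: p(kappa | v, H_t) over candidate event types
    logits = {k: sum(w * f for w, f in zip(weights[k], features[k])) for k in features}
    z = sum(math.exp(v) for v in logits.values())
    return {k: math.exp(v) / z for k, v in logits.items()}

# Toy example from the figure: A has 2 events, B has 1, constraint "A before B"
phi = count_valid_paths({"A": [1.0, 3.0], "B": [2.0]}, [("A", "B", "before")])
print(phi)  # 1 valid path in this toy data (A at t=1.0 precedes B at t=2.0)
probs = event_type_probability({"Y": [phi], "Other": [0.0]}, {"Y": [1.5], "Other": [1.5]})
print(probs)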
We intend to train a rule generator p_θ and an evaluator p_w to maximize the likelihood of training data as: max_θ,w𝒪(θ, w) = 𝔼_(κ, v,ℋ_t) [log𝔼_p_θ [p_w(κ|v, ℋ_t)]]. More details can be found as follows. § OUR LEARNING ALGORITHM Our goal is to jointly learn the set of spatial-temporal logic rules {ℱ_κ}_κ∈𝒦 and their weights by the maximum likelihood method, where each rule has a general form as Eq. (<ref>). To discover each rule, the algorithm needs to navigate through the combinatorial space considering all the combinations of the property predicates and their spatial and temporal relations. To address this computational challenge, we propose a tractable (functional) EM algorithm that treats the rule set as latent variable z. The rules will be generated by a hidden neural rule generator. The overall learning framework alternates between an E-step, where the posterior distribution of the latent rule space is evaluated (rule generation), and the M-step, where the model parameters and rule generator parameters are optimized. Please refer to Fig. <ref> for an illustration. Our goal is to maximize the likelihood of the observed human action events {κ^(i)}_i =1, …, n. Using the chain rule, we have log p_w({κ^(i)}_i =1, …, n)=∑_i=1^nlog p_w(κ^(i)| v^(i), ℋ_t^(i-1)). To simplify the notation, we will use p_w(κ^(i)) to stand for p_w(κ^(i)| v^(i), ℋ_t^(i-1)) in the following. Given a latent rule set z, we have to marginalize the posterior of z to get the above log-likelihood. However, the exact inference of z is intractable. We will introduce an amortized recognition network p_θ(z|κ^(i)) to approximate the true posterior. We have log p_w(κ^(i))= D_KL(p_θ(z|κ^(i))|| p_w(z|κ^(i))) + ℒ(θ,w;κ^(i)), where the first term is the KL divergence of the approximate from the true posterior, and the second term ℒ(θ,w;κ^(i)) is the variational lower bound (ELBO). It can be represented as: ℒ(θ,w;κ^(i))= -D_KL(p_θ(z|κ^(i))||p_w(z)) +𝔼_p_θ(z|κ^(i))[log p_w(κ^(i)|z)]. And log p_w(κ^(i)) ≥ℒ(θ,w;κ^(i)). The bound becomes tight when the approximate posterior matches the true one. Our goal is to optimize the variational parameters θ and model parameters w from the ELBO lower bound. §.§ Rule Generator We deploy Transformer-based framework to model the rule generator p_θ. We define the distribution of a set of rules as follows: p_θ(z | v, ℋ_t) = Ψ(z|N,Trans_θ(v, ℋ_t)), where Ψ(·) is multinomial distributions, N is the number of the top rules, and Trans_θ(v, ℋ_t) defines a distribution over compositional rules with spatial-temporal states. The generative process of the rule set is quite intuitive, where we simply generate N rules to form z. In fact, this p_θ(z | v, ℋ_t) is a flexible posterior approximation function, which will be optimized by the EM type algorithm. We choose transformer over graph neural network (GNN) as our baseline because transformer architectures are based on a self-attention mechanism that is able to capture long-range relationships, as opposed to recurrent neural networks that process sequence elements recursively and can only take into account short-term context. Note that the graph operations in GNN are designed to learn node representations on the fixed and homogeneous graphs. The limitations especially become problematic when learning representations on a changeable graph that consists of various types of nodes and edges. §.§ Rule Evaluator Eq. (<ref>) is our rule evaluator (suppose we know the rule content). 
Here we assume the rule content is latent, and the rule evaluator is given as p_w(κ | v, z, ℋ_t) = exp( ∑_f ∈ z_κ w_f·ϕ_f(κ| v, ℋ_t) )/∑_κ'exp( ∑_f ∈ z_κ' w_f·ϕ_f(κ'| v, ℋ_t) ). §.§ Optimization We optimize the rule generator p_θ and reasoning evaluator p_w to maximize the objective in Eq. (<ref>). At each training iteration, we first update the reasoning predictor p_w according to some rules generated by the generator, and then update the rule generator p_θ. In our network, the latent rule set will be automatically discovered. The best set of logic rules is approximately obtained by sampling and preserving the top-K rules according to their posterior probabilities. Specifically, as shown in Eq. (<ref>), the posterior probabilities of the latent rule z is obtained by a Transformer type of encoder, which maps the input observed action trajectories to a latent explanatory rule space. Each candidate rule is generated in the latent rule space token-by-token (token means logic variable/predicate in our setting) in a sequential manner and meanwhile the posterior probability of each rule sequence can be evaluated. When optimizing the evaluator, we draw several rules ẑ for each query and let the evaluator use ẑ to predict κ. For each query, we aim to identify top K rules z_I from all generated rules ẑ, i.e., z_I ⊂ẑ, |z_I|=K. It is accomplished by taking into account the posterior probabilities of each subset of logic rules z_I with prior from the rule generator p_θ and likelihood from the reasoning predictor p_w. Then, the likely set of high-quality rules can be obtained by sampling from the posterior. Specifically, when a series of rules produced from the rule generator p_θ, we calculate the weights of each rule z^(i) as follows: H(z^(i))={p_w(κ|z^(i)) - 1/|𝒜|} + logTrans_θ(z^(i)|v, ℋ_t), where 𝒜 is the set of all candidate event type inferred by logic rules. Trans_θ(z^(i)|v, ℋ_t) is the probability of rule computed by the generator. For a subset of rules z_I ⊂ẑ, the log-probability can be approximated as: log p_θ,w(z_I|v,ℋ_t) ≈∑_z^(i)∈ z_I H(z^(i)) + logΨ(z_I|N,Trans_θ(v, ℋ_t)) + const. This equation inspired us to use the distribution q(z_I)∝exp(∑_z^(i)∈ z_I H(z^(i)) + logΨ(z_I|N,Trans_θ(v, ℋ_t))) as approximation of the posterior. Each rule z^(i) sampled from q(z_I) independently can be formed with N logic rules. Clearly, H(z^(i)) can be regarded as the quality of candidate rules, with consideration of the evaluator p_w. It is calculated as the contribution of a rule to the correct event type minus the average contribution of this rule to the other candidate responses. A rule is more significant if it obtains a higher score to the correct event type and a lower score to other potential predictions. After getting several high-quality rules from training data, we further utilize these rules to update the parameters of rule generator p_θ. Concretely, we regard the generated high-quality rules as part of training data, and update the rule generator by maximizing the log-likelihood as follows: 𝒪(θ) =log p_θ(z_I|v,ℋ_t) =∑_z^(i)∈ z_IlogTrans_θ(v,ℋ_t)+const. By learning to generate high-quality rules, the rule generator will reduce the search area and produce better empirical results for the reasoning predictor. § EXPERIMENTS In this section, we provide some implementation details and show ablation studies as well as visualization to evaluate the performance of our framework. 
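To illustrate the rule-quality scoring and top-K filtering described in this section, the sketch below evaluates H(z^(i)) = p_w(κ|z^(i)) − 1/|𝒜| + log Trans_θ(z^(i)|v, ℋ_t) for a few toy candidates and keeps the highest-scoring ones. The numbers are made up, and treating the selection as a deterministic top-K (rather than sampling from q(z_I)) is a simplifying assumption.

import math

def rule_score(p_correct, num_candidates, log_prior):
    # H(z_i): evaluator probability assigned to the correct event type, minus the
    # uniform baseline 1/|A|, plus the generator log-probability of the rule
    return (p_correct - 1.0 / num_candidates) + log_prior

def select_top_k(candidate_rules, K=5):
    # E-step style selection: keep the K highest-scoring generated rules
    return sorted(candidate_rules, key=lambda r: r["score"], reverse=True)[:K]

candidates = [
    {"name": "r1", "score": rule_score(0.70, 3, math.log(0.30))},
    {"name": "r2", "score": rule_score(0.40, 3, math.log(0.20))},
    {"name": "r3", "score": rule_score(0.55, 3, math.log(0.10))},
]
print([r["name"] for r in select_top_k(candidates, K=2)])  # -> ['r1', 'r2']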
We compare our model with several state-of-the-art approaches, including PECNet <cit.>, NMMP <cit.>, STGAT <cit.>, SOPHIE <cit.>, STAR <cit.>, Y-Net <cit.>, MID <cit.>, Social-SSL <cit.>, and NSP-SFM <cit.>. §.§ Datasets Stanford Drone Dataset. This dataset consists of more than 11,000 pedestrians in 20 scenes captured on the Stanford University campus in bird's-eye view. We follow the standard train-test split of <cit.> and predict the future 4.8s (12 frames) using the past 3.2s (8 frames). Note that the SDD dataset does not provide explicit pedestrian actions. Instead, we derive them as an abstract encoding of each pedestrian's speed and location. The action set is [left, right, straight, turn around]. NBA SportVU Dataset. This dataset is collected by the NBA using the SportVU tracking system, which records the trajectories of the ten players and the ball in real basketball games. Each trajectory contains the 2D positions and velocities of the offensive team, consisting of 5 players. We predict the future 10 timestamps (4.0s) based on the historical 5 timestamps (2.0s). Each player's action set is [left, right, straight, turn around, pass, shoot]. §.§ Metrics We adopt two metrics for evaluation: Average Displacement Error (ADE_K) and Final Displacement Error (FDE_K). Specifically, ADE_K is the minimum among the K average distances between the K predicted trajectories and the ground truth, computed over the whole trajectory. FDE_K is the minimum distance among the K predicted endpoints to the ground-truth endpoint. Moreover, we also report the accuracy and F1 score of the event types predicted by each network. §.§ Quantitative Analysis We compare our method with several state-of-the-art approaches; Table <ref> presents the quantitative results on the SDD dataset. The proposed model achieves the best performance in ADE, FDE, and accuracy. We observe that our method significantly outperforms all baselines measured by ADE and FDE. It achieves an ADE of 6.41 and an FDE of 10.23 at K=20 on the SDD dataset, exceeding the previous state-of-the-art Y-Net <cit.> by 18.3% on ADE and 13.6% on FDE. On the NBA dataset, our method also outperforms Y-Net. This is because Y-Net assumes that the waypoint lies on a straight line segment connecting the sampled goal and the past trajectory, and then uses a multivariate Gaussian prior centered at the assumed location. This assumption does not hold in more complex settings, such as the trajectories of players in the NBA dataset. Compared with MID <cit.>, we also obtain 15.7% ADE and 28.4% FDE improvements. Note that MID carefully designs a Transformer-based architecture to model the temporal dependencies in trajectories, but ignores the spatial correlation between agents. Our Transformer-based network instead generates high-quality logic rules over spatial-temporal relations, trained to maximize the likelihood of the observed human actions. NSP-SFM obtains high performance on the SDD dataset but cannot reach the same level on the NBA dataset. It combines physics with deep learning for trajectory prediction and accommodates arbitrary physics models. Its limitation is that it relies on scenario-specific physics models, such as pedestrian dynamics, and is deterministic, so it cannot handle strategy-driven settings such as basketball and football games.
Our logic-learning method, in contrast, uses a set of spatial-temporal logic rules, with intention variables involved, as principles to model the dynamics of human actions, and is not restricted to specific conditions. Furthermore, the proposed model achieves the best F1 score, a balanced metric that considers both recall and precision. This is because the rule generator and the evaluator collaborate to reduce the search space and learn better rules. More experiments and ablation studies can be found in the Supplementary Material. §.§ Visualization As mentioned before, pedestrian trajectory prediction is a complex problem because we have to consider the spatial-temporal properties of each pedestrian in the scene. Pedestrians in crowded scenes may have complex interactions, exhibiting different motion modes, including forming groups, following other pedestrians, and changing directions to avoid collisions. We show qualitative results for trajectory prediction on the SDD dataset in Figure <ref>. The second column highlights the most likely predicted goal over the following 30 seconds, where warmer colors indicate higher probability; sampled goals are marked with orange crosses. The last column shows our predicted trajectories compared with the ground truth. We observe that Y-Net predicts diverse scene-compliant trajectories, covering both future goal and path modalities. Our predicted trajectories are closer to socially acceptable trajectories and exhibit more stable behaviors between group members. §.§ Estimated Intention We provide a qualitative evaluation of the estimated distributions of each player's intention on the NBA dataset in Figure <ref>. These distributions represent spatial-temporal relations, which can be estimated from examples and used to manipulate a scene in order to fulfill spatial relations specified in verbal commands. Yellow regions have higher values, while purple regions have values close to zero. For the relations left of, right of, in front of, behind, and on the other side of, one can see how the entities change the angular spread of the distributions. Interestingly, for these intentions, the distributions seem to complement the affirmative exemplars mainly in terms of direction and distance. As a consequence, these distributions still represent locations in the area around the reference object. §.§ Generated Logic Rules We visualize and explain a generated logic rule and the corresponding actions from the NBA dataset in Figure <ref>. The static spatial relation predicate indicates that one player is to the left of another, and the dynamic spatial relation predicate indicates that one player is moving away from another. These logic rules are meaningful and diverse. In this example, player A is defended by two players and passes the ball to player B. Player B then drives forward with a crossover to bypass three defenders and shoots at the basket. The generated rules capture this offensive strategy. In fact, our framework can adapt to complex motions, such as cutting toward the ball, because the spatial-temporal predicates are fed into the rule generator and evaluator to obtain high-quality rules that explain the players' intentions. § LIMITATION It is challenging to define complex predicates with richer semantics for different datasets. Given a more informative dataset, our method can discover more principle-like, complex rules.
Our motivation is to consider the spatial-temporal relations between pedestrians and to generate high-quality logic rules that explain their behaviors. Although we only consider simple actions in our experiments, they already help in understanding the principles behind the behaviors of biological agents. Our framework is suitable for more complex settings, provided more sophisticated action predicates can be obtained from the data. § CONCLUSION We proposed a framework for learning intrinsic spatial-temporal logic rules for explaining human actions. We regard logic rules as latent variables, and the rule generator as well as the rule evaluator are jointly learned with an EM-based algorithm. In the experiments, our method analyzes the movement sequences of pedestrians and players and obtains novel insights from the generated logic rules. In the future, we plan to incorporate other physical laws into the model, such as the conservation of energy and momentum, to enhance its robustness. § APPENDIX § MODEL DEFINITION Temporal relation predicates: They define the temporal relations of two action events. We consider three types of temporal relation predicates {before, equal, after}: R_Before(t_1, t_2)= 1{t_1-t_2 <0}, R_After(t_1, t_2)= 1{t_1-t_2 > 0}, R_Equal(t_1, t_2) = 1{t_1=t_2}. We treat a temporal relation predicate either as a boolean variable or as a real-valued function. If the time information is imprecisely recorded, we can parameterize the temporal relation predicates as temporal kernel functions of t_1, t_2 with learnable parameters that map to [0, 1]. Static spatial relation predicates: They define the spatial relations of two objects, such as {left, right, in front, behind, far from, inside}. Take "left" for example, R_left(s_1, s_2)= 1{ϵ < ‖s_1-s_2‖ < L}· 1{atan2(s_1-s_2) ∈ (3π/4, π] ∪ [-π, -3π/4)}. The other relations can be represented in the same way. We either treat the static spatial relation predicates as boolean variables, or parameterize them as spatial kernel functions of s_1, s_2 with learnable parameters that map to [0, 1]. Dynamic spatial relation predicates: They define the dynamic spatial relations of two objects, such as {closer to, farther away}. For example, R_CloserTo((c_1, t, s_1), (c_2, t, s_2))= 1{∂ d/∂ t <0}, R_FartherAway((c_1, t, s_1), (c_2, t, s_2))= 1{∂ d/∂ t >0}, where d = ‖s_1-s_2‖. One can freely define other types of spatial-temporal predicates; we only provide concrete examples above to illustrate the idea. § ESTIMATED TRAJECTORIES IN NBA DATASET We also compare the qualitative results of the trajectories predicted by NMMP and by our method in Figure <ref>. Our method generates more precise predictions than NMMP. Our approach also adapts to a changing number of agents: intuitively, if the distance between two players is too large, they have almost no interaction, so we only need to consider a limited (but changing) number of agents falling within a reasonable neighborhood of an agent, with the region size prespecified. § IMPLEMENTATION We follow the same data preprocessing strategy as PECNet <cit.>. All models were trained and tested on the same split of the dataset, as suggested by the benchmark. We train the network using the Adam optimizer with a learning rate of 0.001 and batch size 16 for 500 epochs.
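To ground the appendix definitions above, a minimal Python sketch of the three kinds of relation predicates is given below. The thresholds ϵ and L, the finite-difference treatment of ∂d/∂t, and the function names are illustrative assumptions.

import math

def r_before(t1, t2):
    # temporal relation predicate: 1{t1 - t2 < 0}
    return int(t1 - t2 < 0)

def r_left(s1, s2, eps=0.1, L=5.0):
    # static spatial relation "left of": bounded distance and angle in (3*pi/4, pi] or [-pi, -3*pi/4)
    dx, dy = s1[0] - s2[0], s1[1] - s2[1]
    d = math.hypot(dx, dy)
    ang = math.atan2(dy, dx)
    in_range = eps < d < L
    in_sector = (3 * math.pi / 4 < ang <= math.pi) or (-math.pi <= ang < -3 * math.pi / 4)
    return int(in_range and in_sector)

def r_closer_to(s1_prev, s1_now, s2_prev, s2_now):
    # dynamic spatial relation "closer to": finite-difference sign of d(distance)/dt
    return int(math.dist(s1_now, s2_now) - math.dist(s1_prev, s2_prev) < 0)

print(r_before(1.0, 2.0),
      r_left((-1.0, 0.1), (0.0, 0.0)),
      r_closer_to((2.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 0.0)))  # -> 1 1 1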
§ COMPARISON OF OPTIMIZATION ALGORITHMS Our framework uses an EM algorithm to optimize the rule generator. In practice, the generator can also be optimized by reinforcement learning. We therefore empirically compare the two algorithms in the w/o emb. case, and the results are shown in Table <ref>. Clearly, our method still achieves better results than reinforcement learning. § THE NUMBER OF TRAINING TRIPLETS To better evaluate the different methods when training triplets are limited, in this section we reduce the amount of training data and observe how the performance varies. The results are presented in Figure <ref>. All methods improve as the number of training triplets increases, and our model achieves the best results. § COMPARISON OF PARAMETERS The parameters and FLOPs of all methods are shown in Table <ref>. With reasonable storage consumption, our method has comparable FLOPs and provides promising performance. § BACKBONE We added ablation experiments on the different components of the approach. For the rule generator and the decoder, we compare ours (Transformer-based) with three widely used backbones, namely CNN, RNN, and GNN (graph neural network), and evaluate them on the NBA dataset. As shown in Table <ref> and Table <ref>, our architecture achieves superior results on all metrics. Moreover, in the E-step, we identify the top K rules from all generated rules, where K is a tunable hyperparameter. We therefore also add a hyperparameter study of K. The results are shown in Table <ref>. The best result appears when K is set to 5; the performance remains almost the same when K is larger than 5, but at the cost of more storage. We therefore set K to 5. § ADDITIONAL RESULTS To make our experimental results more convincing, we further added seven more recent state-of-the-art baselines, all proposed in 2021 or 2022. We also evaluated them on the ETH/UCY dataset, as shown in Table <ref>. These new experimental results show that we still achieve superior results on most metrics. § ROBUSTNESS We added noise and randomly removed several tracks in the NBA dataset, then evaluated all methods in Table <ref>. Note that "Ours" denotes the original results (without noise or missing tracks) reported in the main paper. As we can see, our method still achieves the best performance. Moreover, by comparing "Ours" and "Ours*", we can see that the quality of the tracks has little influence on our method, which demonstrates its robustness.
http://arxiv.org/abs/2306.02544v1
20230605022938
Fourier Test-time Adaptation with Multi-level Consistency for Robust Classification
[ "Yuhao Huang", "Xin Yang", "Xiaoqiong Huang", "Xinrui Zhou", "Haozhe Chi", "Haoran Dou", "Xindi Hu", "Jian Wang", "Xuedong Deng", "Dong Ni" ]
cs.CV
[ "cs.CV" ]
FTTA with Multi-level Consistency for Robust Classification 1National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, China [email protected] 2Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, China 3Marshall Laboratory of Biomedical Engineering, Shenzhen University, China 4ZJU-UIUC Institute, Zhejiang University, China 5Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), University of Leeds, UK 6Shenzhen RayShape Medical Technology Co., Ltd, China 7School of Biomedical Engineering and Informatics, Nanjing Medical University, China 8The Affiliated Suzhou Hospital of Nanjing Medical University, China Huang et al. Fourier Test-time Adaptation with Multi-level Consistency for Robust Classification Yuhao Huang1,2,3Yuhao Huang and Xin Yang contribute equally to this work., Xin Yang1,2,3⋆ Xiaoqiong Huang1,2,3 Xinrui Zhou1,2,3 Haozhe Chi4 Haoran Dou5 Xindi Hu6 Jian Wang7 Xuedong Deng8Dong Ni1,2,3() July 31, 2023 ============================================================================================================================================================================================================ Deep classifiers may encounter significant performance degradation when processing unseen testing data from varying centers, vendors, and protocols. Ensuring the robustness of deep models against these domain shifts is crucial for their widespread clinical application. In this study, we propose a novel approach called Fourier Test-time Adaptation (FTTA), which employs a dual-adaptation design to integrate input and model tuning, thereby jointly improving the model robustness. The main idea of FTTA is to build a reliable multi-level consistency measurement of paired inputs for achieving self-correction of prediction. Our contribution is two-fold. First, we encourage consistency in global features and local attention maps between the two transformed images of the same input. Here, the transformation refers to Fourier-based input adaptation, which can transfer one unseen image into source style to reduce the domain gap. Furthermore, we leverage style-interpolated images to enhance the global and local features with learnable parameters, which can smooth the consistency measurement and accelerate convergence. Second, we introduce a regularization technique that utilizes style interpolation consistency in the frequency space to encourage self-consistency in the logit space of the model output. This regularization provides strong self-supervised signals for robustness enhancement. FTTA was extensively validated on three large classification datasets with different modalities and organs. Experimental results show that FTTA is general and outperforms other strong state-of-the-art methods. § INTRODUCTION Domain shift (see Fig. <ref>) may cause deep classifiers to struggle in making plausible predictions during testing <cit.>. This risk seriously limits the reliable deployment of these deep models in real-world scenarios, especially for clinical analysis. Collecting data from the target domain to retrain from scratch or fine-tune the trained model is the potential solution to handle the domain shift risks. However, obtaining adequate testing images with manual annotations is laborious and impracticable in clinical practice. Thus, different solutions have been proposed to conquer the problem and improve the model robustness. 
Unsupervised Domain Adaptation (UDA) refers to training the model with labeled source data and adapting it with target data without annotation <cit.>. Recently, Fourier domain adaptation was proposed in <cit.>, with the core idea of achieving domain transfer by replacing the low-frequency spectrum of source data with that of the target one. Although effective, they require obtaining sufficient target data in advance, which is challenging for clinical practice. Domain generalization (DG) aims to generalize models to the unseen domain not presented during training. Adversarial learning-based DG is one of the most popular choices that require multi-domain information for learning domain-invariant representations <cit.>. Recently, Liu et al. <cit.> proposed to construct a continuous frequency space to enhance the connection between different domains. Atwany et al. <cit.> imposed a regularization to reduce gradient variance from different domains for diabetic retinopathy classification. One drawback is that they require multiple types of source data for extracting rich features. Other alternatives proposed using only one source domain to perform DG <cit.>. However, they still heavily rely on simulating new domains via various data augmentations, which can be challenging to control. Test-time Adaptation (TTA) adapts the target data or pre-trained models during testing <cit.>. Test-time Training (TTT) <cit.> and TTT++ <cit.> proposed to minimize a self-supervised auxiliary loss. Wang et al. <cit.> proposed the TENT framework that focused on minimizing the entropy of its predictions by modulating features via normalization statistics and transformation parameters estimation. Instead of batch input like the above-mentioned methods, Single Image TTA (SITA) <cit.> was proposed with the definition that having access to only one given test image once. Recently, different mechanisms were developed to optimize the TTA including distribution calibration <cit.>, dynamic learning rate <cit.>, and normalizing flow <cit.>. Most recently, Gao et al. <cit.> proposed projecting the test image back to the source via the source-trained diffusion models. Although effective, these methods often suffer from the problems of unstable parameter estimation, inaccurate proxy tasks/pseudo labels, difficult training, etc. Thus, a simple yet flexible approach is highly desired to fully mine and combine information from test data for online adaptation. In this study, we propose a novel framework called Fourier TTA (FTTA) to enhance the model robustness. We believe that this is the first exploration of dual-adaptation design in TTA that jointly updates input and model for online refinement. Here, one assumption is that a well-adapted model will get consistent outputs for different transformations of the same image. Our contribution is two-fold. First, we align the high-level features and attention regions of transformed paired images for complementary consistency at global and local dimensions. We adopt the Fourier-based input adaptation as the transformation strategy, which can reduce the distances between unseen testing images and the source domain, thus facilitating the model learning. We further propose to smooth the hard consistency via the weighted integration of features, thus reducing the adaptation difficulties of the model. Second, we employ self-consistency of frequency-based style interpolation to regularize the output logits. It can provide direct and effective hints to improve model robustness. 
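Before turning to the methodology, a minimal sketch of the Fourier-based input adaptation at the heart of FTTA may help: the low-frequency amplitude of the test image is replaced by (an interpolation towards) a source-style amplitude while the phase is kept, as formalized in the next section. The NumPy sketch below assumes single-channel images; the function names, the default mask radius, and the full-swap setting λ=1 for the two transformed views are illustrative assumptions.

import numpy as np

def circular_low_pass_mask(h, w, r):
    # binary mask M selecting the radially symmetric low-frequency region (after fftshift)
    yy, xx = np.ogrid[:h, :w]
    return ((yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= r ** 2).astype(float)

def fourier_style_transfer(x_t, x_s, lam=0.5, r=8):
    # replace the low-frequency amplitude of test image x_t with an interpolation towards
    # the source-style amplitude of x_s, keeping the phase of x_t
    F_t, F_s = np.fft.fftshift(np.fft.fft2(x_t)), np.fft.fftshift(np.fft.fft2(x_s))
    A_t, P_t, A_s = np.abs(F_t), np.angle(F_t), np.abs(F_s)
    M = circular_low_pass_mask(*x_t.shape, r)
    A_new = ((1 - lam) * A_t + lam * A_s) * M + A_t * (1 - M)
    F_new = np.fft.ifftshift(A_new * np.exp(1j * P_t))
    return np.real(np.fft.ifft2(F_new))

# two source-like views of the same test image, using two different source styles (full swap)
x_t = np.random.rand(256, 256)
x_src1, x_src2 = np.random.rand(256, 256), np.random.rand(256, 256)
x_t1 = fourier_style_transfer(x_t, x_src1, lam=1.0)
x_t2 = fourier_style_transfer(x_t, x_src2, lam=1.0)

Intermediate values of λ give the style-interpolated images used later for the smoothed consistency and the logit-space regularization.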
Validated on three classification datasets, we demonstrate that FTTA is general in improving classification robustness, and achieves state-of-the-art results compared to other strong TTA methods. § METHODOLOGY Fig. <ref> shows the pipeline of FTTA. Given a trained classifier G, FTTA first conducts Fourier-based input adaptation to transfer each unseen testing image x_t into two source-like images (x_t1 and x_t2). Then, using linear style interpolation, two groups of images will be obtained for subsequent smooth consistency measurement at global features (L_f) and local visual attention (L_c). Furthermore, regularization in the logit space can be computed following the style interpolation consistency in the frequency space (L_s). Finally, FTTA updates once based on the multi-consistency losses to output the final average prediction. Fourier-based Input Adaptation for Domain Transfer. Transferring unseen images to the known domain plays an important role in handling domain shift risks. In this study, instead of learning on multiple domains, we only have access to one single domain of data during training. Therefore, we need to utilize the limited information and find an effective way to realize the fast transfer from the unseen domain to the source domain. Inspired by <cit.>, we adopt the Fast Fourier Transform (FFT) based strategy to transfer the domain information and achieve input adaptation during testing. Specifically, we transfer the domain information from one image to another by low-frequency amplitude (𝒜) swapping while keeping the phase components (see Fig. <ref>). This is because in Fourier space, the low-frequency 𝒜 encodes the style information, and semantic contents are preserved in 𝒫 <cit.>. Domain transfer via amplitude swapping between image x_s to x_t can be defined as: 𝒜^x_t' = ((1-λ) 𝒜^x_t+λ𝒜^x_s) ∘ℳ+𝒜^x_t∘ (1-ℳ), where ℳ is the circular low-pass filtering with radius r to obtain the radial-symmetrical amplitude <cit.>. λ aims to control the degree of style interpolation <cit.>, and it can make the transfer process continues (see Fig. <ref>). After inverse FFT (IFFT, ℱ^-1), we can obtain an image x_t' by ℱ^-1(𝒜^x_t', 𝒫^x_t). Since one low-level amplitude represents one style, we have n style choices. n is the number of training data. The chosen styles for input adaptation should be representative of the source domain while having significant differences from each other. Hence, we use the validation set to select the styles by first turning the whole validation data into the n styles and calculating n accuracy. Then, styles for achieving top-k performance are considered representative, and L2 distances between the C_K^2 pairs are computed to reflect the differences. Smooth Consistency for Global and Local Constraints. Building a reliable consistency measurement of paired inputs is the key to achieving TTA. In this study, we propose global and local alignments to provide a comprehensive consistency signal for tuning the model toward robustness. For global consistency, we compare the similarity between high-level features of paired inputs. These features encode rich semantic information and are therefore well-suited for assessing global consistency. Specifically, we utilize hard and soft feature alignments via pixel-level L2 loss and distribution-level cosine similarity loss, to accurately compute the global feature loss L_f. To ensure local consistency, we compute the distances between the classification activation maps (CAMs) of the paired inputs. 
It is because CAMs (e.g., Grad-CAM <cit.>) can reflect the local region the model focuses on when making predictions. Forcing CAMs of paired inputs to be close can guide the model to optimize the attention maps and predict using the correct local region for refining the prediction and improving model robustness (see Fig. <ref>, c_t1 is encouraged to be closer with c_t2 for local visual consistency). Finally, the distances between two CAMs can be computed by the combination of L2 and JS-divergence losses. Despite global and local consistency using single paired images can provide effective self-supervised signals for TTA in most cases, they may be difficult or even fail in aligning the features with a serious gap during testing. This is because the representation ability of single-paired images is limited, and the hard consistency between them may cause learning and convergence difficulties. For example, the left-upper CAMs of c1 and c2 in Fig. <ref> are with no overlap. Measuring the local consistency between them is meaningless since JS divergence will always output a constant in that case. Thus, we first generate two groups of images, each with four samples, by style interpolation using different λ. Then, we fed them into the model for obtaining two groups of features. Last, we propose learnable integration with parameters u and v to linearly integrate the global and local features. This can enhance the feature representation ability, thus smoothing the consistency evaluation to accelerate the adaptation convergence. Style Consistency for Regularization on Logit Space. As described in the first half of Eq. <ref>, two low-level amplitudes (i.e., styles) can be linearly combined into a new one. We propose to use this frequency-based style consistency to regularize the model outputs in logit space, which is defined as the layer before softmax. Thus, it is directly related to the model prediction. A total of 8 logit pairs can be obtained (see Fig. <ref>), and the loss can be defined as: L_s = (∑_i = 1^2∑_j = 1^4 ||(1-λ_j)*y_log(x_t)+λ_j*y_log(x_ti)-y_log(x_ij)||_2)/8, where x_t and x_ti,i∈1,2 are the testing image and two transformed images after input adaptation. x_ij represents style-interpolated images controlled by λ_j. y_log(·) outputs the logits of the model. § EXPERIMENTAL RESULTS Materials and Implementations. We validated the FTTA framework on three classification tasks, including one private dataset and two public datasets (see Fig. <ref>). Approved by the local IRB, the in-house Fetal-17 US dataset containing 8727 standard planes with gestational age (GA) ranging from 20 to 24^+6 weeks was collected. It contains 17 categories of planes with different parts, including limbs (4), heart (4), brain (3), abdomen (3), face (2), and spine (1). Four 10-year experienced sonographers annotated one classification tag for each image using the Pair annotation software package <cit.>. Fetal-17 consists of two vendors (A&B) and we conducted bidirectional experiments (A2B and B2A) for method evaluation. The Maternal-fetal US dataset named Fetal-8 (GA: 18-40 weeks) <cit.>[<https://zenodo.org/record/3904280#.YqIQvKhBy3A>] contains 8 types of anatomical planes including brain (3), abdomen (1), femur (1), thorax (1), maternal cervix (1), and others (1). Specifically, 10850 images from vendors ALOKA and Voluson (C&D) were used for bidirectional validation (C2D and D2C). 
Another public dataset is a fundus dataset named Messidor, which contains 1200 images from 0-3 stage of diabetic retinopathy <cit.>[<https://www.adcis.net/en/third-party/messidor/>]. It was collected from three ophthalmologic centers (E, F&G) with each of them can treated as a source domain, allowing us to conduct three groups of experiments (E2FG, F2EG and G2EF). Dataset split information is listed in Table <ref>. We implemented FTTA in Pytorch, using an NVIDIA A40 GPU. All images were resized to 256×256, and normalized before input to the model. For the fetal datasets, we used a 1-channel input, whereas, for the fundus dataset, 3-channel input was utilized. During training, we augmented the data using common strategies including rotation, flipping and contrast transformation. We selected ImageNet-pretrained ResNet-18 <cit.> as our classifier backbone and optimized it using the AdamW optimizer in 100 epochs. For offline training, with batch size=196, the learning rate (lr) is initialized to 1e-3 and multiplied by 0.1 per 30 epochs. Cross-entropy loss is the basic loss for training. We selected models with the best performance on validation sets to work with FTTA. For online testing, we set the lr equal to 5e-3, and λ_j,j=1,2,3,4 for style interpolation was set as 0.2, 0.4, 0.6, and 0.8, respectively. We only updated the network parameters and learnable weights once based on the multi-level consistency losses function before obtaining the final predictions. Quantitative and Qualitative Analysis. We evaluated the classification performance using four metrics including Accuracy (Acc, %), Precision (Pre, %), Recall (Rec, %), and F1-score (F1, %). Table <ref> compares the FTTA (Ours) with seven competitors including the Baseline without any adaptation and six state-of-the-art TTA methods. Upper-bound represents the performance when training and testing on the target domain. It can be seen from Upper-bound and Baseline that all the metrics have serious drops due to the domain shift. Ours achieves significant improvements on Baseline, and outperforms all the strong competitors in terms of all the evaluation metrics, except for the Pre in Group B2A. It is also noted that the results of Ours are approaching the Upper-bound, with only 5.31% and 4.40% gaps in Acc. We also perform ablation studies on the Fetal-17 dataset in the last 7 rows of Table <ref>. FTTA-IA denotes that without model updating, only input adaptation is conducted. Four experiments are performed to analyze the contribution of three consistency measurements (-C1, -C2, and -C3 for global features, local CAM, and style regularization, respectively), and also the combination of them (-C). They are all equipped with the input adaptation for fair comparisons. FTTA-C^* indicates replacing the Fourier-based input adaptation with 90^∘ rotation to augment the test image for consistency evaluation. Different from FTTA-C, Ours integrates learnable weight groups to smooth consistency measurement. Experiences show that the naive Fourier input adaptation in FTTA-IA can boost the performance of Baseline. The three consistency variants improve the classification performance respectively, and combining them together can further enhance the model robustness. Then, the comparison between FTTA-C and Ours validates the effectiveness of the consistency smooth strategy. Table <ref> reports the results of FTTA on two public datasets. We only perform methods including Upper-bound, Baseline, and Ours with evaluation metrics Acc and F1. 
Large domain gaps can be observed by comparing Upper-bound and Baseline. All five experimental groups show that the proposed FTTA boosts the classification performance over the Baseline and significantly narrows the gap to the Upper-bound. Note that Messidor is a challenging dataset, with all groups having low Upper-bounds; even a multi-source DG method achieves only 66.70% accuracy on it <cit.>. For the most difficult group (F2EG), Acc drops by 35.13% on the testing set. However, the proposed FTTA adapts well and improves Acc and F1 by 26.30% and 5.96%, respectively. Fig. <ref> shows the CAM results obtained by Ours. The red boxes denote the key regions, like the eyes in (a), which were annotated by sonographers and indicate the region of interest (ROI) carrying discriminant information. We consider that if a model focuses on a region with high overlap with the ROI box, the image is likely to be predicted correctly. The second column visualizes the misclassified results before adaptation. The CAMs show that the focus of the model is inaccurate: the activations are dispersed over the whole image, overlap little with the ROI, or come with low prediction confidence. After TTA, the CAMs are refined and move closer to the ROI, and the predictions are corrected. § CONCLUSION In this study, we proposed a novel and general FTTA framework to improve classification robustness. Based on Fourier input adaptation, FTTA is driven by the proposed multi-level consistency, including smooth global and local constraints as well as self-consistency in the logit space. Extensive experiments on three large datasets validate that FTTA is effective and efficient, achieving state-of-the-art results over strong TTA competitors. In the future, we will extend FTTA to segmentation and object detection tasks. §.§.§ Acknowledgement. This work was supported by the grant from National Natural Science Foundation of China (Nos. 62171290, 62101343), Shenzhen-Hong Kong Joint Research Program (No. SGDX20201103095613036), and Shenzhen Science and Technology Innovations Committee (No. 20200812143441001).
http://arxiv.org/abs/2306.01547v1
20230602135455
Efficient calculation of dispersion energy for multireference systems with Cholesky decomposition. Application to excited-state interactions
[ "Michał Hapka", "Agnieszka Krzemińska", "Marcin Modrzejewski", "Michał Przybytek", "Katarzyna Pernal" ]
physics.chem-ph
[ "physics.chem-ph" ]
[email protected] Faculty of Chemistry, University of Warsaw, ul. L. Pasteura 1, 02-093 Warsaw, Poland Institute of Physics, Lodz University of Technology, ul. Wolczanska 217/221, 93-005 Lodz, Poland Faculty of Chemistry, University of Warsaw, ul. L. Pasteura 1, 02-093 Warsaw, Poland Faculty of Chemistry, University of Warsaw, ul. L. Pasteura 1, 02-093 Warsaw, Poland Institute of Physics, Lodz University of Technology, ul. Wolczanska 217/221, 93-005 Lodz, Poland We propose an algorithm, that scales with the fifth power of the system size, for computing the second-order dispersion energy for monomers described with multiconfigurational wave functions. This scaling can be achieved when the number of virtual (unoccupied) orbitals largely exceeds the number of active orbitals, which is the case in practical calculations. Our approach employs Cholesky decomposition of Coulomb integrals and a recently developed recursive formula for density response functions of the monomers, enabling dispersion energy computations for systems in nondegenerate ground or excited states with arbitrary spin. As a numerical illustration, we apply the new algorithm in the framework of multiconfigurational symmetry adapted perturbation theory, SAPT(MC), to study interactions in dimers with localized excitons. The SAPT(MC) analysis reveals that the dispersion energy may be the main force stabilizing excited-state dimers. Efficient calculation of dispersion energy for multireference systems with Cholesky decomposition. Application to excited-state interactions Katarzyna Pernal July 31, 2023 ============================================================================================================================================ § INTRODUCTION Modelling of dispersion forces is crucial for an accurate representation of noncovalent interactions in molecular systems <cit.> and materials <cit.>. Unfortunately, approaches to calculate the dispersion energy in excited-state complexes have been scarce <cit.>. Semiempirical dispersion energy corrections for density functionals for ground-state complexes generally fail for dimers in excited states <cit.>. So far, two pairwise dispersion approaches have been extended to excited states. First, the local response dispersion (LRD) model of Nakai and co-workers <cit.> was applied to exciton-localized complexes from the S66 dataset <cit.>. Second, Feng et al. <cit.> used the exchange-hole dipole moment (XDM) method of Becke and Johnson <cit.> to obtain van der Waals C_6 coefficients in systems involving inter- and intramolecular charge transfer excitations. The proposed generalizations of both LRD and XDM rely on excited-state electron density extracted from TD-DFT. When ground-state interactions are concerned, accurate values of the dispersion energy can be obtained from single reference symmetry-adapted perturbation theory (SAPT) <cit.> based either on coupled-cluster <cit.> or DFT description of the monomers <cit.>. These methods are not applicable to excited states. Recently, we have developed a wave function-based approach to the dispersion energy in ground and excited states <cit.>, which employs the extended random phase approximation (ERPA) for density response <cit.>. The dispersion energy can be then predicted for any molecular system with a local exciton <cit.>. 
A complete description of noncovalent interactions is accessible when combining our model either with multiconfigurational SAPT <cit.>, SAPT(MC), or with a supermolecular approach based on multiconfigurational self-consistent field (MCSCF), in particular complete active space (CASSCF), description of the dimer <cit.>. In the latter method, named CAS+DISP, the supermolecular CASSCF energy is corrected for the missing part of the dispersion energy. Both SAPT(MC) and CAS+DISP have already proven useful in studying excited-state organic dimers <cit.>. Currently, the bottleneck in both SAPT(MC) and CAS+DISP is the calculation of coupled dispersion and exchange-dispersion energy contributions. The computational cost of both components grows formally with the sixth power of the system size. The goal of this work is to extend the applicability of SAPT(MC) and CAS+DISP methods to larger systems by reducing the scaling of the coupled dispersion energy from m^6 to m^5. For this purpose, a novel algorithm is proposed. It employs a Cholesky decomposition technique and the recently introduced recursive formula for computation of density response functions <cit.>. The m^5 scaling is achievable if the interacting monomers are described with multiconfigurational (MC) wave functions, e.g., CASSCF, and the number of active orbitals is much smaller than that of the virtual ones, which is typically the case. The new developments are applied to study molecular interactions in excited-state organic complexes of larger size than those affordable until now for multiconfigurational dispersion methods. The approach for multireference functions parallels previous works focused on coupled dispersion energy computations for single-reference wave functions. In particular, SAPT based on Kohn-Sham description of the monomers, SAPT(DFT) <cit.>, may employ either the density-fitting (DF) <cit.> or Cholesky decomposition <cit.> techniques. The algorithm of Bukowski et al. <cit.>, recently improved by Xie et al. <cit.>, is most general and applicable to computing density response of the monomers from both local and hybrid functionals. In the case of the exchange-dispersion energy, the computational cost remains as large as m^5 in single-reference SAPT (m^6 in the multireference case) <cit.> even in the DF/Cholesky formulation <cit.>. § DISPERSION ENERGY WITH MULTICONFIGURATIONAL WAVE FUNCTIONS AT THE M^5 COST The spin-summed second-order dispersion formula written in terms of monomer response properties obtained within the Extended Random Phase Approximation (ERPA) reads <cit.> E^(2)_ disp = -16∑_ν∈ A,μ∈ B( ∑_p>q∈ A r>s∈ B[ Ỹ_ν^A]_pq[ Ỹ_μ^B]_rs g_pqrs)^2/ω_ν^A +ω_μ^B , where pqrs are natural orbitals (NOs) of the monomers. Modified two-electron integrals in the NO representation {g_pqrs} are defined as ∀_p>q∈ A r>s∈ B g_pqrs=(n_p^1/2+n_q^1/2) (n_r^1/2+n_s^1/2) ⟨ pr|qs⟩ , where ⟨pr|qs|$⟩ are two-electron Coulomb integrals in the⟨12|12|$⟩ convention, {n_p}_p∈ X denotes a set of natural occupation numbers of monomer X (X=A, B), and it holds that 2∑_p∈ Xn_p=N_X, with N_X being a number of electrons in monomer X. Transition energies {ω_ν^X} and transition vectors {Ỹ_ν^X} follow from the ERPA equation <cit.> 𝒜_+^X 𝒜_-^X Ỹ_ν^X = (ω_ν^X)^2 Ỹ_ν^X , where [𝒜^X_±]_pq,p'q'=([𝒜^X]_pq,p'q'± [ℬ^X]_pq,p'q')/[(n_p^1/2± n_q^1/2)(n_p'^1/2± n_q'^1/2)] are hessian matrices of the monomers (see Ref. ). It should be noted that both the formula for the dispersion energy in Eq. (<ref>) and the ERPA eigenproblem in Eq. 
(<ref>) are applicable to closed and open-shell systems with monomers in arbitrary spin states. For multireference functions based on partitioning of orbitals into the inactive (doubly occupied), active (partially occupied) and virtual (unoccupied) subsets, denoted s_1, s_2 and s_3, respectively, the range of the pq multi-index of [ 𝒜^X_±]_pq,p'q' matrices, under the condition that p>q, can be split into the following subranges p ∈ s_2 ∧ q ∈ s_1 , p ∈ s_3 ∧ q ∈ s_1 , p ∈ s_2 ∧ q ∈ s_2 , p ∈ s_3 ∧ q ∈ s_2 . The same partitioning applies also for the p'q' multi-index. Thus, a straightforward implementation of ERPA requires steps that scale as n_ OCC^3n_ SEC^3, where n_ OCC= M_s_1 + M_s_2 is the number of generalized occupied orbitals and n_ SEC = M_s_2 + M_s_3 is the number of generalized secondary orbitals (M_s_i denotes cardinality of the set s_i). Evaluation of the dispersion energy formula, Eq. (<ref>), shares the same scaling behavior (indices μ,ν run over all n_ OCCn_ SEC eigenvectors). Below we propose an algorithm leading to a lowered, m^5 scaling. Using the integral identity 1/ω_ν^A+ω_μ^B = 2/π∫_0^∞ω_ν^A ω_μ^B/( ( ω_ν^A)^2 + ω^2 ) (( ω_μ^B)^2 + ω^2) dω , ω_ν^A>0,ω_μ^B>0 , and introducing the frequency-dependent matrix C^X(ω) ∀_p>q, p'>q' ∈ X [ C^X(ω)]_pq,p'q' = 2∑_ν[ Ỹ_ν^X]_pq[ Ỹ_ν^X]_p'q'ω_ν^X/( ω_ν^X )^2 + ω^2 , leads to another representation of E^(2)_ disp E^(2)_ disp = -8/π∫_0^∞ dω ∑_p>q,p^'>q^'∈ A r>s,r^'>s^'∈ B [ C^A(ω)]_pq,p^'q^' [ C^B(ω)]_rs,r^'s^' g_pqrs g_p^'q^'r^'s^' . The 𝐂^X(ω) matrix is equivalent to the real part of density linear response function taken with the imaginary argument i ω, see Eq. (33) in Ref. , and is obtained by solving the following equation <cit.> [ 𝒜_+^X𝒜_-^X+ω^21] 𝐂^X(ω)=𝒜_+^X . The modified two-electron integrals g_pqrs of Eq. (<ref>) can be represented via the decomposition g_pqrs=∑_L=1^N_ CholD_pq,LD_rs,L , where vectors 𝐃_L are obtained by Cholesky decomposition of the AO Coulomb matrix followed by transformation to natural-orbital representation and scaling by n_p^1/2+n_q^1/2 factors. The matrix product of 𝐂^X(ω) with 𝐃 yields a reduced-dimension intermediate 𝐂̃^X(ω)=𝐂^X(ω)𝐃 which allows one to write E^(2)_ disp =-8/π∫_0^∞ dω Tr[ 𝐃^T𝐂̃^A(ω) 𝐂̃^B(ω)^T 𝐃] . Notice that the obtained formula applied to excited-state systems would miss the so-called non-Casimir-Polder terms arising from negative transitions ω_ν^X<0 for which the identity in Eq. (<ref>) does not hold. In Ref.  we have shown how to account for such terms explicitly. Upon inspection, we found that non-Casimir-Polder terms are negligible for the studied systems and they are not discussed any further. The key step in the proposed reduced-scaling algorithm for computing dispersion energy with a multireference description of the monomer wave function assumes partitioning of a monomer Hamiltonian Ĥ_X into partially-correlated effective Hamiltonian, Ĥ^(0), and the complementary part <cit.>, Ĥ_X^'=Ĥ_X-Ĥ_X^(0). Then, the parametric representation of the Hamiltonian is introduced Ĥ_X^α = Ĥ_X^(0)+αĤ_X^' , where the Ĥ_X^' operator is multiplied by the coupling constant α∈0,1]. There are two underlying requirements in the Hamiltonian partitioning. The first one is that the wave function describing monomer X is of zeroth-order in α for Ĥ_X^α. The other condition is that scaling of the ERPA equations corresponding to Ĥ_X^α is lowered from m^6 to m^5 at α=0. 
It has been shown that a group-product-function Hamiltonian <cit.> Ĥ_X^(0) satisfies both conditions for the MC wave function based on an ansatz assuming partitioning of orbitals into inactive (doubly occupied), active (partially occupied), and virtual (unoccupied). Notice that for a single-reference wave function, Ĥ_X^(0) can be chosen as a noninteracting Hamiltonian, as in the Møller-Plesset (MP) perturbation theory. After employing a partitioned Hamiltonian Ĥ^α_X, Eq. (<ref>), the resulting ERPA hessian matrices 𝒜_±(α) become linear functions of α (notice that from now on the index X in hessian and response matrices is dropped for simplicity) 𝒜_±(α)=𝒜_±^(0) + α𝒜_±^(1) . By contrast, the response matrix 𝐂(α,ω), see Eq. (<ref>), depends on α in a nonlinear fashion. Let us represent the projected response matrix 𝐂̃(α,ω), Eq. (<ref>), as a power series expansion in the coupling constant α around α=0. Truncating the expansion at the nth order and setting α=1, we obtain 𝐂̃(ω) = ∑_k=0^n1/k!𝐂̃^(k)(ω) , where 𝐂̃^(k)(ω) follows from an efficient recursive scheme derived in Ref.  𝐂̃(ω)^(0) = 𝐀̅_+^(0)𝐃 , 𝐂̃(ω)^(1) = 𝐀̅_+^(1)𝐃 - 𝐀̅^(1)𝐂̃(ω)^(0) , ∀_k≥2 𝐂̃(ω)^(k) = -k𝐀̅^(1)𝐂̃(ω)^(k-1) -k(k-1)𝐀̅^(2)𝐂̃(ω)^(k-2) . The required matrices are given by the ERPA matrices 𝒜_±^(0) and 𝒜_±^(1) 𝐀̅_+^(0) =Λ(ω)𝒜_+^(0) , 𝐀̅_+^(1) =Λ(ω)𝒜_+^(1) , 𝐀̅^(1) =Λ(ω)( 𝒜 _+^(0)𝒜_-^(1)+𝒜_+ ^(1)𝒜_-^(0)) , 𝐀̅^(2) =Λ(ω)𝒜_+ ^(1)𝒜_-^(1) , with Λ(ω) =( 𝒜_+^(0)𝒜_-^(0) + ω^2 1)^-1 . Notice that by setting n=1 in Eq. (<ref>) for each monomer, the dispersion energy obtained from Eq. (<ref>) will be equivalent to the uncoupled approximation introduced in Ref. . For a sufficiently large n, one recovers full dispersion energy (i.e., the coupled dispersion energy) <cit.> numerically equal to that following from Eqs. (<ref>)–(<ref>). Since the dimensions of the hessian matrices and the matrix 𝐃 are m^2 × m^2 and m^2 × N_ Chol, respectively, matrix multiplications involved in Eqs. (<ref>)–(<ref>) scale as m^4N_ Chol∼ m^5. As has been shown in Ref. , the matrices 𝒜_+^(0), 𝒜_-^(0) are block-diagonal with the largest blocks of M_s_2^2× M_s_2^2 size. Consequently, the cost of inversion of the Λ(ω) matrix, Eq. (<ref>), is negligible if the number of active orbitals, M_s_2, is much smaller than that of virtual orbitals, which is usually the case in practical calculations. A valid concern is whether expansion of the linear response function at α=0, Eq. (<ref>), leads to a convergent series. Although a definite proof cannot be given, it is reasonable to expect that if a monomer wave function leads to stable ERPA equations around α=0, then the series converges. Our numerical tests on two datasets of small, weakly-correlated dimers <cit.> have shown no convergence problems, see Supporting Information. In most cases expansion up to the order n=8 was sufficient to achieve a μ E_h accuracy, amounting to the mean absolute percentage error below 0.1% in the dispersion energy. Convergence tests carried out on larger dimers, selected from the S66 test set, both in ground and excited states have led to the same conclusions, see Table S1 in Supporting Information. An example of the convergence of E^(2)_ disp computed according to the procedure given by Eqs. (<ref>)–(<ref>) with the truncation order n in the range from n=1 to n=10 is presented for the benzene-cyclopentane complex in Figure S1 in Supporting Information. 
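For illustration, the recursion for the α-expanded projected response and the trace form of the dispersion energy can be sketched with a few lines of dense linear algebra. The Python/NumPy snippet below is schematic only: the dense inverse, the matrix names and shapes, and the separate D_A, D_B (Cholesky vectors transformed to the orbital pairs of monomers A and B) are simplifying assumptions rather than the production implementation.

import numpy as np

def projected_response(A0p, A0m, A1p, A1m, D, omega, n_order=8):
    # alpha-expanded projected response  C~(omega) = sum_k C~^(k)(omega)/k!
    # A0p/A0m: zeroth-order ERPA hessians (block-diagonal in practice),
    # A1p/A1m: first-order hessians, D: Cholesky-decomposed (scaled) integrals
    Lam = np.linalg.inv(A0p @ A0m + omega ** 2 * np.eye(A0p.shape[0]))  # cheap when done blockwise
    Abar_p0 = Lam @ A0p
    Abar_p1 = Lam @ A1p
    Abar_1 = Lam @ (A0p @ A1m + A1p @ A0m)
    Abar_2 = Lam @ (A1p @ A1m)
    C_km2 = Abar_p0 @ D                      # C~^(0)
    C_km1 = Abar_p1 @ D - Abar_1 @ C_km2     # C~^(1)
    C_total = C_km2 + C_km1
    fact = 1.0
    for k in range(2, n_order + 1):
        C_k = -k * Abar_1 @ C_km1 - k * (k - 1) * Abar_2 @ C_km2
        fact *= k
        C_total += C_k / fact
        C_km2, C_km1 = C_km1, C_k
    return C_total

def dispersion_energy(CA_list, CB_list, D_A, D_B, weights):
    # E_disp^(2) = -(8/pi) * sum_i w_i Tr[ D_A^T C~A(w_i) C~B(w_i)^T D_B ], quadrature over omega
    e = sum(w * np.trace(D_A.T @ CA @ CB.T @ D_B)
            for w, CA, CB in zip(weights, CA_list, CB_list))
    return -8.0 / np.pi * e

In an actual implementation the inverse defining Λ(ω) is never formed for the full matrix: since 𝒜_+^(0)𝒜_-^(0) is block-diagonal, the solve is carried out block by block, which is what keeps the overall cost at m^4 N_Chol.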
The evaluation of the α-expanded response matrix requires access to 𝒜^(1)_± hessians together with three projected 𝐂̃(ω)^(k) matrices needed to carry out the recursion. Due to their size (m^2 × m^2 and m^2 × N_ Chol, respectively), for systems approaching 100 atoms these quantities can no longer fit into memory and have to be stored on disk. In this regime, the disk storage and the number of I/O operations will become the main bottleneck of the proposed approach. Due to employing the Cholesky decomposition, the cost of computing the 𝐂̃(ω)^(k) matrices, and ultimately the dispersion energy, scales as m^4N_Chol∼ m^5. The combination of Eq. (<ref>) with the recursive scheme for 𝐂̃^(k)(ω) is the main contribution of this work. The scaling of second-order induction energy computations in SAPT(MC) can be reduced to m^4 using α-expansion of the response functions accompanied by the Cholesky decomposition of two-electron integrals. However, it is possible to achieve such scaling without relying on the coupling constant expansion, see Supporting Information for details. For the Hartree-Fock treatment of the monomers, such an alternative approach is identical to induction energy computations in the coupled Hartree-Fock scheme, as first proposed by Sadlej <cit.>. § VISUALISATION OF THE DISPERSION ENERGY The use of Eq. (<ref>) enables spatial visualization of the dispersion interactions. Following the work of Parrish et al. <cit.> and our recent development <cit.>, we introduce a spatially-local descriptor of the dispersion energy based on the MC wave function description of the monomers. By inspection, it can be checked that the dispersion energy expression given in Eq. (<ref>) can be written in terms of a two-particle matrix 𝐐, indices of which correspond to occupied, i.e., inactive or active, orbitals localized on different monomers E^(2)_disp = ∑_q ∈ A, s∈ BQ_qs , where ∀_q∈ A s∈ B Q_qs = -8/π∫_0^∞dω∑_p∈ A r∈ B∑_L=1^N_ Chol D_pq,LW_pq,rs^AB(ω)D_rs,L , and ∀_pq∈ A rs∈ B W_pq,rs^AB(ω)= ∑_L=1^N_ CholC̃_pq,L^A(ω)C̃_rs,L^B(ω) . Such a two-particle partition of the dispersion energy can be seen as a generalization of the partitioning scheme developed in Ref.  that was applied with uncoupled amplitudes and single-determinant wave functions. We propose a local dispersion density function for monomer A as a charge-like density, where the density of the orbital is weighted by its contribution to the dispersion energy Q^A(𝐫)=∑_q∈ s_1^A ∪ s_2^A w_q ρ_q(𝐫) , with weights defined as ∀_q ∈ s_1^A ∪ s_2^A w_q = ∑_s∈ BQ_qs/N_q . ρ_q(𝐫) denotes either electron density of the active electrons if q refers to an active orbital localized on A ∀_q∈ s_2^A ρ_q(𝐫) = ∑_q' ∈ s_2^A n_q' φ_q'(𝐫)^2 , N_q = ∑_q' ∈ s_2^A n_q' , (notice that the sum over active orbitals includes the orbital q) or an orbital density, if q denotes an inactive orbital ∀_q ∈ s_1^A ρ_q(𝐫) = φ_q(𝐫)^2 , N_q = 1 . Analogous definition can be introduced by employing natural orbitals of the monomer B, leading to the dispersion density function localized on B, Q^B(𝐫). A function Q^AB(𝐫), defined as an average of Q^A(𝐫) and Q^B(𝐫), Q^AB(𝐫) = 1/2( Q^A(𝐫) + Q^B(𝐫) ) , collects local contributions of the natural orbitals of both monomers to the dispersion interaction and, as it should, integrates to E^(2)_ disp E^(2)_ disp=∫ Q^AB(𝐫) d𝐫 . The additional cost of obtaining the Q^AB(𝐫) descriptor is marginal compared to the cost of dispersion energy computation, as all intermediate quantities are available from the calculation of E^(2)_disp. 
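As an illustration of how the two-particle partition defined above can be evaluated, the sketch below accumulates the Q_qs matrix and the orbital weights w_q from quantities that are already available after the dispersion-energy calculation. It is a schematic NumPy fragment under simplifying assumptions: the composite pq (rs) indices are handled through plain lookup arrays q_of_pq (s_of_rs) that map each row of the Cholesky vectors to its occupied label, and the frequency integration is represented by precomputed nodes and weights.

```python
import numpy as np

def dispersion_partition(DA, DB, CtA_list, CtB_list, quad_w,
                         q_of_pq, s_of_rs, n_occ_A, n_occ_B):
    """Two-particle partition Q_qs of E_disp^(2); Q.sum() recovers the dispersion energy.

    DA, DB             : Cholesky vectors of monomers A and B (n_pq x N_chol, n_rs x N_chol)
    CtA_list, CtB_list : projected response matrices C~^A(w_i), C~^B(w_i) on the quadrature grid
    quad_w             : quadrature weights (Jacobian of the [0, inf) mapping included)
    q_of_pq, s_of_rs   : occupied orbital label q (s) attached to each composite pq (rs) row
    """
    G = DA @ DB.T                               # G[pq, rs] = sum_L D^A_pq,L D^B_rs,L
    Q = np.zeros((n_occ_A, n_occ_B))
    for wgt, CtA, CtB in zip(quad_w, CtA_list, CtB_list):
        W = CtA @ CtB.T                         # W[pq, rs] = sum_L C~A_pq,L C~B_rs,L
        contrib = (-8.0 / np.pi) * wgt * G * W  # elementwise, per (pq, rs) pair
        # sum over p and r while keeping the occupied labels q and s
        np.add.at(Q, (q_of_pq[:, None], s_of_rs[None, :]), contrib)
    return Q

def orbital_weights(Q_row_sums, n_norm):
    """Weights w_q: summed contribution of orbital q divided by its normalization N_q."""
    # n_norm[q] = 1 for an inactive orbital, or the number of active electrons for an active one
    return Q_row_sums / n_norm
```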
Since natural orbitals are typically not localized, changing the Q^AB(𝐫) representation to local orbitals should provide a more informative visualization of dispersion forces. Our aim is, however, to investigate differential maps of Q^AB(𝐫) computed for ground and excited states of the studied systems. For this purpose, natural orbitals are adequate. § COMPUTATIONAL DETAILS Numerical demonstration of the developed algorithm was carried out for both ground and excited states of noncovalent complexes selected from the S66 benchmark dataset of Hobza and co-workers <cit.>, for which benchmark interaction energy values for electronically excited complexes have been provided by Ikabata et al. <cit.>. The dimers were divided into two sets according to their size. Smaller systems (up to eight heavy atoms in a dimer and ca. 500-600 basis functions with a basis set of triple-zeta quality) were analyzed in detail in our recent work <cit.>. Larger systems (up to eleven heavy atoms in a dimer and ca. 800-900 basis functions), which are beyond the capabilities of the m^6-implementation of the MC dispersion energy, are analyzed for the first time. This group contains five complexes: benzene-cyclopentane, benzene-neopentane, AcOH-pentane, AcNH2-pentane, and peptide-pentane, where peptide refers to N-methylacetamide (see Figure <ref>). Both ground- and excited-state calculations were performed using ground-state geometries taken from Ref. . All supermolecular calculations employed the Boys-Bernardi counterpoise correction <cit.>. The excitons were localized on benzene (π→π^∗), AcOH (n→π^∗), AcNH2 (n→π^∗), and peptide (n→π^∗) molecules. The CCSD(T) results extrapolated to the complete basis set limit (CBS) <cit.> served as the benchmark for ground-state interaction energies. In the case of complexes involving excited states, reference results were taken from Ref. . They were obtained by combining CCSD(T)/CBS ground-state interaction energies with excitation energies calculated at the EOM-CCSD <cit.>/6-31++G(d,p) <cit.> level of theory. The Cholesky decomposition of the Coulomb integrals matrix, ⟨pr|qs⟩, was performed in the AO basis with a modified program developed for Refs. . The Cholesky vectors, R_pq,L, were generated with the convergence criterion ∑_p ≥ q ( ⟨pp|qq⟩ - ∑_L R_pq,L R_pq,L ) < 10^-2, which is the same as used in Ref. . Second-order dispersion energies and SAPT(MC) <cit.> energy components based on the CASSCF treatment of the monomers were computed in the GammCor <cit.> program. From now on SAPT(MC) based on CASSCF wave functions is denoted as SAPT(CAS). The frequency integration in Eq. (<ref>) has been carried out using an 8-point Gauss-Legendre quadrature. The necessary integrals and reduced density matrices were obtained from the locally modified Molpro <cit.> package. Supermolecular CASSCF and DFT-SAPT calculations were performed in Molpro. All calculations employed the aug-cc-pVTZ basis set <cit.>. Although MP2 orbitals were used as a starting guess for the CASSCF computations, further orbital rotations were required in almost all cases for ground- and excited-state complexes to ensure that the desired orbitals are included in the active space and to maintain size consistency in the supermolecular approach. Excited-state wave functions were computed with two-state state-averaged CASSCF. 
The active space for benzene included three π bonding and three π^* antibonding MOs, which means 6 active electrons on 6 orbitals, labeled as CAS(6,6) <cit.>. For AcOH we chose a CAS(8,8) active space including two n, π, π^*, two σ, and two σ^* orbitals <cit.>. For AcNH2 the CAS(6,5) space was selected, which involves σ, n, π, π^*, and σ^* orbitals <cit.>. The peptide (N-methylacetamide) active space, CAS(6,6), was composed of σ, π, π^* and σ^* orbitals, and two lone-pair orbitals n located on the oxygen atom <cit.>. To improve the accuracy of the SAPT(CAS) interaction energy, especially for systems dominated by large polarization effects, we need to include higher-than-second-order induction terms. For ground-state systems which can be represented with a single Slater determinant, these terms can be approximated at the Hartree-Fock (HF) level of theory and represented as the δ_HF correction δ_HF = E_int^HF - ( E_elst^(1)(HF) + E_exch^(1)(HF) + E_ind^(2)(HF) + E_exch-ind^(2)(HF) ) , where E_int^HF corresponds to the supermolecular HF interaction energy and all terms are computed with Hartree-Fock wave functions. There is no straightforward way to account for higher-order polarization terms in excited-state computations. To tackle this problem, we assume that the change of higher-than-second-order induction terms upon excitation is proportional to the corresponding shift in the second-order induction, and define the δ_CAS correction as: δ_CAS = E^(2)_ind(ES)/E^(2)_ind(GS) δ_HF , where the labels GS/ES correspond to dimers in ground and excited states, respectively. Notice that in our previous work <cit.> a similar scaling expression involved sums of induction and exchange-induction (E^(2)_exch-ind) terms. In this work, the latter is not computed directly, but follows from an approximate scaling relation, see below. Such a treatment of the E^(2)_exch-ind energy could introduce additional error in the δ_CAS term and we decided not to include it in Eq. (<ref>). Compared to single-reference SAPT schemes, in multiconfigurational SAPT it is not straightforward to apply the Cholesky decomposition to the second-order exchange energy components, i.e., E_exch-ind^(2) and E_exch-disp^(2). The difficulty follows from the necessity to obtain separately the lower and upper triangles of transition density matrices (see the discussion in Sec. 2 of Ref. ); possible solutions will be addressed in our future work. To account for both exchange-induction and exchange-dispersion terms in this study, we propose a simple scaling scheme E^(2)_exch-ind(aVTZ) = E^(2)_exch-ind(aVDZ) × E^(2)_ind(aVTZ)/E^(2)_ind(aVDZ) , E^(2)_exch-disp(aVTZ) = E^(2)_exch-disp(aVDZ) × E^(2)_disp(aVTZ)/E^(2)_disp(aVDZ) , where aVXZ = aug-cc-pVXZ, and it is assumed that the convergence of second-order polarization and exchange components with the basis set size is identical. All presented SAPT(CAS) results include δ_HF and δ_CAS corrections for ground- and excited-state complexes, respectively, as well as the scaled second-order exchange components defined in Eqs. (<ref>)–(<ref>). CAS+DISP is a sum of the supermolecular CASSCF interaction energy and the dispersion energy, DISP=E_disp^(2)+E_exch-disp^(2), computed in the same fashion as in SAPT(CAS), i.e., using the newly developed expression given in Eq. (<ref>) and the scaling relation from Eq. (<ref>). § RESULTS In Table <ref> we present the SAPT interaction energy decomposition for ground- and excited-state complexes. 
Regardless of the electronic state of the dimer, all systems can be classified as dispersion-dominated, with the E^(2)_disp/E^(1)_elst ratio ranging from 2.8 to 3.8. As can be deduced from Table <ref>, the most significant change in dispersion interactions upon transition from the ground to the excited state occurs in complexes of benzene (benzene⋯cyclopentane and benzene⋯neopentane). The effect amounts to ΔE^(2)_disp ≈ 0.2 kcal/mol, which corresponds to a decrease in the dispersion energy in the excited state. In both complexes, the redistribution of the electron density upon π→π^* excitation on benzene is accompanied by a non-negligible drop in the electrostatic attraction. The latter energetic effect is, however, cancelled by the simultaneous depletion of the exchange repulsion. Thus, the decline of the dispersion energy contributes in a major way to the weakened net attraction in the excited state. We observed the same trends in dimers of benzene with H2O, MeOH, and MeNH2 studied in Ref. . Compared to the benzene π→π^* systems, complexes of n-pentane involve an n→π^* exciton localized on the carbonyl group of the interacting partner (AcOH, AcNH2, peptide). These systems exhibit an increase in the dispersion energy upon excitation which ranges from -0.06 to -0.15 kcal/mol (Table <ref>). In AcOH⋯pentane, the enhanced dispersion is comparable in magnitude with a concurrent decrease in the first-order Pauli repulsion, both of which contribute to the overall stabilization of the excited-state dimer. In contrast, in peptide⋯pentane and AcNH2⋯pentane the net repulsive components become stronger and outweigh the dispersion attraction, so that both complexes are more strongly bound in the ground state. For peptide⋯pentane, the weakened interaction in the excited state can be attributed mainly to a significant increase of the second-order exchange-induction (ΔE^(2)_exch-ind = 0.21 kcal/mol). The other interaction energy components undergo relatively minor changes; the only stabilizing effect, -0.07 kcal/mol, is due to dispersion. In the case of AcNH2⋯pentane, the increased static polarizability of acetamide in the excited state results in a stronger induction attraction (the net change in induction and δ corrections amounts to -0.60 kcal/mol). This, however, is countered by a steep rise in the repulsive components. In particular, exchange-induction and first-order exchange increase by 1.00 and 0.45 kcal/mol, respectively. Note that a similar pattern occurred in the methylamine⋯peptide (n-π^*) interaction <cit.>. Changes in the E^(2)_disp components are visualized in Figure <ref> using the difference between ground- and excited-state dispersion interaction density, Q^AB(𝐫), see Sec. <ref>. Both the sign and magnitude of the effect are correctly captured: one observes a notable depletion of the dispersion density in π-π^* complexes compared to a weaker accumulation for n-π^* dimers. In agreement with the character of the underlying excitons, in the π-π^* case the majority of the ΔE^(2)_disp term is delocalized over the benzene ring, while in n-π^* dimers it is basically confined to the carbonyl group. In Tables <ref> and <ref> we report total ground- and excited-state interaction energies, respectively, calculated at the CASSCF, CAS+DISP and SAPT(CAS) levels of theory. Addition of the dispersion energy to supermolecular CASSCF changes the character of the interaction from repulsive to attractive, reducing mean errors by two orders of magnitude. 
In consequence, CAS+DISP results closely match the coupled-cluster (CC) reference <cit.> with mean absolute percentage errors (MA%Es) of 0.8% for ground and 3.7% for excited states. SAPT(CAS) performs similarly to the CAS+DISP model (MA%E values of 2.5% and 3.3% for ground and excited states, respectively). The DFT-based LRD model of Nakai et al. <cit.> combined with the LC-BOP functional <cit.> is somewhat less accurate. The model underestimates interaction energies, which amounts to mean errors at the level of 10% (Tables <ref>-<ref>). Note, however, that the DFT results were obtained in the 6-311++G(2d,2p) basis set. Since the aug-cc-pVTZ basis set is not sufficient to saturate the dispersion energy with respect to the basis set size, the good agreement of both SAPT(CAS) and CAS+DISP with coupled-cluster is partially due to error cancellation (the CC values include CBS-extrapolated ground-state energies). Indeed, individual SAPT(CAS) energy components for ground-state complexes are systematically underestimated with respect to their SAPT(DFT) counterparts (see Tables S2-S4 in the Supporting Information). This reflects the effective neglect of intramonomer electron correlation effects in the SAPT(CAS) approach <cit.>. § CONCLUSIONS We have presented an algorithm for second-order dispersion energy calculations with multiconfigurational wave functions that scales with the fifth power of the system size. Until now, m^5 scaling in coupled dispersion energy computations could only be achieved with a single-determinant description of the monomers <cit.>. The prerequisite for m^5 scaling with a multiconfigurational reference is that the number of active orbitals in the wave functions of the monomers is considerably smaller than the number of virtual orbitals. In practice, this condition is typically fulfilled in interaction energy calculations performed using augmented basis sets. The algorithm relies on the Extended RPA solver to obtain density response functions of the monomers and employs the Cholesky decomposition of two-electron integrals. The key step involves the coupling-constant expansion of the response matrix projected onto the space spanned by the Cholesky vectors. The expansion follows from partitioning of the monomer Hamiltonian into the zeroth-order partially interacting group-product-function Hamiltonian and the remainder term scaled by the coupling constant α. Consecutive terms of the response matrix expansion at α=0 are calculated based on recursive relations proposed in Ref. . Our numerical experience shows that truncation through the eighth order is sufficient to achieve accuracy at the level of a few μE_h. The cost of the induction energy can be reduced from m^5 to m^4 in an infinite-order approach without the expansion in α. This avoids the (small) numerical error related to the truncation scheme (see Supporting Information). To visually represent the change in dispersion forces upon vertical excitations, we have introduced a spatial descriptor based on the proposed expression for the dispersion energy. The underlying partition of the dispersion energy expression may be cast as a generalization of the approach first developed by Parrish and Sherrill <cit.> for single-reference wave functions. The new m^5 dispersion energy algorithm was employed together with state-averaged CASSCF wave functions to examine interactions involving localized excitons of the π-π^* and n-π^* type. 
For representation of the interaction energy, multiconfigurational dispersion energy was complemented either with SAPT(MC) <cit.> energy components or with supermolecular CASSCF interaction energy, the latter known as the CAS+DISP <cit.> method. The dimers included up to eleven heavy atoms (between 800 and 900 basis set functions using the aug-cc-pVTZ basis set) which exceeded the capabilities of our original,m^6-implementation <cit.>. In line with earlier investigations <cit.>, SAPT decomposition shows that even in low-lying excited states the dispersion energy may be the driving force behind the stability of the complex. Hence, both accurate and efficient algorithms adequate for dispersion computations with multiconfigurational wave functions are mandatory. Spatial mapping of the dispersion energy density helps to identify regions affected most by the exciton. Visualizing the remaining SAPT(MC) energy components could aid the interpretation of energetic effects that occur upon electron density rearrangement in excited states. Work along this line is in progress. This research was funded in whole or in part by National Science Center, Poland under grants no. 2019/35/B/ST4/01310 and no. 2021/43/D/ST4/02762. For the purpose of Open Access, the author has applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission. This research was also funded by the European Centre of Excellence in Exascale Computing TREX - Targeting Real Chemical Accuracy at the Exascale. The project has received funding from the European Union’s Horizon 2020 - Research and Innovation program - under grant agreement no. 952165.
http://arxiv.org/abs/2306.03507v1
20230606085301
"A Little is Enough": Few-Shot Quality Estimation based Corpus Filtering improves Machine Translation
[ "Akshay Batheja", "Pushpak Bhattacharyya" ]
cs.CL
[ "cs.CL" ]
Semantic Segmentation on VSPW Dataset through Contrastive Loss and Multi-dataset Training Approach Min Yan^1 †  Qianxiong Ning^1,2 †  Qian Wang^1 ^1 China Mobile Research Institute, ^2 Xi'an Jiaotong University [email protected]; [email protected]; [email protected] =========================================================================================================================================================================================================== Quality Estimation (QE) is the task of evaluating the quality of a translation when reference translation is not available. The goal of QE aligns with the task of corpus filtering, where we assign the quality score to the sentence pairs present in the pseudo-parallel corpus. We propose a Quality Estimation based Filtering approach to extract high-quality parallel data from the pseudo-parallel corpus. To the best of our knowledge, this is a novel adaptation of the QE framework to extract quality parallel corpus from the pseudo-parallel corpus. By training with this filtered corpus, we observe an improvement in the Machine Translation (MT) system's performance by up to 1.8 BLEU points, for English-Marathi, Chinese-English, and Hindi-Bengali language pairs, over the baseline model. The baseline model is the one that is trained on the whole pseudo-parallel corpus. Our Few-shot QE model transfer learned from the English-Marathi QE model and fine-tuned on only 500 Hindi-Bengali training instances, shows an improvement of up to 0.6 BLEU points for Hindi-Bengali language pair, compared to the baseline model. This demonstrates the promise of transfer learning in the setting under discussion. QE systems typically require in the order of (7K-25K) of training data. Our Hindi-Bengali QE is trained on only 500 instances of training that is 1/40^th of the normal requirement and achieves comparable performance. All the scripts and datasets utilized in this study will be publicly available. § INTRODUCTION In recent times, Neural MT has shown excellent performance, having been trained on a large amount of parallel corpora <cit.>. However, not all language pairs have a substantial amount of parallel data. Hence, we have to rely on the noisy web-crawled corpora for low-resource languages. The task of Parallel Corpus Filtering aims to provide a scoring mechanism that helps extract good-quality parallel corpus from a noisy pseudo-parallel corpus. The task of Quality Estimation (QE) aims to provide a quality score for a translation when the reference translation is unavailable. We use Quality Estimation to assign the quality scores to the sentence pairs present in pseudo-parallel corpora and extract good-quality parallel sentences. We aim to improve the quality of Machine Translation for English(En)-Marathi(Mr), Hindi(Hi)-Bengali(Bn) and Chinese(Zh)-English(En) language pairs by using sentence-level QE-based corpus filtering. We observe that QE-based corpus filtering performs better than previously proposed methods. Our contributions are: * Adaptation of the QE framework, which is normally used for MT evaluation, to extract high-quality parallel corpus from pseudo-parallel corpus; to the best of our knowledge, this is a novel adaptation of the QE framework to extracting quality parallel corpus from the pseudo-parallel corpus. 
* Demonstrating the promise of Few-Shot QE technique to generate training data for MT; a Hindi-Bengali QE model is trained with only 500 training instances transfer learned from an English-Marathi trained QE model; the filtered parallel data using this Hindi-Bengali QE system gives 0.6 BLEU point improvement over Hi-Bn MT system trained on the pseudo-parallel corpus. * Demonstrating performance improvement of the Machine Translation systems by up to 1.8 BLEU points for English-Marathi, Hindi-Bengali and Chinese-English language pairs, over the model trained on the whole pseudo-parallel corpus. § RELATED WORK §.§ Parallel Corpus Filtering Neural Machine Translation (NMT) is extremely data hungry <cit.>. Recently, there has been a growing interest in the process of filtering noisy parallel corpora to enhance the data used for training machine translation systems. The Conference on Machine Translation (WMT) has organized annual Shared Tasks on Parallel Corpus Filtering (WMT 2018, WMT 2019, WMT 2020).  <cit.> proposed an approach that uses the Dual Bilingual GPT-2 model and the Dual Conditional CrossEntropy Model to evaluate the quality of the parallel corpus. <cit.> proposed the LaBSE model, which is a multilingual sentence embedding model trained on 109 languages, including some Indic languages. <cit.> mentioned different types of noise that can be injected in a parallel corpus and investigated whether state-of-the-art filtering models are capable of removing all the noise types proposed by <cit.>. Most recently, <cit.> used a combination of Phrase Pair Injection and LaBSE <cit.> based Corpus Filtering to extract high-quality parallel data from a noisy parallel corpus. In contrast, we use QE-based filtering to extract high-quality data from noisy pseudo-parallel data. We observe that QE quality scores are superior to the LaBSE quality scores. §.§ Quality Estimation Quality Estimation (QE) is the task of evaluating the quality of a translation when reference translation is not available. The state-of-the-art MonoTransQuest architecture, proposed by <cit.>, builds upon XLM-R, a widely-used pretrained cross-lingual language model known for its ability to generalize to low-resource languages <cit.>. <cit.> proposed a combination of multitask training, data augmentation and contrastive learning to achieve better and more robust QE in a Parallel Corpus Mining setting. The Parallel Corpus Mining task aims to detect the most similar texts in a large multilingual collection and perform sentence alignment. This motivates us to use QE in the Parallel Corpus Filtering task. § APPROACHES We first discuss methods to extract good-quality parallel sentences from the pseudo-parallel corpus. Then we discuss a transfer learning-based filtering approach in few-shot settings. §.§ LaBSE based Filtering Language Agnostic BERT Sentence Embedding model <cit.> is a multilingual embedding model that supports 109 languages, including some Indic languages. We generate the sentence embeddings for the source and target sides of the pseudo-parallel corpora using the LaBSE [<https://huggingface.co/sentence-transformers/LaBSE>] model. Then, we compute the cosine similarity between the source and target sentence embeddings. After that, we extract good-quality parallel sentences based on a threshold value of the similarity scores. 
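A minimal sketch of this LaBSE-based filtering step is given below. It embeds both sides of the pseudo-parallel corpus with the publicly available sentence-transformers LaBSE checkpoint, scores each pair by cosine similarity, and keeps pairs above a threshold (0.8 is the value used later in the experiments). The batch size and the explicit normalization are implementation choices of this sketch, not prescriptions from the paper.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def labse_filter(src_sents, tgt_sents, threshold=0.8, batch_size=64):
    """Keep (source, target) pairs whose LaBSE embeddings have cosine similarity >= threshold."""
    model = SentenceTransformer("sentence-transformers/LaBSE")
    # encode both sides; normalize so the dot product equals cosine similarity
    src_emb = model.encode(src_sents, batch_size=batch_size, convert_to_numpy=True)
    tgt_emb = model.encode(tgt_sents, batch_size=batch_size, convert_to_numpy=True)
    src_emb /= np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt_emb /= np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    scores = (src_emb * tgt_emb).sum(axis=1)      # pairwise cosine similarity
    keep = scores >= threshold
    filtered = [(s, t) for s, t, k in zip(src_sents, tgt_sents, keep) if k]
    return filtered, scores

# usage: filtered_pairs, sims = labse_filter(en_sentences, mr_sentences, threshold=0.8)
```

The same thresholding pattern applies when the similarity scores are replaced by sentence-level QE scores, as done in the QE-based filtering described next.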
§.§ Phrase Pair Injection (PPI) with LaBSE-based Filtering  <cit.> proposed a combination of Phrase Pair Injection <cit.> and LaBSE-based Corpus Filtering to extract high-quality parallel data from a noisy parallel corpus. We train a PBSMT model on the noisy pseudo-parallel corpus using the Moses [<http://www2.statmt.org/moses/?n=Development.GetStarted>] decoder. Then, we extract phrase pairs with the highest translation probability. Finally, we perform LaBSE-based filtering on these phrase pairs to remove poor-quality phrase pairs. We augment these high-quality phrase pairs with LaBSE-filtered parallel sentences. §.§ Quality Estimation based Filtering In this approach, we train the MonoTransQuest[<https://github.com/TharinduDR/TransQuest>] <cit.> model and use it to generate the quality scores for the pseudo-parallel corpus of the corresponding language pair. Then, we extract high-quality parallel sentences from the pseudo-parallel corpus using a threshold quality score value. §.§ Few-shot Quality Estimation The Quality Estimation task requires human-annotated Direct Assessment scores for the corresponding language pairs. In few-shot settings, we fine-tune a pre-trained QE model for a high-resource language pair on QE data for the corresponding low-resource language pair to obtain a QE model for the low-resource language pair. § MATHEMATICAL PRELIMINARIES LaBSE scoring Let D = {(x_i, y_i)}_i=1^N be a pseudo-parallel corpus with N examples, where x_i and y_i represents i^th source and target sentence respectively. We first feed all the source sentences present in the pseudo parallel corpus as input to the LaBSE[<https://huggingface.co/sentence-transformers/LaBSE>] <cit.> model, which is a Dual encoder model with BERT-based encoding modules to obtain source sentence embeddings (S_i). The sentence embeddings are extracted as the l2 normalized [CLS] token representations from the last transformer block. Then, we feed all the target sentences as input to the LaBSE model to obtain target sentence embeddings (T_i). We then compute cosine similarity (score_i) between the source and the corresponding target sentence embeddings. S_i=LaBSE(x_i) T_i=LaBSE(y_i) score_i=cosine_similarity(S_i, T_i) QE scoring We feed “x_i [SEP] y_i" as an input to the MonoTransQuest <cit.> architecture which uses a single XLM-R model. The output of the [CLS] token is used as the input of a softmax layer that predicts the quality score (score_i) of the i^th sentence pair <x_i, y_i>. score_i=MonoTransQuest(x_i, y_i) § EXPERIMENTAL SETUP §.§ Dataset In all NMT experiments, we use two sets of corpus, namely, Parallel and Pseudo-Parallel corpus. The Parallel corpus consists of high-quality sentence pairs, while the Pseudo-Parallel corpus contains sentence pairs of varying quality. The En-Mr Parallel Corpus consists of the ILCI phase 1, Bible, PIB and PM-India corpus <cit.>. The Zh-En Parallel corpus consists of ParaMed[<https://github.com/boxiangliu/ParaMed>] corpus. The Hi-Bn Parallel corpus is obtained from the OPUS[<https://opus.nlpl.eu/>] corpus repository. The En-Mr and Zh-En Pseudo-Parallel Corpus consist of the Samanantar <cit.> and WMT18 Zh-En[<http://data.statmt.org/wmt18/translation-task/preprocessed/zh-en/>] corpus, respectively. The Hi-Bn Pseudo-Parallel Corpus consists of Samanantar and Tatoeba <cit.> corpus. The detailed data statistics are mentioned in table <ref>. 
In QE experiments, we create a small corpus (500 instances) for Hindi-Bengali language pair that consists of human-annotated Domain Adaptation scores for each sentence pair annotated by three annotators. The pairwise Pearson Correlation between the three annotators of Hindi-Bengali QE is 0.68, 0.61 and 0.67. This indicates a good agreement between the three annotators. Please refer to Appendix <ref> for further annotation details. We use the QE data provided by <cit.> and <cit.> for the Zh-En and En-Mr language pairs, respectively. The detailed QE data statistics are mentioned in table <ref>. For evaluation, we use the FLORES 101 test set which contains 1,012 sentence pairs for each language pair. §.§ Models We use MonoTransQuest model architecture to train the QE models. We use the Indic NLP library for preprocessing the Indic language data and Moses for preprocessing the English language data. For Indic languages, we normalize and tokenize the data. For English, we lowercase and tokenize the data. We use a Transformer based architecture provided by OpenNMT-py library to train the NMT models for all our experiments. The optimizer used was adam with betas (0.9, 0.98). The initial learning rate used was 5e-4 with the inverse square root learning rate scheduler. We use 8000 warmup updates. The dropout probability value used was 0.1 and the criterion used was label smoothed cross entropy with label smoothing of 0.1. We use a batch size of 4096 tokens. All the models were trained for 200,000 training steps. We use MonoTransquest[<https://github.com/TharinduDR/TransQuest>] model to train the sentence-level QE model. We start with a learning rate of 2e-5 and use 5% of training data for warm-up. We use early patience over ten steps. We use a batch size of eight. The model architecture is mentioned in Appendix <ref>. Baseline We train the baseline NMT models on the whole pseudo-parallel corpus augmented with the parallel corpus for the corresponding language pairs. LaBSE based Filtering In this model, we use the LaBSE filtering with threshold 0.8 to extract good quality parallel sentences from the En-Mr, Hi-Bn and Zh-En pseudo-parallel corpus. Then we augment the parallel corpus with the LaBSE-filtered parallel sentences and train the respective NMT models. LaBSE + PPI-LaBSE based Filtering We extract LaBSE Filtered parallel sentences and phrases from the pseudo-parallel corpus and augment them with the parallel corpora to train the respective NMT models. Our Model, QE based Filtering We train the sentence-level QE model from scratch for En-Mr and Zh-En language pairs using their respective training data, Table <ref>. We use the English-Marathi pre-trained QE model for the Hi-Bn language pair and finetune it on Hi-Bn training data, Table <ref>. We compute quality scores for the noisy pseudo-parallel corpora using the trained QE models. Then, we extract high-quality sentence pairs from the pseudo-parallel corpus using the threshold values of -0.5, -0.4, and 0 for En-Mr, Zh-En, and Hi-Bn language pairs, respectively. We augment the extracted high-quality sentence pairs with the parallel corpus and train the respective NMT models. § RESULTS AND ANALYSIS We evaluate our NMT models using BLEU <cit.>. We use sacrebleu <cit.> python library to calculate the BLEU scores. Table <ref> shows that QE based filtering model outperforms all other models for Hi-Bn, En-Mr and Zh-En language pairs. 
The QE based Filtering model improves the MT system's performance by 0.85, 0.6, 1.8, 0.37 and 0.63 BLEU points over the baseline model for Zh→En, En→Mr, Mr→En,Hi→Bn and Bn→Hi, respectively. It also outperforms LaBSE + PPI-LaBSE based Filtering model by up to 0.7 BLEU points for Zh-En, En-Mr and Hi-Bn language pairs. The LaBSE + PPI-LaBSE based Filtering model performs better than QE based Filtering model for En→Mr language direction. The LaBSE + PPI-LaBSE model, which is trained on nearly twice the amount of training data compared to the QE-based filtering model, can be a contributing factor to its better performance in En→Mr. The improvement in the performance of the Bn→Hi QE-based filtered MT system is comparable to the En→Mr and Zh→En QE-based filtered MT model. The Hi-Bn QE model is trained with only 500 training instances transfer learned from En-Mr trained QE models. This demonstrates the promise of the few-shot QE technique to generate training data for MT. We compute Pearson Correlation between human annotated quality scores and quality scores computed using LaBSE and QE, shown in Table <ref>. The QE quality scores have a higher correlation with human annotated quality scores, compared to LaBSE quality scores for all 3 language pairs. Table <ref> shows the Pearson Correlation between LaBSE and QE quality scores for all 3 language pairs. We observe that the LaBSE quality score has a low correlation with the QE quality score and the QE quality score has a high correlation with the human annotated quality score. This establishes the superiority of QE over the LaBSE quality score. § CONCLUSION AND FUTURE WORK We introduced a simple Quality Estimation based corpus filtering approach to extract high-quality parallel data from the noisy pseudo-parallel corpora. The takeaway from our work is that sentence-level QE-based filtering performs better than LaBSE-based filtering and helps improve the performance of NMT systems. We also show that few-shot QE models trained using a transfer learning-based approach can be used to extract good-quality parallel corpus from the pseudo-parallel corpus. Only 1/40^th of the normal data requirement (7K-25K) of QE training data achieves comparable performance for the Hindi-Bengali language pair. We also show that the QE quality score is superior to the LaBSE quality score. In the future, we plan to use the proposed corpus filtering technique for other language pairs. This will provide us with a general overview of how this filtering technique performs for multiple languages. § ACKNOWLEDGEMENTS We would like to thank the anonymous reviewers for their insightful feedback. We also express our gratitude towards Shivam Mhaskar, Sourabh Deoghare and other members of the Machine Translation group at CFILT, IIT Bombay, for their interesting and insightful comments. § LIMITATIONS Although our primary effort in this work was to extract as much parallel corpora as possible, the improvement in the performance has been found to be only marginal. The LaBSE and QE-based filtering experiments involve a hyper-parameter called "threshold quality score." To achieve optimal results, we conduct experiments with different values of this hyper-parameter. The proposed few-shot transfer learning technique requires a small amount of data that needs to be annotated by multiple annotators. § ETHICS STATEMENT The aim of our work is to extract high-quality parallel corpus from a noisy pseudo-parallel corpus. 
The datasets that we used in this work are publicly available and we have cited the sources of all the datasets that we have used. Publicly available datasets can contain biased sentences. We have also created a dataset for Hindi-Bengali few-shot QE. We briefly discuss the annotation guidelines given to the annotators for the task in Appendix <ref>. § APPENDIX §.§ Instances of Translations (Referred from Table <ref>) The instances of translations for all 3 language pairs are provided in Tables <ref>, <ref>, <ref>, <ref> and <ref>. §.§ Model Architecture (Referred from section <ref>) We use a Transformer based architecture to train English-Marathi, Hindi-Bengali and Chinese-English NMT models for all our experiments. The encoder of the Transformer consists of 6 encoder layers and 8 encoder attention heads. The encoder uses embeddings of dimension 512. The decoder of the Transformer also consists of 6 decoder layers and 8 decoder attention heads. We use the MonoTransQuest architecture to train English-Marathi, Hindi-Bengali and Chinese-English QE models for all our experiments. We use a single Nvidia A100 GPU with 40 GB memory to train our NMT and QE models. §.§ Annotation Details (Referred from section <ref>) §.§.§ Annotator Demographic For the Direct Assessment score annotation, we requested three native speakers of Bengali who are well-versed in Hindi and have completed their graduate degrees in the Hindi language. They were aged between 25 and 42 and were paid for the time they spent on annotations. §.§.§ Guidelines The guidelines provided to the annotators for the Quality Estimation task are shown in Figure <ref>. §.§.§ Dataset We create Hi-Bn QE data for our few-shot settings. We use 500 high-quality Hindi sentences from the IIT Bombay English-Hindi parallel corpus <cit.>. We use the Hindi-Bengali NMT model to generate translations for the 500 Hindi sentences. We provide this Hindi-Bengali parallel data to the annotators for the Direct Assessment task. The Direct Assessment task requires the annotators to score the MT translations as per the guidelines provided in Figure <ref>.
http://arxiv.org/abs/2306.04718v1
20230607183025
Neural Symbolic Regression using Control Variables
[ "Xieting Chu", "Hongjue Zhao", "Enze Xu", "Hairong Qi", "Minghan Chen", "Huajie Shao" ]
cs.LG
[ "cs.LG" ]
Nonlinear Evolution of Quadratic Gravity in 3+1 Dimensions Hyun Lim July 31, 2023 ========================================================== Symbolic regression (SR) is a powerful technique for discovering the analytical mathematical expression from data, finding various applications in natural sciences due to its good interpretability of results. However, existing methods face scalability issues when dealing with complex equations involving multiple variables. To address this challenge, we propose SRCV, a novel neural symbolic regression method that leverages control variables to enhance both accuracy and scalability. The core idea is to decompose multi-variable symbolic regression into a set of single-variable SR problems, which are then combined in a bottom-up manner. The proposed method involves a four-step process. First, we learn a data generator from observed data using deep neural networks (DNNs). Second, the data generator is used to generate samples for a certain variable by controlling the input variables. Thirdly, single-variable symbolic regression is applied to estimate the corresponding mathematical expression. Lastly, we repeat steps 2 and 3 by gradually adding variables one by one until completion. We evaluate the performance of our method on multiple benchmark datasets. Experimental results demonstrate that the proposed SRCV significantly outperforms state-of-the-art baselines in discovering mathematical expressions with multiple variables. Moreover, it can substantially reduce the search space for symbolic regression. The source code will be made publicly available upon publication. § INTRODUCTION Symbolic regression (SR) aims to uncover the underlying mathematical expressions from observed data <cit.>. It has been widely used for scientific discovery across various disciplines <cit.> owing to its ability to learn analytical expressions between the input and output. The implementation of SR involves two steps <cit.>. The first step is to predict the skeleton of mathematical expressions based on a pre-defined list of basic operations (+, -, ×, ÷) and functions (sin, cos, exp, log). For instance, we can identify the skeleton of a symbolic equation as f(x)=logax+sin(bx) + c. Next, we adopt optimization methods, such as Broyden–Fletcher–Goldfarb–Shanno (BFGS), to estimate the parameters a,b,c in the skeleton. The key challenges of SR lie in: 1) how to improve the accuracy and scalability for multiple input variables, and 2) how to speed up the discovery process. In the past few decades, a plethora of SR methods <cit.> have been developed to discover underlying mathematical equations from data in science and engineering domains. One popular approach among them is genetic programming (GP) <cit.>, which uses evolutionary operations, such as mutation, crossover, and selection, to estimate the symbolic expressions in a tree structure. However, GP would suffer from instability and its inference time is expensive in the context of multiple input variables <cit.>. Another method, SINDy <cit.>, adopts sparse linear regression to discover the governing equations of dynamical systems. However, SINDy's performance relies heavily on prior knowledge of a known set of candidate functions, and it is difficult to uncover complex equations from data solely through linear regression. To overcome these limitations, some studies explore deep neural networks-based techniques, such as Deep Symbolic Regression (DSR) <cit.> and Transformer-based pre-training, for symbolic learning. 
Although these approaches obtain good prediction accuracy, they do not scale well to mathematical equations with multiple variables. Recently, researchers develop Symbolic Physics Learner (SPL), a physics-informed Monte Carlo Tree Search (MCTS) algorithm for symbolic regression. While SPL outperforms most GP-based methods, it still struggles with multiple variables in mathematical expressions. In summary, existing methods suffer from scalability issues when dealing with complex multi-variable equations as they require a much larger search space to identify the combination of different variables. Thus, the question is, how can we reduce the search space of symbolic regression for complex equations involving multiple variables? In this paper, we propose a novel neural symbolic regression with control variables (SRCV) that combines neural networks and symbolic regression to discover analytical expressions from data, as illustrated in Fig. <ref>. Inspired by divide and conquer <cit.>, SRCV addresses the multi-variable symbolic regression by decomposing it into a set of single-variable SR problems and then combines the estimated symbolic equation for each variable in a bottom-up manner. The proposed method is performed in four steps as follows. 1) We learn a data generator from observed data using DNNs, allowing for generating data for a specific variable. 2) Generate data via control variables. Specifically, we generate data samples for the current independent variable by manipulating the previously learned variables and other control variables. For example, for estimating the symbolic expression of variable x_i, we can generate data samples by varying x_i while fixing the other variables. 3) Single-variable symbolic regression is employed to estimate the mathematical expression of the current variable based on the generated data in step 2. Here any symbolic regression models can be inserted into the framework. 4) We gradually add the remaining variables one by one to step 2 and proceed with step 3 until all the variables are covered. Extensive experimental results on multiple SR benchmarks demonstrate the superiority of our SRCV over the state-of-the-art methods in discovering complex multi-variable equations. Moreover, the proposed approach is able to discover complex expressions in a reduced search space. Our main contributions are three-fold: 1) we propose SRCV, a simple and effective neural symbolic regression method using control variables; 2) we illustrate that the proposed method exhibits a significant reduction in search space for complex symbolic equations; 3) the evaluation results demonstrate that our method can significantly outperform the baselines in terms of accuracy and inference time. § RELATED WORK GP-based Symbolic Regression. Genetic Programming (GP) is one of the most popular algorithms for symbolic regression. The basic idea is to adopt the evolutionary operations, including mutation, crossover, and selection, to iteratively estimate the mathematical expressions until the desired accuracy is achieved. As a typical representative, the commercial software Eureqa <cit.> has been widely used in real-world applications. A recent study <cit.> combined genetic programming with reinforcement learning to enhance performance. While GP yields satisfactory results in many scenarios, it does not scale well to multiple input variables and is highly sensitive to hyperparameters <cit.>. DNNs-based Symbolic Regression. 
Some studies have employed DNN techniques <cit.> to discover symbolic equations from data. Early approaches proposed to replace the activation functions in DNNs with some basic functions like “sin(.)”, “cos(.)”, and “exp(.)”. This substitution may lead to training instability and exploding gradient issues. Recently, AI-Feynman <cit.> was developed to decompose the process of finding an equation into a flow based on the assumption of known physical properties. However, this method relies heavily on prior physics knowledge, such as symmetries or invariances. A more recent approach, Deep Symbolic Regression (DSR) <cit.>, combined recurrent neural networks (RNN) with reinforcement learning for symbolic regression. Despite outperforming many GP-based approaches, DSR struggles with equations that contain multiple variables and constants. Tree-based Symbolic Regression. Furthermore, a few recent studies proposed Monte Carlo tree search (MCTS) <cit.> for symbolic regression. The MCTS is performed in the following four steps: 1) selection, 2) expansion, 3) simulation, and 4) backpropagation. It takes advantage of the trade-off between exploration and exploitation to better discover mathematical expressions. For instance, a most recent work developed Symbolic Physics Learner (SPL) <cit.> to accelerate discovery based on prior physics knowledge. However, SPL does not scale well to mathematical equations with many variables. Pretraining-based Symbolic Regression. Inspired by large language models, researchers also adopted a pre-training technique based on Transformer <cit.> for the discovery of symbolic equations. For example, Biggio et al. <cit.> developed a large scale pre-training model for symbolic regression. To overcome the ill-posed problem in skeleton prediction, recent work developed an end-to-end (E2E) symbolic regression by training Transformer on a large amount of synthetic data. However, Transformer-based symbolic regression requires a ton of training data, which is not practical in real world applications. Moreover, it does not scale well to high-dimensional functions with many variables. § PROPOSED METHOD In this section, we first state the problem of symbolic regression, and then elaborate on the proposed SRCV. A walk-through example is provided to enhance the understanding of our approach. Furthermore, we study how the proposed method effectively reduces the search space in symbolic regression. §.§ Problem Statement Given a set of N data samples 𝒟={𝐱^(n), y^(n)}_n=1^N, where 𝐱^(n)∈ℝ^d and y^(n)∈ℝ. Here d denotes the dimension of input data. The goal of symbolic regression is to learn an analytical mathematical expression, y=f(𝐱)=f(x_1,x_2,…,x_d), based on observed data 𝒟. §.§ Proposed SRCV To improve the accuracy and scalability for multi-variable SR, we propose a novel neural symbolic regression with control variables (SRCV) to decompose it into a set of single-variable SR problems. The key idea is to learn a data generator from observed data using DNNs, and then use it to generate data samples by manipulating an independent variable each time. After that, we estimate the symbolic equation of the current variable based on its generated samples and then combine the discovered equations by adding variables one by one. Fig. <ref> shows the overall framework of the proposed SRCV, which consists of three main parts: i) data generator with DNNs; ii) data generation via control variables; iii) single-variable symbolic regression (SR). 
Below, we will describe these three components in detail. Data Generator with DNNs. In many real-world applications, we only obtain the data samples from multiple input variables, rather than from a single control variable. In order to control data generation for a single variable, we first need to learn a data generator using deep neural networks (DNNs). After learning the mapping function between the input and output, f(x_1,x_2,…,x_d), we can manipulate the input variables to generate different data samples as needed. For instance, we can vary variable x_1 while keeping the other variables fixed to generate data for x_1, i.e., f_x_2,…,x_d(x_1). Data Generation via Control Variables. For this part, we aim to generate different data samples by controlling the input variables. As mentioned earlier, our goal is to decompose multi-variable SR into a set of single-variable SR problems. Suppose that we have learned a symbolic equation of the prior i variables, denoted by x_≤ i (i=1,2,…). Next, we will estimate the mathematical equation of a newly added variable x_i+1. To achieve this, we use the above data generator to generate K groups of data samples for the current variable x_i+1. For each group, we will generate M data samples via varying the previously learned variables, i.e., x_≤ i, given a specific value of x_i+1, as shown in Fig. <ref>. Specifically, we randomly assign M different values to x_≤ i while keeping other control variables x_≥ i+2 fixed and assigning a value to x_i+1. Then they will be fed into the data generator, denoted by f_x_≥ i+2(x_≤ i,x_i+1), to produce M samples for a given x_i+1. Here we use 𝐅^k to represent the k-th group of samples for x_i+1, and 𝐗^k to represent different values of previously learned variables x_≤ i. By randomly choosing K different values for x_i+1, we can generate K groups of data samples 𝐅={𝐅^k}_k=1^K. Our next step is to perform single-variable symbolic regression to estimate the expression of x_i+1 based on the generated samples. Single-Variable Symbolic Regression. We propose single-variable SR to predict the mathematical expression for the current independent variable x_i+1. The key idea is to estimate the coefficients in the skeleton of previously learned variables, e.g., f_x≥ i+1(x_≤ i)=C_1x_i+C_2x_i-1x_1+… + C_j, using the generated samples of x_i+1. As illustrated in Fig. <ref>, our approach is performed in two steps. (1) We adopt optimization techniques, such as BFGS, to estimate K groups of coefficients 𝐂^K={C_1^k,…, C_j^k}_k=1^K in the skeleton using K groups of data samples {𝐅^k}_k=1^K and the corresponding values of previously learned variables {𝐗^k}_k=1^K. Here, the coefficient C_j in the skeleton can be viewed as a function of variable x_i+1. This step enables us to obtain K groups of data samples 𝐂^K related to variable x_i+1 by manipulating it with K different values. (2) We then apply symbolic regression to estimate the mathematical expression about x_i+1 given K groups of 𝐂^K in the first step. Specifically, we feed the coefficient matrix 𝐂^K and the corresponding K different values of x_i+1 into a symbolic model to estimate its skeleton and the corresponding coefficients, {C_1,…,C_j}. Finally, we repeat the above two steps by adding variables one by one until all the variables are covered. §.§ A Walk-through Example To better understand the proposed method, we use a walk-through example to explain its core idea. Take y= x_1x_2+2x_2+2 as an example. 
Given a set of data points {x_1^(n),x_2^(n),y^(n)}_n=1^N, we first adopt DNNs to learn a mapping function f(x_1,x_2) between the input variables x_1, x_2 and the output y, which will serve as a data generator. Then we use the data generator to generate different data samples for the independent variable x_1 by varying x_1 while keeping variable x_2 unchanged (e.g., x_2=2), i.e., f_x_2(x_1). Next, we leverage a symbolic regression model, such as GP or MCTS, to estimate the mathematical expression for x_1, e.g., we get f_x_2(x_1)= 2 x_1+6. Since it is hard to directly derive x_2 from the discovered equation f_x_2(x_1), we need to convert it into the following skeleton, f_x_2(x_1)= C_1 x_1+C_2, where C_1 and C_2 can be viewed as functions of x_2 that need to be estimated later. After that, we add another independent variable x_2 to the data generator f(x_1,x_2), and then generate M data samples given a random value of x_2, as shown in Fig. <ref>. By choosing K different values of x_2, we can generate K groups of data samples, denoted by {𝐅^k(x_1,x_2)}_k=1^K. The next step is to use an optimization method, such as BFGS, to estimate the k-th group of coefficients C_1^k and C_2^k in the skeleton f_x_2(x_1), given 𝐅^k(x_1,x_2) and 𝐗^k=[x_1,1,x_1,2,…,x_1,M]^⊤. Finally, we apply single-variable symbolic regression to estimate the symbolic expression for x_2 given K groups of coefficients {C_1^k}_k=1^K and {C_2^k}_k=1^K, as shown in Fig. <ref>. For instance, we can get C_1=x_2 and C_2=2x_2+2. Since there are no remaining variables, we complete the process of discovering the symbolic equation. If there are additional variables, we repeat this process to estimate their symbolic expressions until all the variables are covered. §.§ Reduction of Search Space We also analyze the relationship between the complexity of a mathematical expression and the search space, and then illustrate that the proposed method can significantly reduce the search space. In this work, the complexity of an expression is defined below. Following prior work <cit.>, complexity is defined as twice the number of binary operators {+, -, ×, ÷}, denoted by N_b, plus the number of unary operators {sin, cos, exp, log}, denoted by N_u, in the equation. Mathematically, the complexity can be formulated as 2N_b+N_u. [Figure: The relationship between complexity and search space for different methods based on 1000 equations with different complexity.] The main reason why we define the above complexity is that it is identical to the number of nodes in an expression tree minus one. Plus, most existing symbolic regression and brute force methods often adopt the expression tree for heuristic searching. Hence, we can use this metric to measure the difficulty of symbolic regression. Fig. <ref> shows the relationship between complexity and search space for our method and the state-of-the-art MCTS based on 1000 equations with different complexity. Regarding how to sample these equations, please refer to the detailed description in Appendix <ref>. We can see from the two black curves that the search space rises as the complexity is increased. Our method can significantly reduce the search space for discovering the same equation compared to the original MCTS in <cit.>. The blue dashed line and solid line represent brute force and MCTS, respectively. We can see that our method can discover more complex equations under the same search space for both brute force and MCTS. 
For example, the original MCTS can discover an equation with a complexity of 16, while our method can estimate an equation with a complexity of 31, as shown in Fig. <ref>. §.§ Algorithm Summary We summarize the proposed method in Algorithm <ref>. We first learn a data generator from observed data using DNNs in Line 3. Lines 5-15 aim to generate K groups of data samples 𝐅={𝐅^k}_k=1^K and 𝐗={𝐗^k}_k=1^K by manipulating the current variable x_i+1 with K different values. Then, we use optimization methods to estimate the coefficients 𝐂^K in the skeleton based on 𝐅 and 𝐗 in Line 16. In Line 17, we apply single-variable SR to estimate the symbolic equation of x_i+1 based on K groups of 𝐂^K and the current variable. We will repeat this process until all the variables are completed. § EXPERIMENT In this section, we carry out extensive experiments to evaluate the performance of SRCV. We first compare the discovery rate of our method with state-of-the-art baselines on two SR benchmarks. Next, we apply SRCV to identify the governing equations of two gene regulatory networks. Finally, we perform ablation studies to explore the impact of certain hyper-parameters on symbolic regression. §.§ Datasets We use two SR benchmark datasets, Nguyen <cit.> and Jin <cit.>, for the first set of experiments. To illustrate the effectiveness of our method on complex regression, we specifically select equations containing at least two variables. We also evaluate our method on two gene regulatory networks, including the genetic toggle switch and the repressilator, using synthetic data. Detailed descriptions of these datasets are presented in Appendix <ref>. §.§ Baselines Four baseline approaches are used for comparison with the proposed SRCV. * Symbolic Physics Learner (SPL) <cit.>. This method incorporates prior knowledge into Monte Carlo tree search for scientific discovery. * Deep Symbolic Regression (DSR) <cit.>. It combines RL-based search method and recurrent neural networks (RNN) for symbolic regression. * Gplearn (GP) <cit.>. It is a classic genetic programming method implemented in Python. * Neural-Guided Genetic Programming (NGGP) <cit.>. It is a hybrid method that combines RNN with GP for symbolic regression. §.§ Experimental Setup In the experiments, we have a pre-defined list of basic operations (+, -, ×, ÷, const) and basic functions (sin, cos, exp, log). For SR benchmarks, we generate N=8000 data samples and then split them into 6400 and 1600 for training and validation, respectively. The proposed SRCV aims to discover the underlying mathematical expressions from data based on the above two lists of candidate operations. The discovered equations will be compared with the ground-truth expressions. For the data generator, we use three fully connected layers (MLP) with hidden sizes of 128, 256, and 128, respectively. Then we train the MLP using Adam optimizer with an initial learning rate of 0.1 and cosine annealing schedule. In addition, we use a single batch containing all input data due to the small number of training samples. For single-variable symbolic regression, we choose M=200 data samples for the current independent variable with K=200 different values. Also, we adopt MCTS in the prior work <cit.> to estimate the symbolic equation with a single variable. This paper will use these hyperparameters in the following experiments, unless specified otherwise. Note that we will conduct ablation studies to investigate the impact of some important hyperparameters on the prediction performance of our method. 
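The data generator in the setup above can be sketched in PyTorch as follows: a three-layer MLP with hidden sizes 128, 256, and 128, trained with Adam (initial learning rate 0.1) and a cosine annealing schedule on a single batch containing all training samples. The number of epochs, the ReLU activations, and the MSE loss are illustrative assumptions of this sketch rather than details specified in the paper.

```python
import torch
import torch.nn as nn

class DataGenerator(nn.Module):
    """MLP surrogate f(x_1, ..., x_d) used to generate samples via control variables."""
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_generator(x_train, y_train, epochs=2000, lr=0.1):
    # x_train: (N, d) tensor, y_train: (N,) tensor; a single batch holds all training data
    model = DataGenerator(x_train.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        opt.step()
        sched.step()
    return model

# usage: vary one input column while holding the others fixed, then query the trained model
# to obtain the controlled samples fed to single-variable symbolic regression
```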
Evaluation Metrics. We run 10 independent tests for each case and calculate the recovery rate for each model. A successful discovery is evaluated using the following two criteria: i) prediction precision and ii) equation equivalence to ground truth. First, the mean square relative error (MSRE) between the prediction and ground truth should be less than 10^-3. Second, the discovered equation should be in an identical or equivalent symbolic form to the target equation. We manually check the discovered symbolic equations to ensure their correctness. §.§ Evaluation on SR Benchmarks First, we evaluate the proposed SRCV on two widely used SR benchmarks: Nguyen and Jin. Table <ref> illustrates the comparison of discovery rate for different methods using 10 random seeds. It can be observed that our method achieves higher recovery rates than the baselines. This is because the proposed SRCV adopts the similar idea of “divide and conquer” that decomposes multi-variable SR into a subset of single-variable SR problems. Besides, our method can significantly reduce the search space of symbolic regression, thus speeding up the discovery. Please refer to the comparison of computational cost in Appendix <ref>. §.§ Evaluation on Gene Regulatory Networks Next, we apply the proposed method to discover the underlying governing equations of two classic gene regulatory networks, the genetic toggle switch and the repressilator. Genetic toggle switch. The genetic toggle switch <cit.> is a synthetic gene regulatory network that has been extensively studied as a fundamental concept in the field of synthetic biology. It has numerous prospective applications in biotechnology, such as the development of biosensors, gene therapies, and synthetic memory devices. The genetic toggle switch consists of two mutually repressive genes controlled by their respective promoters, creating a bistable system that can be toggled between two stable states as follows. dU/dt = α_1/(1+V^β) - U, dV/dt = α_2/(1+U^γ) - V, where α_1 and α_2 are the synthesis rates of repressors U and V, respectively. β and γ are the cooperativities of repression on two promoters. In this experiment, following the bistable region in prior work <cit.>, we choose α_1=4, α_2 = 4, β = 3, γ = 3, and the initial conditions U(0), V(0) ∈ [0, 4]. To train our model, we generate 1000 trajectories by randomly choosing 1000 initial conditions. We use 800 of them as training data and the remaining 200 as validation data. The time span of each trajectory is t ∈ [0, 1] with a sampling time interval of 0.01. Namely, we sample 100 data points for each trajectory. Fig. <ref> (a) illustrates the predicted trajectories of the genetic toggle switch using SRCV with a random initial condition. It can be observed that SRCV precisely predicts the trajectory, closely matching the ground truth obtained from the ODE solver (odeint). The mean square relative error (MSRE) of our method is about 9.03 × 10^-4. Importantly, our method successfully discovers the underlying governing equation from observed data as follows. We can see that it is quite close to the target model in Eqs. <ref>. We also present the experimental results of the baselines in Appendix <ref>. dU/dt = 3.919/(0.972+V^3) - U, dV/dt = 3.921/(0.972+U^3) - V. Repressilator. The Repressilator <cit.> is another type of gene regulatory network that exhibits oscillatory behavior. This model is critical in studying the dynamics of genetic circuits and provides insights into the principles of oscillatory systems in biology. 
Comprising three genes, it operates through a feedback loop in which each gene produces a repressor protein that suppresses the expression of the subsequent gene. This process results in a cyclic pattern of gene expression, described as follows. dM_i/dt = -M_i + α/(1+P_j^n) + α_0, dP_i/dt = -β(P_i - M_i), where i = lacI, tetR, cI and j = cI, lacI, tetR, respectively. In Eqs. <ref>, P_i denotes the repressor protein concentrations, and M_i represents the corresponding mRNA concentrations, where i is lacI, tetR, or cI. If there are saturating amounts of repressor, the number of protein copies produced from a given promoter is α_0. Otherwise, this number is α+α_0. β represents the ratio of the protein decay rate to the mRNA decay rate, and n is a Hill coefficient. In this experiment, we set β = 1, α_0 = 10^-5, α = 10, n = 3, and the initial conditions M_i, P_j ∈ [0, 5]. To train our model, we generate 5000 trajectories using 5000 random initial conditions. They are split into 4000 training and 1000 validation trajectories, respectively. The time span of each trajectory is t ∈ [0, 4] with a sampling time interval of 0.01. Fig. <ref> (b) illustrates the trajectory predictions of the repressilator using our method. We can see that the trajectory predicted by SRCV closely aligns with the ground truth, with an MSRE of about 7.54× 10^-5. Additionally, our method successfully identifies the underlying governing equations, which are provided in Appendix <ref>. §.§ Ablation Studies Effect of M in data generation. First, we examine the impact of M different values of previously learned variables in data generation on the recovery rate. As shown in Table <ref>, when M varies from 50 to 200, the recovery rates of our approach remain fairly consistent. This suggests that our method is not sensitive to the number of generated samples M as long as it is sufficiently large. Effect of K groups in data generation. Next, we study the effect of K groups of generated data samples for the current variable on the recovery rate of SRCV. We can observe from Table <ref> that the recovery rates remain almost the same as K changes from 50 to 200. Effect of N training samples. Lastly, we investigate the influence of the training data size (N) on the proposed SRCV. It can be seen that our method achieves good performance when the number of training samples is sufficiently large. If the equations are more complex, it might be necessary to increase the amount of training data provided to the DNNs for optimal results. § CONCLUSION In this work, we developed SRCV, a symbolic regression method that decomposes multi-variable symbolic regression into a sequence of single-variable sub-problems by controlling variables. Specifically, SRCV learns a DNN-based data generator from the observed data, manipulates one variable at a time to estimate expression skeletons and their coefficients, and thereby significantly reduces the search space of symbolic regression. The evaluation results on the Nguyen and Jin benchmarks demonstrated that SRCV achieves higher recovery rates than state-of-the-art baselines, and it successfully discovers the governing equations of two gene regulatory networks, the genetic toggle switch and the repressilator. § GENERATE EXPRESSION TREE IN FIG. <REF> We introduce how to generate a mathematical expression with a specific complexity to estimate the search space in symbolic regression. 
In this work, we attempt to sample equations uniformly, i.e., all valid equations should have the same probability of being selected. A valid equation is defined as follows. An equation is valid if it is a proper mathematical equation consisting of M_t (M_t = 5) terminal symbols {x_1, x_2, x_3, x_4, const}, M_u (M_u = 4) unary operators {sin, cos, log, exp}, and M_b (M_b = 4) binary operators {+, -, ×, ÷}. Moreover, it should not contain nested unary operators, such as sin(cos(x_1)+1) or log(sin(x_1)), since they are not very meaningful in most real cases. Before describing our sampling method, we first define F_i and G_i that will be used for sampling. Let F_i and G_i be the number of valid equations with complexity i containing no unary operator and at least one unary operator, respectively. The criterion for distinguishing two equations is based on the structure of their expression trees rather than their algebraic equivalence. According to this, some equations that are algebraically equivalent, such as x_1+x_2+x_3, will be counted multiple times due to different tree structures. Nevertheless, this has almost no influence on the overall results. Next, we introduce the idea of calculating F_i and G_i using dynamic programming. First, we consider F_i. When the complexity i is 0, the valid expression set contains only M_t terminals without unary operators, so F_0 = M_t. When the complexity is greater than or equal to 1, we need to consider the root of the expression tree. Since F_i counts expression trees with no unary operator, the root has to be a binary operator with M_b possibilities. Suppose the left subtree has a complexity of j (j=0, …, i - 2); then the complexity of the right subtree is i-j-2. As a result, there are F_j F_i-j-2 feasible options for a given j. Finally, we can sum up all the cases and multiply the result by M_b to get F_i: F_i = M_t if i = 0, and F_i = M_b ∑_j=0^i-2 F_j F_i-j-2 if i > 0. For G_i, when the complexity is 0, there is no unary operator, so G_0 = 0. When the complexity is greater than or equal to 1, the root can be either a unary operator or a binary operator. (1) If the root is a unary operator, we have M_u F_i-1 options for tree structures. (2) If the root is a binary operator, we assume that the left subtree has a complexity of j (j=0, …, i - 2); then the right subtree has the complexity of i-j-2. According to the Definition <ref>, one or both of the subtrees should contain at least one unary operator, so the number of options for tree structures should be G(i,j) = G_j G_i-j-2 + G_j F_i-j-2 + F_j G_i-j-2. Since the root has M_b choices in this scenario, we can multiply G(i,j) by M_b to get the total options for a binary-operator root. Combining (1) and (2), we can get G_i as follows: G_i = 0 if i = 0, and G_i = M_u F_i-1 + M_b ∑_j=0^i-2 (G_j G_i-j-2 + G_j F_i-j-2 + F_j G_i-j-2) if i > 0. Lastly, we summarize the sample subroutine in Algorithm <ref>. First, we need to figure out whether a mathematical expression contains a unary operator or not. The probability of containing at least one unary operator is G_i / (F_i+G_i), and that of not containing one is F_i / (F_i+G_i). If there is no unary operator, we use SampleByNU in Lines 5-17 to generate the expression. This function operates as follows: if the complexity is 0, we select one of the M_t (M_t = 5) terminal symbols mentioned above. Otherwise, we sample the operator at the root and the complexity of the left subtree according to the number of their options. After this, we recursively invoke subroutines to generate the subtrees. 
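The counting recurrences for F_i and G_i above can be evaluated bottom-up before sampling; a minimal Python sketch (the constants follow the definition of a valid equation, and the function name is illustrative):

M_t, M_u, M_b = 5, 4, 4   # terminals, unary operators, binary operators

def count_valid_equations(max_c):
    """Dynamic programming for F_i (no unary op) and G_i (at least one unary op)."""
    F = [0] * (max_c + 1)
    G = [0] * (max_c + 1)
    F[0], G[0] = M_t, 0
    for i in range(1, max_c + 1):
        for j in range(0, i - 1):              # complexity of the left subtree
            k = i - j - 2                      # complexity of the right subtree
            F[i] += M_b * F[j] * F[k]
            G[i] += M_b * (G[j] * G[k] + G[j] * F[k] + F[j] * G[k])
        G[i] += M_u * F[i - 1]                 # case where the root is a unary operator
    return F, G

F, G = count_valid_equations(16)
print(F[16] + G[16])   # number of distinct expression trees with complexity 16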
If there exists at least one unary operator, we use SampleByU instead, which operates in a similar way to SampleByNU. By doing this, we can generate an expression tree with a specified complexity. § DATASET DESCRIPTION First, we introduce how to generate training data and validation data using two SR benchmarks. As shown in Table <ref>, we choose a certain range and then generate 8000 data samples. Then we split them into 6400 and 1600 for training and validation, respectively. Next, we describe how to generate the synthetic data for two gene regulatory networks: the genetic toggle switch and the repressilator. For the genetic toggle switch, we generate 1000 trajectories by randomly choosing 1000 initial conditions. We use 800 of them as training data and the remaining 200 as validation data. The time span of each trajectory is t ∈ [0, 1] with a sampling time interval of 0.01. Namely, we sample 100 data points for each trajectory. For the repressilator, we generate 5000 trajectories using 5000 random initial conditions. They are split into 4000 training and 1000 validation trajectories, respectively. The time span of each trajectory is t ∈ [0, 4] with a sampling time interval of 0.01. § EXPERIMENTAL SETUP FOR DIFFERENT METHODS In this subsection, we present the experimental settings for the baselines. Following the prior work <cit.>, for gplearn, we set the population to be 10000 and the number of generations to 50. For the other baselines, SPL, NGGP, and DSR, we directly use their source code with the default parameters to implement the experiments. § COMPUTATIONAL COST OF DIFFERENT METHODS We also compare the computational cost of the proposed SRCV and baseline approaches on the Nguyen benchmark, as shown in Table <ref>. Our experimental results illustrate that our method, including DNNs and single-variable SR, has a lower running time than the baselines on Nguyen-12. However, it requires more running time than GP on Nguyen-09 and Nguyen-10, and than DSR and NGGP on Nguyen-10 and Nguyen-11. Nevertheless, our method has a much higher recovery rate than DSR and NGGP, as illustrated in Table <ref>. § DISCOVERED GOVERNING EQUATIONS OF REPRESSILATOR Below, we present the discovered equations of the repressilator using our method. We can see that the equations are very close to the target model, except for α_0 = 10^-5. The main reason is that α_0 is too small to be estimated. However, it does not impact the trajectory prediction much, according to our results in Fig. <ref>.
dM_lacI/dt = -M_lacI + 9.939/(0.982+P_tetR^3)
dM_tetR/dt = -M_tetR + 10.338/(1.035+P_cI^3)
dM_cI/dt = -M_cI + 9.845/(0.987+P_lacI^3)
dP_cI/dt = M_lacI - P_cI
dP_lacI/dt = M_tetR - P_lacI
dP_tetR/dt = M_cI - P_tetR.
§ BASELINES ON GENE REGULATORY NETWORKS We also adopt the baseline methods to identify the governing equations of the two gene regulatory networks. As illustrated in Table <ref>, we can observe that the governing equations uncovered by our SRCV method are close to the ground truth, while the baselines fail to discover governing equations from data. Note that we only list one representative equation in the following Table, but the full set of equations discovered by our method is given in Eqs. <ref> and <ref> above. Thus, the proposed SRCV demonstrates superior performance over the baselines in discovering symbolic equations. § LIMITATIONS AND FUTURE WORK The accuracy of our evaluation results is impacted by the accuracy of single-variable symbolic regression. In this work, we adopt the state-of-the-art MCTS method for symbolic regression. 
For future work, we plan to develop new single-variable symbolic regression models to further improve accuracy. The accuracy of the prediction results is also impacted by the number of training samples for the DNNs. If the amount of training data is limited, we will need to explore a physics-enhanced neural symbolic regression model. § BROADER IMPACT The goal of this work is to improve the accuracy and scalability of symbolic regression for scientific discovery. The proposed SRCV method has demonstrated superior performance over state-of-the-art methods in discovering analytical expressions from data, which can promote AI for scientific discovery. Note that this fundamental research will not cause any potential negative societal impacts. § COMPUTING RESOURCES We run our experiments on a server with one A5000 GPU with 24 GB of graphics memory. The server has a 32-core 3.4 GHz AMD EPYC 7532 processor, 250 GB of RAM, and 4 TB of SSD storage.
http://arxiv.org/abs/2306.09318v1
20230615175314
Inroads into Autonomous Network Defence using Explained Reinforcement Learning
[ "Myles Foley", "Mia Wang", "Zoe M", "Chris Hicks", "Vasilios Mavroudis" ]
cs.CR
[ "cs.CR", "cs.LG" ]
Copyright 2022 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CAMLIS'22: Conference on Applied Machine Learning in Information Security (CAMLIS), October 20–21, 2022, Arlington, VA.
Myles Foley ([email protected]), Imperial College London
Mia Wang ([email protected]), Imperial College London
Zoe M ([email protected]), The Alan Turing Institute
Chris Hicks ([email protected]), The Alan Turing Institute
Vasilios Mavroudis ([email protected]), The Alan Turing Institute
Computer network defence is a complicated task that has necessitated a high degree of human involvement. However, with recent advancements in machine learning, fully autonomous network defence is becoming increasingly plausible. This paper introduces an end-to-end methodology for studying attack strategies, designing defence agents and explaining their operation. First, using state diagrams, we visualise adversarial behaviour to gain insight about potential points of intervention and inform the design of our defensive models. We opt to use a set of deep reinforcement learning agents trained on different parts of the task and organised in a shallow hierarchy. Our evaluation shows that the resulting design achieves a substantial performance improvement compared to prior work. Finally, to better investigate the decision-making process of our agents, we complete our analysis with a feature ablation and importance study. Keywords: Reinforcement Learning, Autonomous Cyber Defence, Deep Learning, Network Defence Inroads into Autonomous Network Defence using Explained Reinforcement Learning [ Received ... ; accepted ... ] ============================================================================== § INTRODUCTION Computer network security is characterised by an asymmetry as the defender needs to ensure constant protection of the network's components, while the adversary can opportunistically single out weak entry points. Such asymmetries have been identified and addressed in many other areas of cyber security. For example, cryptographic protocols (e.g., TLS) thwart denial of service attacks by ensuring that the prover commits enough computation cycles before the verifier does so. In network defence, however, the problem remains open as the task is complex <cit.> and involves a wide array of both attack vectors and mitigation tools. Thus, network defence is currently handled primarily by human experts, which entails high operational costs. RL, and particularly deep RL (DRL), excels in interactive tasks that cannot easily be solved using analytical solutions. Human and even super-human levels of performance have been achieved in a range of complex tasks including classic board games such as chess and Go <cit.>, video games ranging from classic Atari <cit.> to multi-player real-time strategy games <cit.>, autonomous driving <cit.>, and robotics <cit.>. Recently, DRL has also been successfully applied to autonomous network defence <cit.>, a highly interactive task where the defender proactively monitors the state of the network, identifies abnormalities, and acts to remediate them. Commonly, this takes the form of a shallow hierarchy of specialised subagents coordinated by a controller, any combination of these being autonomous. To date, however, there has been limited consideration of the explainability of these models. Explainable AI has, in domains such as natural language processing and computer vision <cit.>, proven useful not only for end users but also experts and developers of AI systems. 
DRL models are particularly challenging to explain because the neural networks which represent their agent policies are not readily understandable by humans. Nonetheless, the ability to explain and understand the actions of an autonomous defensive agent is critical. This work investigates, and answers in the affirmative, whether explainable RL (XRL) models and environments can improve autonomous defensive capabilities and aid in their development. §.§.§ Contributions Our main contributions are: * We develop methodologies for visualising (i.e., explaining) attacker functionality in the CybORG cyber environment. Our methodology highlights previously undocumented differences in the adversary models and motivates two new controller architectures with improved classification accuracy. * We present the full details of our new controller and specialised subagent models. We then evaluate them against two classes of adversary in the CybORG environment, realising substantial performance improvements. * We perform a feature ablation and importance study to understand the most influential elements in the observation space and explain our model outputs. § RL BACKGROUND In this section we discuss the key RL techniques that are relevant for the rest of the paper. §.§ Deep RL Algorithms §.§.§ PPO Proximal Policy Optimisation (PPO) is an efficient policy gradient method <cit.> for DRL. It has been shown to outperform other popular algorithms such as A3C <cit.>, achieving super-human performance in a variety of complex tasks including 49 separate ATARI arcade games <cit.>. Despite its effectiveness in very complex environments <cit.>, it has seen only limited use in security settings <cit.>. PPO uses a policy π_θ (θ∈ℝ) with an objective function that is defined by the total reward J(θ) = 𝔼_π_0[∑^∞_t=0γ^t r_t]. By formulating the objective function in this way, actor-critic architectures can be used: the actor selects an action which is evaluated by the critic. The policy gradient is then computed: ∇_θ J(θ) = 𝔼_π_0[∇_θ log π_θ (s, a) A_π_θ(s)] where A_π_θ(s) is the advantage of taking action a instead of the average action as computed by the policy π_θ (Q_π_θ(s,a) - V_π_θ(s)) <cit.>. During gradient descent, PPO introduces a clipping function to both prevent reaching local optima during large updates and avoid smaller updates that significantly increase the length of training. §.§ Curious Exploration Curiosity is a technique that enables agents to explore their environment based on an intrinsic reward signal not provided by the environment <cit.>. Such a signal is particularly useful in the absence of a continual extrinsic reward (e.g., the running score found in some games). Pathak et al. <cit.> introduce the Intrinsic Curiosity Module (ICM), a semi-supervised technique in which agents choose actions based on the uncertainty in the outcome of each action, intrinsically motivating the exploration of unknown states. ICM also ensures that agents are only incentivised to reach states that are impacted by their actions, avoiding those which are inherently unpredictable. §.§ Multi-Armed Bandits The Multi-Armed Bandit (MAB) is an RL problem so called after `one-armed bandits' or slot machines. It is a special case of the Markov decision process where all actions return to the same state; thus there is only one state. In such problems agents take a single action and observe the reward; the goal is then to maximise this reward. In this way it can be seen to have an episode length of 1, and a maximum number of episodes M. 
Thus the agent must explore the actions to find the optimal strategy to maximise this reward over M. <cit.> §.§ Explainable RL Explainable RL (XRL), a fledgling sub-field of explainable AI, is the study of tools and methods which enhance human understanding of the actions taken by autonomous agents. A recent and thorough review of XRL is provided by Heuillet at al. <cit.> and separately by Puiutta and Veith <cit.>. XRL methods are commonly divided between those which are intrinsic, sometimes called transparent, and those which are post-hoc. Intrinsic XRL models are inherently interpretable and offer explainability at the time of training. In contrast, post-hoc explainability occurs after training; often by creating a second, simpler model to provide explanations. In DRL, learned policies are represented by neural networks making them difficult to interpret. Post-hoc explainability allows the performance advantages of DRL <cit.> to be retained whilst facilitating human understanding of autonomous decision making. Explainability is not limited to users and experts affected by the decisions of models but, as in this work, is a valuable researcher's aid in developing more efficient and higher-performance models. § NETWORK SIMULATION ENVIRONMENT We use the CybORG environment <cit.> which simulates the computer network of a manufacturing plant, as shown in Figure <ref>. The network consists of five user hosts (Subnet 1), three enterprise servers (Subnet 2[Subnet 2 also includes the defender's machine.]), three operational hosts and the operational server (Subnet 3). Each host exposes a number of network services that other hosts can connect to, and which may have exploitable vulnerabilities. However, due to the network's firewalls hosts in Subnet 1 cannot directly connect to machines in Subnet 3, and the operational server is accessible only through the operational hosts. The liveness of the operational server has a direct impact on the manufacturing and is considered critical. CybORG assumes two players, a defender and an adversary, who interact with the turn based environment using the actions available to them. A common drawback of simulated environments in RL is the reality gap which causes agents not to generalise sufficiently when moved from the simulation (i.e., training) to reality (i.e., evaluation). This is due to the simulation not adequately matching reality (e.g., in robotics). To address this, CybORG provides a network emulator that runs on Amazon Web Services (AWS). The combination of simulation and emulation ensure that the reality gap is minimised, with the actions available and their effect on the environment consistent across both <cit.>. The CybORG environment is host to the `Cyber Autonomy Gym for Experimentation' (CAGE) challenge <cit.>. CAGE is an international Kaggle-style competition, providing an increasingly challenging benchmark for the evaluation of autonomous defensive agents. The competition is currently in its second iteration (CAGE II). §.§ Action Space Attackers and defenders have unique action spaces. Defenders perform actions at the host level: 1) Analysing the processes running, 2) Terminating malicious processes, 3) Restoring the host to a previous (benign) state, and 4) Deploying honeypot[Honeypot refers to a decoy system or service that lures attackers by appearing to suffer from known security vulnerabilities. Honeypots are used to detect malicious actors and study their behaviour.] services. 
Adversaries can: 1) Scan a subnet for hosts, 2) Scan the ports of a host, 3) Exploit a service on a port, 4) Escalate their access, and 5) Disrupt the services on the operational server. Both players have a `sleep' action to perform no action on the network. Based on the selected actions, the environment updates its state and updates the agents' scores. It should be noted that even valid actions may not succeed, as the CybORG simulator introduces randomness to mimic the behaviour of the emulator (e.g. a valid node restoration may occasionally fail). §.§ Observation Space The defender's observation space is a vector of 52 bits i.e., 4 bits for every network host. The first two bits represent whether the host state is unknown (none), scanned or exploited (set when a decoy is triggered); the last 2 bits specify the access the attacker has on the host machine (i.e., none, user and administrator). As in a real network defence situation, neither the defender nor the adversary is omniscient. Neither agent knows the state of the network or the other's position with absolute certainty. In addition, the outcomes of actions are stochastic. For example, from the defender's perspective, when an exploit fails it is not possible to precisely determine which exploit was attempted. This can be crucial information in the instance that an adversary favours a specific exploit strategy. A better informed defender could strategically place decoys on the targeted service to frustrate and evade further attempts more effectively. §.§ Reward Function Most games include a scoring function that quantifies the performance of the player. Similarly, CybORG uses a reward function that rewards the adversary and penalises the defender for every compromised or impacted network host. The reward function is as follows: on each turn, for every host on which the adversary has admin access, the defender receives a reward of -0.1 and for every server the reward is -1. There is a -10 reward for disruption on the operational server and a -1 reward when any device is `restored'. In the context of RL, the negative reward for the defensive agent incentivises the agent to take actions that minimise the effect of the adversary. §.§ Adversaries The environment includes two adversaries: the BLineAgent that has prior knowledge (i.e., full knowledge of the network's structure but not its current state), and the MeanderAgent which does not have any prior information. Both agents share the same objective, to reach the operational server and, after escalating their privileges, disrupt its services (i.e., impact its liveness). Due to prior knowledge, the BLineAgent follows an optimal exploitation trajectory to the operational server. In contrast, the MeanderAgent breadth-wise scans the network for vulnerable hosts and gradually traverses the subnets. To prevent trivial defence strategies, the adversary is given user access on a predetermined host (in Subnet 1) that cannot be `restored' to a benign state by the defender. § MODEL The models that we train have a similar basic structure to those described in <cit.> that were trained for CAGE I. In particular, we focus our efforts on training a hierarchy of specialised defensive agents using DRL. These agents feature a controller agent that, at each time step, chooses a subagent to perform the action. Each subagent is trained against a specific adversarial strategy. As described in Section <ref>, the environment includes two adversaries. 
The hierarchical architecture was developed specifically to exploit this. The model supports two expert subagents that, through the controller, are `consulted' over the course of an episode (Figure <ref>). This avoids the performance limitations of a single, more general agent. Given the differences in the two adversaries, each subagent requires a different neural architecture for best performance. These are described below. §.§ MeanderAgent Defence Our MeanderAgent defensive subagent was trained using the PPO algorithm and utilises a comparatively deeper neural network including three hidden layers with widths 256, 256, and 52. Full details of the hyperparameters used can be found in Appendix <ref>. Notably, curiosity did not improve the performance. Since the MeanderAgent is explicitly designed to explore the network during its attack, the opposing defender is also forced to explore more broadly and to employ a wider range of strategies during training. As such, it learns sufficiently general strategies without the need for curiosity. §.§ BLineAgent Defence In contrast to the MeanderAgent, the BLineAgent follows a near-optimal path through the network. The BLineAgent defence, therefore, is at much greater risk of overfitting during training. As a result, we found that when training defensive agents against the BLineAgent, it was beneficial to include the curiosity mechanism. In this paper we consider two subagents for BLineAgent defence: an Action Knowledge (AK) subagent, and a State Representation (SR) subagent. Both are trained using PPO with curiosity but make different modifications to the state space. The AK subagent modifies each observation by appending a single bit indicating the success of the previous action. We find that this gives the subagent a better understanding of the defensive process and results in an improvement in performance. Secondly, the SR agent is identical to the AK subagent, but receives observations of 27 floats as opposed to 53 bits. In this state space, each host has two floats to represent the features of activity and compromise. The additional float indicates whether the previous action succeeded. Although the mean episode reward is comparable to the AK agent's mean reward, we see a notable decrease in variance. § EXPLAINING THE ADVERSARY MODEL The behaviour of the adversaries is dependent on the network topology and the choice of defensive actions. In addition, there is stochasticity in both the choice and outcome of actions across all of these components. Explaining adversarial behaviour proved essential in developing effective defensive models. To better understand each adversary we, at each time step, record the choice of action, outcome and the resulting state transition. For consistency across multiple episodes we resolve IP ranges and addresses to subnets and hostnames, respectively. We observe that the connectivity (i.e., the edges) of the resulting graph provides a clear signal for differentiating the two adversaries. Figure <ref> shows a subset of the observations, recorded during the first four steps of adversarial behaviour, in which the BLineAgent and MeanderAgent can be seen adopting a depth-first and breadth-first approach to attacking the network, respectively. In Section <ref> we present two methods which make use of this observation to more accurately determine the class of adversarial threat than in prior work <cit.>. In Appendix <ref> we include the fully extracted adversary specifications generated by our methodology. 
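The graph-extraction procedure above can be sketched in a few lines; this is an illustrative sketch only, and the per-step record format and the subnet/hostname lookup tables are simplified placeholders rather than the actual CybORG interfaces.

from collections import defaultdict

def resolve(address, subnet_map, host_map):
    """Map raw IP ranges/addresses to subnet and hostname labels so episodes are
    comparable (placeholder for the actual CybORG lookup tables)."""
    return subnet_map.get(address, host_map.get(address, address))

def extract_behaviour_graph(episodes, subnet_map, host_map):
    """Build an action/outcome transition graph from recorded adversary steps.

    `episodes` is assumed to be a list of per-step records of the form
    (action_name, target_address, outcome, resulting_state).
    """
    edges = defaultdict(int)
    for episode in episodes:
        prev = "start"
        for action, target, outcome, state in episode:
            node = f"{action}:{resolve(target, subnet_map, host_map)}:{outcome}"
            edges[(prev, node)] += 1          # edge frequencies over many episodes
            prev = node
    return edges

# The connectivity of the resulting graph separates the two adversaries: the
# MeanderAgent fans out (breadth-first scanning) while the BLineAgent forms a
# near-linear chain towards the operational server.
demo = [[("ScanSubnet", "10.0.1.0/24", "success", None),
         ("ScanPorts", "10.0.1.5", "success", None)]]
print(extract_behaviour_graph(demo, {"10.0.1.0/24": "Subnet1"}, {"10.0.1.5": "User2"}))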
§ HIERARCHICAL RL ARCHITECTURE In order to improve the performance of our defensive capability we explore the use of alternative controller models. We introduce two new types of controller for this task, one heuristic and another bandit-based. §.§ Bandit Controller Model We employ a bandit controller that is based on the multi-armed bandit architecture. The task is to determine which of the adversaries is currently attacking the network, based on the sequence of observations. However, using a bandit or bandit-like approach comes with several challenges in this setting. In the traditional multi-armed bandit there is no notion of state: an agent takes actions and then observes the reward. However, in the CybORG environment a unique observation cannot be used to determine the current adversary. Thus sequences of observations need to be observed and, due to the stochasticity, there are multiple sequences that can be observed over a given number of timesteps. A single bandit predicting the adversary will do no better than 50%. This is analogous to the traditional multi-armed bandit setting. Consider the task of determining which of two slot machines has the higher payout in a casino (A): the task is trivial after several attempts. Now consider a second identical casino (B) where the payout of the machines is flipped. Again, we can find the better machine in B after some error. Finally, consider being randomly placed in A or B and having only one attempt to select the slot machine with the highest payout. As we do not know which casino we are in (as everything is identical), the best possible guess rate is 50%. We are able to solve this problem by abstracting the observations (which casino you are in) from the bandit. In this way we define N_b bandits, one for each of the observations. As such the observation is unique to the bandit predicting the adversary. While this could also be solved by a logistic regression model, the Bandit Controller is able to learn with fewer samples, also being able to determine new adversary behaviours and learn to predict them in an online fashion. §.§.§ Bandit Controller Implementation The bandit learning algorithm, shown in Algorithm <ref>, allows the bandit controller to track the states that it has previously seen, creating a new bandit for each newly seen state. Each of these bandits is initialised with Q values for each of the actions a ∈{0, 1, 2}, where these correspond to the MeanderAgent, BLineAgent, and no adversary. The Q values are updated using reward R and the number of times that prediction has been selected, N(A). We train the bandit controller for 15,000 timesteps, using epsilon = 0.01. The Bandit Controller has a state different to that of its subagents. Its state is a sliding window of the last four timesteps from the CybORG environment. As we can see from Figure <ref>, the minimum number of actions before an adversary has user privilege (and the first unambiguous instance of malicious behaviour) is three. A defensive agent can observe this on the fourth timestep, hence a prediction from the bandit controller only needs to happen once per episode. Finally, we use a simple reward function of +1 for a correct prediction, and -1 for an incorrect prediction. §.§ Heuristic Controller Model We also construct a heuristic for predicting the adversary. This approach is possible as we are able to observe the patterns that the adversaries display in a controlled version of the CybORG environment. 
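Before turning to the heuristic, the per-observation bandit update of Algorithm <ref> can be summarised in a minimal sketch; the class interface and helper names below are illustrative rather than the released implementation, and the 4-step observation window follows the description above.

import random
from collections import defaultdict

class BanditController:
    """One three-armed bandit per observed 4-step window (0: Meander, 1: BLine, 2: none)."""
    def __init__(self, n_arms=3, epsilon=0.01):
        self.epsilon = epsilon
        self.Q = defaultdict(lambda: [0.0] * n_arms)   # one Q-vector per unique window
        self.N = defaultdict(lambda: [0] * n_arms)     # per-arm selection counts

    def predict(self, window):
        key = tuple(window)                            # last four observations, flattened
        if random.random() < self.epsilon:
            return random.randrange(len(self.Q[key]))  # epsilon-greedy exploration
        q = self.Q[key]
        return q.index(max(q))

    def update(self, window, arm, true_adversary):
        key = tuple(window)
        reward = 1.0 if arm == true_adversary else -1.0
        self.N[key][arm] += 1
        # Incremental mean update of the action value for this observation.
        self.Q[key][arm] += (reward - self.Q[key][arm]) / self.N[key][arm]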
As we can see in Figures <ref> and <ref>, the BLineAgent and MeanderAgent have fundamentally different strategies in the first four moves they make. Using this privileged view of the adversarial behaviour allows for a manual and formal definition of the behaviour, given in Heuristic <ref>. As in the Bandit Controller, we use this heuristic once per episode, on the fourth timestep, to determine which adversary is attacking the network. The scanning of two different hosts on the network within the first four timesteps indicates the presence of the MeanderAgent adversary. Otherwise, this is either the BLineAgent adversary or the User agent. § EVALUATION In this section we evaluate the performance of our specialist subagents against the two adversaries. We further investigate the performance of the controller models. Finally, we evaluate the full defensive model capable of defending against either adversary. We use the model described in prior work <cit.> as a baseline performance measure (baseline for brevity), as this has been established as state-of-the-art and achieved the best score in CAGE I. Because the scoring function assigns only penalty points (i.e., 0 is the theoretical maximum score), all the reported rewards are negative. §.§ Specialised Sub Agents §.§.§ Training Results Figure <ref> shows the average reward of each defensive subagent as trained against the BLineAgent (left column), and the MeanderAgent (right column). The AK and SR methods achieve peak rewards against the BLineAgent of -12.227 and -11.465 respectively, both of which are an improvement over the baseline <cit.> PPO-with-curiosity model, which achieves -13.475. Furthermore, removing curiosity negatively impacts the reward against the BLineAgent, as shown clearly in the max reward plot of Figure <ref>(c). The difference in mean reward is explained by the maximum and minimum rewards. All models apart from PPO experience a first plateau in maximum reward of -9 and then step up to a second plateau of around -1. The SR agent finds the optimal policy earlier than the AK agent during training. In addition, the minimum rewards of the baseline and PPO model have greater variance than those of the SR and AK agents, and the AK agent has a marginally higher probability of scoring very poorly (i.e., below -300). Earlier optimal policy convergence and smaller policy variability make the SR agent the best model against the BLineAgent. This corroborates the standard deviation graph and the 1,000-episode evaluation results in Table <ref>, where the SR agent displays a less negative reward with a standard deviation that is only a fifth of the AK agent's. Against the Meander attacker, the PPO and SR agents outperform the baseline (-24.384) with best mean rewards of -17.065 and -19.959 during training. Figure <ref> shows the advantage of using a PPO 3-layer architecture, which results in higher min and max rewards with reduced variance. §.§.§ Specialist Agents Here we evaluate the performance of our defensive subagents against their separate adversaries. We select the best-performing agents from training for evaluation: PPO defence for the RedMeander and both the AK and SR defences for the BLineAgent. We evaluate each for 1,000 episodes of 100 steps and summarise our results in Table <ref>. For completeness, we also cross-evaluate our agents against the adversary not seen during training. 
Against the RedMeander adversary, PPO defence outperforms the baseline against both adversaries resulting in a mean score of -21.3 (improvement by a factor of 3.6) and a reduction in standard deviation by a factor of more than 9. This highlights the advantage of the increased depth of the neural network over the baseline. Against the BLineAgent adversary, we see that the SR agent is able to achieve a 1.5 times greater reward, with 4.89 times lower standard deviation. However, this comes at the cost of generality. A trend in all of the subagents is that when defending against previously unseen adversaries, the performance is significantly diminished. §.§ Controller Models As seen in Section <ref>, the defensive subagents do not generalise well beyond the adversaries that they are trained against. To address this, Sections <ref> and <ref> introduce two new controller architectures: Heuristic and Bandit. Here we evaluate the ability to correctly predict the adversary within the first four timesteps of an episode (as our controllers predict the adversary on the fourth timestep). For each episode, we randomly sampled one of the two red adversaries (i.e., 50% probability of selecting BLineAgent). Table <ref> shows that the baseline model has strong biases on selecting the BLineAgent agent. To investigate further, we let the baseline agent make predictions on each timestep until the end of the episode (c.f. only guessing after the 4th timestep). As seen in Table <ref>, the repeated guesses significantly reduced bias but accuracy remained low. In contrast, neither our bandit or heuristic controller exhibit this bias and can perfectly predict the correct attacker type. §.§ Hierarchical Defensive Model Here we evaluate the complete defensive model. Table <ref> reports the mean and standard deviation for the `best pair' combinations of subagents as determined by our evaluation in Section <ref> ( i.e., PPO for MeanderAgent, and AK or SR for BLineAgent). We observe that the subagents play a significant role in the improvement over the baseline. Over episodes of 100 timesteps, we are able to improve the result by at least 30% for the BLineAgent and 170% for MeanderAgent. The lowest reward values are split evenly between the Heuristic and Bandit controllers. These models outperform the PPO controller models regardless of the subagents in four of the six combinations of adversary and episode length. MeanderAgent performance is improved by 11.7%, which is more significant than BLineAgent (only improved by 1%) when using Bandit or Heuristic controller. Table <ref> indicates that models trained with BLineAgent perform poorly on MeanderAgent. This can be explained by the fact that BLineAgent has more information about the network, so its behaviour is more predictable. In contrast, MeanderAgent's actions have more randomness. § EXPLAINING THE DEFENSIVE MODELS It is critically important that human operators can understand the decisions made by autonomous agents. Using post-hoc XRL techniques, we determine whether our defensive agents are truly defending the network as their primary objective or as a side effect of an unintended objective. This is common in RL where agents may manipulate improperly specified reward mechanics to maximise their score in unintended ways. 
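Both studies below operate on the defender's bit-vector observation described earlier; as an illustration, one feature group can be removed by zeroing the corresponding bits before evaluation. This is a minimal sketch under stated assumptions: the 4-bits-per-host layout (first two bits: activity/scan, last two: adversary access) and the trailing previous-action-success bit follow the observation space described above, and the helper is hypothetical rather than part of the released code.

import numpy as np

def ablate(obs, feature, n_hosts=13):
    """Zero out one feature group in the defender's bit-vector observation."""
    obs = np.array(obs, dtype=np.float32).copy()
    for h in range(n_hosts):
        base = 4 * h
        if feature == "adversary_scan":
            obs[base:base + 2] = 0.0           # activity bits for this host
        elif feature == "adversary_access":
            obs[base + 2:base + 4] = 0.0       # compromise bits for this host
    if feature == "previous_action" and obs.shape[0] > 4 * n_hosts:
        obs[4 * n_hosts] = 0.0                 # appended success bit (AK/SR agents)
    return obs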
§.§ Ablation Study To understand which of the features in the observation space influence the agents decision making we perform an ablation study over knowledge of: 1) the success or failure of the previous action (hence referred to as previous action), 2) the adversary's access onto a host (hence referred to as adversary access), and 3) whether an adversary has scanned a host (hence referred to as adversary scan). The ablation results in Figure <ref> show the AK and SR agents against the BLineAgent in <ref> and <ref>, and the PPO agent against the MeanderAgent[This defensive agent doesn't use the previous action however we include it for completeness.] in <ref>. Figure <ref> indicates that the AK agent's performance is greatly affected by `adversary access'. While comparatively little impact seems to derive from the ablation of `adversary scan' and `previous action' there is some variance and the rewards fall to -812 and -539, respectively. Interestingly, the SR defensive agent is greatly affected by the ablation of the `adversary access' and `adversary scan', with the distribution of rewards being more negative in both cases. This is especially apparent in the case of `adversary scan'. Previous action has less of an effect in both AK and SR, yet still reduces the mean reward to -30.42 (a factor of 2) and -40.6 (a factor of 3), respectively. However, AK has some outlier scores that result in a minimum reward of -987.8. For PPO against MeanderAgent, Figure <ref> shows that ablation of `adversary access' causes a drastic reduction in reward, bringing the mean value to -781.23. Ablation of `adversary scan' reduces the mean reward to -44.70, a factor of 2.52 more negative than when the observation is included. §.§ Feature Importance To further validate the importance of `adversary access', `previous action', and `adversary scan', we utilise a well known framework from explainable AI called SHapley Additive exPlanations (SHAP). This uses an implementation agnostic game theoretic approach to explain the importance of features in determining outputs. SHAP is able to connect optimal credit allocations with local explanations to determine SHAPley values. These values provide a way of accurately distributing the contribution of the individual features within the complete feature space <cit.>. Figure <ref> shows the SHAPley values for the trained AK and SR subagents against the BLineAgent in <ref> and <ref>, and PPO against the MeanderAgent in <ref>. Each point on these plots is a feature in a specific observation, with the colour representing the value of that feature. All defensive agents observe the same trend in feature importance regardless of their training adversary: `adversary access' is the most important followed by `adversary scan', a trend that is also observed in Figure <ref>. Note that the PPO RedMeander defensive model doesn't use `previous action' and hence is not included in Figure <ref>. We show that `adversary access' is an important part of the observation. This indicates that the defensive agents are aware that they need to remove the attackers from hosts. The importance is also seen in Figure <ref> as the most significant shifts in reward distribution occur when ablating `adversary access'. In addition `adversary scan' is of importance to the agents which is clear in <ref> as the defensive agent's performance is significantly impacted in the absence of this information. 
This correlates with Figure <ref>, as `adversary scan' has the greatest distribution of any of the SHAP values for the BLineAgent defensive agents. While knowledge of the `previous action' has the lowest feature importance for the agents, we argue that this is still important for these defensive agents, which, with this knowledge, outperform the baseline and PPO-only models in <ref>. For example, take the case where a defensive agent acts to remove an adversary from a host; if this action fails, the defensive agent will have to adjust its strategy. The importance of this feature can further be seen in Figures <ref> and <ref>, as ablation of this feature has a non-trivial impact on the performance of the agents. § RELATED WORK The effectiveness of RL across a range of simulated and abstracted autonomous network defence scenarios is well established in the literature. Han et al. <cit.> show the feasibility and resilience of RL agents under causative attacks in software defined networks. Elderman et al. <cit.> model network defence using the framework of a Markov game with incomplete information, highlighting the capabilities of even traditional RL methods (i.e., not DRL) in interactions between network attacker and defender. The hierarchical approach we build upon was first proposed by Foley et al. <cit.>. Comparatively, we propose two improved controller models based on a deeper understanding of the adversary models. We also develop improved subagents, providing an explainability analysis to understand what causes the agents to defend networks effectively. Other approaches to autonomous network defence include dynamic causal Bayesian optimisation <cit.> as shown by Andrew et al. <cit.>. Several alternative network defence simulation environments have been proposed in the literature. Molina-Markham et al. <cit.> propose FARLAND which, similarly to CybORG, provides a hybrid simulation- and emulation-based environment that, owing to a rich feature space, is capable of developing agents that can defend real-world networks. Microsoft has an experimental research platform, CyberBattleSim <cit.>, that offers, at a high level of abstraction, a simulation-only network defence environment based on post-breach lateral adversary movement and system exploitation. In contrast to CybORG, CyberBattleSim places greater emphasis on credential access and data collection, such as simulating a GitHub project leaking credentials in the commit history. Another simulation-only environment developed by Andrew et al. <cit.> is Yawning Titan (YT). Of all the network defence environments, YT offers the greatest abstraction and omits the majority of individual host details (e.g., operating system processes, network ports) needed for emulation. RL has also been applied to several closely related problems. In penetration testing (i.e., exploitation, which is a subset of the CybORG environment), Yang and Liu <cit.> formulate automated penetration testing in the multi-objective RL framework and demonstrate superior performance. Independently, Tran et al. <cit.> explore hierarchical RL architectures for the same task based on their findings that decomposing large action spaces into smaller sets produces higher-performing agents. In intrusion prevention, Hammar and Stadler <cit.> demonstrate that RL is capable of intrusion prevention when formulated as a multiple stopping problem. Feng and Xu <cit.> train a defender to protect a single device from an unknown attacker and finally, Tahsini et al. 
<cit.> use a single defender model to protect a water tank system from adversarial attacks. § CONCLUSION Taking advantage of the rapidly increasing capabilities of neural networks and the advancements in RL algorithms, we present an improved approach to autonomous network defence. Beyond high performance, we place emphasis on the steps before and after training the model. Before training, we use a methodology to observe the adversary behaviour and inform choices in our hierarchical model. Specifically, we introduce two controller architectures, one heuristic and another bandit-based, that improve accuracy when predicting adversaries. Additionally, we develop enhanced subagent architectures optimised for the specific classes of adversary. After training, our post-hoc analysis includes a feature importance and ablation study for each specialised subagent within the complete hierarchical model. Our results shed light on each agent's decision-making process and help to better understand the system as a whole. This work contributes to a less studied but equally important research direction for future work in autonomous network defence. § ACKNOWLEDGEMENTS The authors would like to acknowledge that research was partially funded by EPSRC grant EP/T51780X/1. § HYPERPARAMETER VALUES Optimal, lower and upper bounds of the hyperparameters are shown in Table <ref>. A uniformly sampled grid search was used to determine the optimal values. § EXTENDED ADVERSARY MODELS Here we provide the full action-outcome transition graphs for the BLineAgent adversary, both with and without the presence of our defensive model. Table <ref> provides the definitions of all the acronyms used.
http://arxiv.org/abs/2306.05061v1
20230608092446
A Dynamic Feature Interaction Framework for Multi-task Visual Perception
[ "Yuling Xi", "Hao Chen", "Ning Wang", "Peng Wang", "Yanning Zhang", "Chunhua Shen", "Yifan Liu" ]
cs.CV
[ "cs.CV" ]
Yuling Xi, Ning Wang, Peng Wang, Yanning Zhang: School of Computer Science and Ningbo Institute, Northwestern Polytechnical University, China ({xiyuling, ningw}@mail.nwpu.edu.cn, {peng.wang, ynzhang}@nwpu.edu.cn)
Hao Chen, Chunhua Shen: Zhejiang University, China ([email protected], [email protected])
Yifan Liu: The University of Adelaide, Australia ([email protected])
A Dynamic Feature Interaction Framework for Multi-task Visual Perception
YX and HC contributed equally. Part of this work was done when YX was visiting Zhejiang University.
Yuling Xi, Hao Chen, Ning Wang, Peng Wang, Yanning Zhang, Chunhua Shen, Yifan Liu
July 31, 2023
==============================================================================================================================================================================
Multi-task visual perception has a wide range of applications in scene understanding such as autonomous driving. In this work, we devise an efficient unified framework to solve multiple common perception tasks, including instance segmentation, semantic segmentation, monocular 3D detection, and depth estimation. Simply sharing the same visual feature representations for these tasks impairs the performance of the tasks, while independent task-specific feature extractors lead to parameter redundancy and latency. Thus, we design two feature-merge branches to learn feature basis, which can be useful to, and thus shared by, multiple perception tasks. Then, each task takes the corresponding feature basis as the input of the prediction task head to fulfill a specific task. In particular, one feature merge branch is designed for instance-level recognition and the other for dense predictions. To enhance inter-branch communication, the instance branch passes pixel-wise spatial information of each instance to the dense branch using efficient dynamic convolution weighting. Moreover, a simple but effective dynamic routing mechanism is proposed to isolate task-specific features and leverage common properties among tasks. Our proposed framework, termed D2BNet, demonstrates a unique approach to parameter-efficient predictions for multi-task perception. In addition, as tasks benefit from co-training with each other, our solution achieves on-par results in partially labeled settings on nuScenes and outperforms previous works for 3D detection and depth estimation on the Cityscapes dataset with full supervision. § INTRODUCTION Modern computer vision applications often deal with multiple tasks simultaneously. For instance, an AR application may need joint semantic understanding and 3D scene reconstruction, and a self-driving car relies on object detection, road segmentation, and depth estimation. A unified compact multi-task model could significantly reduce computation time and model size, which is crucial for real-world applications. Recent multi-task architectures either have task-specific branches with a shared encoder <cit.> that ignores the correlation of high-level features or rely on an additional shared branch <cit.> to leverage common representations, which leads to computational burden and parameter redundancy. As a result, such elaborately designed networks often aim at specific joint tasks, which can hardly transfer to new tasks. Therefore, a unified multi-task framework that can easily fit into different perception tasks and transfer to new tasks is necessary. 
Perception tasks such as detection, segmentation, and depth estimation typically require both high-level context information and low-level fine-grained information to precisely describe details. As inherent synergy exists in most perception tasks, we resort to a unified two-branched network to extract basic features for different aims. In particular, an instance branch is responsible for learning multi-scale information to distinguish instance-level properties, e.g., semantic classes for masks and bounding boxes. A dense branch generates rich pixel-level representations for fine-grained localization, e.g., masks, box coordinates, and depth. Flexible combinations of these basic features ensure that our proposed unified two-branched network can handle multiple tasks jointly with minimal effort. Typical two-branched methods <cit.> have made progress in instance segmentation by proposing a low-cost dynamic merging mechanism that aggregates instance-level information and high-resolution dense feature maps. This approach has been extended to panoptic segmentation <cit.>. Wang <cit.> further enables interactions between these two branches with multiple transformer blocks. Instead of using computation-intensive self-attention blocks, we devise a lightweight module, termed Dynamic Message Passing (DMP), based on low-rank factorization to handle second-order information between two dense feature maps. It is parameter-efficient on high-dimensional feature maps and propagates spatial information across branches. This increases the performance of both dense and instance tasks with almost no additional computation cost. Previous work points out that conflicting loss functions may cause gradient updates in different directions for the shared parameters, making it difficult to properly optimize hard-parameter-sharing MTL. Similar observations also appear in our two-branched structure. If all the parameters are shared in the instance branch or the dense branch, performance decreases. Inspired by the “cross-stitch” unit <cit.>, which can automatically learn a combination of shared and task-specific representations, we propose a Dynamic Router (DR) that uses task and channel awareness to route task features and learn common representations implicitly. With limited extra computation, our model reaches competitive results on joint multi-task visual perception. More specifically, our main contributions can be summarized as follows: * We propose a general and simple two-branched multi-task perception network, which breaks down tasks and groups same-level features in a parameter-efficient way to maximize inter-task feature sharing. To our knowledge, this is the first time that panoptic segmentation, monocular 3D detection, and depth estimation have been simultaneously addressed in a single network. * We propose Dynamic Message Passing (DMP) to communicate information across branches and tasks. Through a parameter- and computation-efficient feature merging operation, our network exchanges spatial information between branches. Moreover, given the sharing and conflicting relations among tasks, a task- and channel-aware Dynamic Router (DR) is proposed to isolate task-specific features and utilize common properties of the tasks. * We demonstrate significant improvements for multiple perception tasks under simple co-training strategies. 
Our framework achieves competitive results on nuScenes in a partially labeled setting and surpasses previous methods for 3D detection and depth estimation by a great margin on the Cityscapes dataset in a fully labeled setting. § RELATED WORK Multi-task visual perception in scene understanding targets the problem of training relationships between tasks. Extensive multi-task perception works have resorted to a single branch network that usually focuses on low-level dense prediction tasks to investigate the relationship among fine-grained features <cit.>. Joint learning of instance-level and dense prediction tasks has also been studied. However, these approaches either use a unified hard parameter sharing approach and experience rapid performance degradation <cit.> or use a separate decoder for each task <cit.>, which brings computational burden. To address the issues mentioned above, we first include panoptic segmentation as a typical joint instance and dense perception task. It tackles the problem of classifying every pixel in the scene by assigning different labels for different instances. Mainstream panoptic networks can be classified into two clusters, separate and unified approaches. Separate approaches rely on individual networks for stuff (semantic) and thing (instance) segmentation and focus on devising methods to fuse these two predictions <cit.> and resolving conflicts <cit.>. Recent unified approaches includes PanopticFCN <cit.> and MaX-DeepLab <cit.>. both of which are box-free methods with end-to-end mask supervision. To remove box guidance, these methods rely on positional embedding and dynamic convolution on the entire feature map for each instance, both of which can be simplified with box-based methods. To study the relationship among multiple perception tasks and merge them into a unified framework, we also include monocular 3D object detection and depth estimation in our task set. Recovering 3D coordinates from a single image is known to be ill-posed and prone to overfitting. Normally monocular 3D object detection approaches directly regress 3D attributes with 2D detectors <cit.>, which is easily overfit to object sizes <cit.>. Recent works on 3D object detection attempt to leverage depth information by adding a depth prediction layer to their 2D detector <cit.> to enhance the performance of monocular 3D detection. However, both these methods focus on the single detection task and only regress depth at the instance level, lacking fine-grained structure information and could hardly transfer to the depth estimation task. Therefore, we propose a multi-task visual perception framework to simultaneously address panoptic segmentation, monocular 3D object detection, and depth estimation tasks. Instead of relying on one branch to predict the result of each task, our framework naturally splits these tasks into two branches and groups same-level features of different tasks, which is parameter-efficient and easily deployed in real-world applications. Dynamic neural network Nowadays, many networks have adopted some variants of attention mechanism for both dense and instance prediction tasks. For dense prediction, it is used to learn a context encoding <cit.> or pairwise relationship <cit.>. Fully-convolutional instance prediction networks <cit.> use a dynamic module to merge instance information with high-resolution features. This design usually involves applying a dynamically generated operator, which is essentially an inner product between two input features. 
Different from previous dynamic modules that are only applied once during prediction, our approach aggregates multi-scale context information from the instance branch to refine the dense branch with an efficient dynamic operator. MaX-DeepLab <cit.> employs transformer modules <cit.> for cross-branch communication, which is computation -intensive thus the instance-level feature has to be sparse. Instead, we generate low-rank dynamic factors for the convolution layer. The formulation of the dynamic operator is closely related to linear 2nd-order operations such as Gated Linear Units <cit.> and Squeeze-and-Excite blocks <cit.>. A key distinction is that our dynamic module is more similar to cross-attention than self-attention since it merges features from different branches. The dynamic routing mechanism masks out a subset of network connections, which has been used in various models for computation reduction <cit.> and continual learning <cit.>. Dynamically changing the weights of network operations can be regarded as a special case of feature-wise transformation <cit.>. The most common form is channel-wise weight modulation in batch norm <cit.> and linear layers <cit.>. previous routing methods mainly concentrate on a single task, attempting to alleviate data variance or enhance the expression capability of the model in a single task. Our routing mechanism, however, is based on task correlations. Co-trained tasks implicitly share information, either at the instance-level or structural information. Given a set of correlated tasks, we use a task- and channel-aware router to utilize the shard information and suppress conflicting features. § METHOD In this work, we resort to a two-branched network to unify instance and dense prediction tasks and maximize feature sharing, leveraging common representations to boost the performance of each task. In Sec. <ref>, we introduce the overall two-branched framework. To enhance information propagation between branches, we introduce an efficient dynamic module to pass on location information. To utilize common features in the same branch of different tasks, we devise a dynamic routing module to further selectively fuse channel-wise tasks features in Sec. <ref>. Task-specific prediction heads are described in Sec. <ref>. The overall pipeline is illustrated in Fig. <ref>. §.§ Overall architecture Our framework comprises a feature extraction backbone, followed by two branches and separated task heads. The two-branched network includes an instance branch for higher-level contextual feature extraction and a dense branch for lower-level structural information prediction. The instance branch aims to generate instance-level semantic information and context using an arbitrary object detection decoder. We opt for a one-stage framework FCOS <cit.> due to its simplicity, and its multi-level architecture is convenient for investigating dynamic interaction between the dense branch and various instance-level feature maps. For each instance i, our instance branch additionally generates an instance embedding e^(i) beyond FCOS. In this work, e^(i) contains all instance-level task embeddings, including 3D object attributes and things embeddings in panoptic segmentation, then joints with dense branch outputs to generate final predictions such as instance masks and 3D regression values. The e^(i) is generated by a top layer that is a single convolution layer added to the object regression tower to produce instance-wise contextual information. 
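As a concrete illustration of the instance-branch top layer described above, the following PyTorch sketch produces the contextual features M_l and the dense instance-embedding map E_l from a regression-tower output with a single convolution. The class name, channel widths, and the channel-slicing scheme are our assumptions for illustration, not the released implementation.

```python
# Sketch of the instance-branch "top layer": one 3x3 convolution on the FCOS
# regression-tower output that jointly produces the contextual features M_l
# (later split into the DR1Conv factors) and the dense instance-embedding map E_l.
import torch
import torch.nn as nn

class InstanceTopLayer(nn.Module):
    def __init__(self, tower_channels=256, context_channels=128, embed_channels=128):
        super().__init__()
        # One convolution emits both outputs; they are separated by slicing channels.
        self.top = nn.Conv2d(tower_channels, context_channels + embed_channels, 3, padding=1)
        self.context_channels = context_channels

    def forward(self, tower_feat):
        out = self.top(tower_feat)                # (N, Cm + Ce, H, W)
        m_l = out[:, : self.context_channels]     # contextual features M_l
        e_l = out[:, self.context_channels :]     # instance embeddings E_l
        return m_l, e_l

if __name__ == "__main__":
    top = InstanceTopLayer()
    p5_tower = torch.randn(2, 256, 32, 32)        # a regression-tower output
    m5, e5 = top(p5_tower)
    print(m5.shape, e5.shape)                     # (2,128,32,32) (2,128,32,32)
```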
To propagate instance-level information between branches, this branch also generates a multi-scale conditional feature pyramid {𝐌_l | l = 3, 4, …, 7}, which can be split along the channel for each task. Thus, given FPN output 𝐏_l, the instance branch computes these features with the following equation: {𝐌_l, 𝐄_l} = Top(Tower(𝐏_l)), l = 3, 4, …, 7, where 𝐌_l, 𝐄_l and 𝐏_l are tensors with the same spatial resolution. The densely predicted 𝐄_l, along with other instance features such as class labels and bounding boxes, are later filtered into a set containing only positive proposals e^(i). The 𝐌_l are further split into two dynamic tensors along the channel dimension for our dynamic operation, called dynamic rank-1 convolution (DR1Conv), to propagate location-aware information for dense prediction tasks. More details about DR1Conv are described in Sec. <ref>. The dense branch preserves fine-grained image details to serve dense prediction tasks. Prior works on dense prediction tasks commonly use the largest-resolution FPN feature map to generate per-pixel predictions and ignore valuable semantic information of higher-level features. Our dense branch aggregates FPN features {𝐏_l} with contextual features {𝐌_l} into the final basis features 𝐅 for dense prediction like an inverted pyramid. Starting from the highest-level feature map with the smallest resolution, we use a dynamic operation to merge location-aware context from the instance branch. Channel- and task-aware information is also introduced to further interact with fine-grained features among dense prediction tasks. This merging operation is parameter-efficient and meanwhile enables our model to share task features to a great extent. §.§ Dynamic feature interaction modules To introduce communication within branches and tasks, we build dynamic modules upon our two-branched structure. Features to interact in the dynamic module include location-, channel- and task-aware information. Dynamic Message Passing (DMP) We design a dynamic interaction operation named DR1Conv for position-sensitive message passing between two branches. DR1Conv is inspired by BatchEnsemble <cit.>, which uses a low-rank factorization of convolution parameters for efficient model ensembling. One can factorize a weight matrix 𝐖' into a static matrix 𝐖 and a low-rank matrix 𝐌, 𝐖' = 𝐖⊙𝐌, 𝐌 = 𝐛𝐚^⊤. Here 𝐖', 𝐖, 𝐌∈ℝ^m×d, 𝐛∈ℝ^m, 𝐚∈ℝ^d, and ⊙ is the element-wise product. This factorization considerably reduces the number of parameters and requires less memory for computation. A forward pass with this dynamic layer can be formulated as 𝐲 = 𝐖'𝐱 = (𝐖⊙𝐛𝐚^⊤) 𝐱 = (𝐖(𝐱⊙𝐚)) ⊙𝐛, where 𝐱∈ℝ^d and 𝐲∈ℝ^m are the input and output vectors respectively. Thus, this matrix-vector product can be computed by element-wise multiplying 𝐚 and 𝐛 before and after multiplying by 𝐖, respectively. This formulation also extends to other linear operations such as the tensor product and convolution. Dusenberry <cit.> use this factorization for efficient Bayesian posterior sampling in Rank-1 BNN. Next, we extend this technique to convolutions to serve our purpose—to generate parameter-efficient dynamic convolution modules for dense mask prediction tasks. We extend the factorization in Eq. (<ref>) to convolutions. Different from BatchEnsemble and Rank-1 BNN <cit.>, our dynamic convolution is designed to be position-sensitive so that contextual information at different positions can be captured.
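The rank-1 factorization above can be written in a few lines. The sketch below is a minimal PyTorch illustration of 𝐲 = (𝐖(𝐱⊙𝐚))⊙𝐛 and checks it against the explicit 𝐖' = 𝐖⊙𝐛𝐚^⊤; shapes follow the text, everything else is illustrative.

```python
# Minimal sketch of the BatchEnsemble-style rank-1 factorization:
# y = W'x = (W ⊙ b aᵀ) x = (W (x ⊙ a)) ⊙ b.
import torch

def rank1_dynamic_linear(W, x, a, b):
    """W: (m, d) static weights; x: (d,) input; a: (d,), b: (m,) dynamic factors."""
    return (W @ (x * a)) * b          # element-wise modulate before and after W

if __name__ == "__main__":
    m, d = 4, 6
    W = torch.randn(m, d)
    x, a, b = torch.randn(d), torch.randn(d), torch.randn(m)
    y_fast = rank1_dynamic_linear(W, x, a, b)
    y_full = (W * torch.outer(b, a)) @ x   # explicit W' = W ⊙ b aᵀ, for comparison
    print(torch.allclose(y_fast, y_full, atol=1e-6))  # True
```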
In other words, the rank-1 factors 𝐚 and 𝐛 have to preserve the location information of 2D images. In practice, we densely compute _hw and _hw for each location (h, w)∈[1,…,H]×[1,…,W] as two feature maps 𝐀, 𝐁∈ℝ^C× H× W whose spatial elements are the dynamic rank-1 factors. For simplicity, we first introduce the 1×1 convolution case. For each location (h, w), we generate a different dynamic convolution kernel 𝐖'_hw∈ℝ^C from the corresponding locations of 𝐀, 𝐁. We apply dynamic matrix-vector multiplication at position (h, w) as 𝐲_hw = 𝐖'_hw𝐱_hw = (𝐖(𝐱_hw⊙𝐚_hw))⊙𝐛_hw, where 𝐚_hw, 𝐛_hw∈ℝ^C are elements in the dynamic tensors 𝐀 and 𝐁. ⊙ is element-wise multiplication. This can be interpreted as element-wise multiplying the context tensors before and after the static linear operator. We then generalize this to arbitrary kernel shape J× K. The dynamic rank-1 convolution (DR1Conv) Conv_𝐖' with static parameters 𝐖 at location (h, w) takes an input patch of 𝐗 and dynamic features 𝐀 and 𝐁 and outputs feature 𝐲_hw: 𝐲_hw = ∑_j∑_k(𝐖[j, k] (𝐗[h-j, w-k] ⊙𝐀[h-j, w-k])) ⊙𝐁[h-j, w-k]. We can parallelize the element-wise multiplications between the tensors and compute DR1Conv results on the whole feature map efficiently. Specifically, we make dynamic convolution kernel position-sensitive by using two tensors 𝐀,𝐁∈ℝ^C× H × W generated from box regression tower with the same size as input feature 𝐗. DR1Conv can be formulated as: 𝐘 = DR1Conv_𝐀, 𝐁(𝐗) = Conv(𝐗⊙𝐀)⊙𝐁. All tensors have the same size C × H × W. This is implemented as element-wise multiplying the dynamic factors 𝐀, 𝐁 before and after the static convolution respectively. The structure of DR1Conv is shown in Figure <ref>. For the dense branch, we use DR1Conv to merge FPN outputs 𝐏_l and contextual features 𝐌_l from instance branch: 𝐅_l = DR1Conv_𝐀_l, 𝐁_l(Conv_3×3(𝐏_l) + ↑_2(𝐅_l+1)), where ↑_2 is upsampled by a factor of 2. We first reduce channel width of 𝐏_l with a 3× 3 convolution, the channel width is kept the same throughout the computation. In practice, we found that for semantic segmentation, 64 channels are sufficient. The computation graph is shown in Fig. <ref>. After the last refinement, 𝐅_3 is output as the final 𝐅. This makes our dense branch very compact, using only 1/4 of the channels of the corresponding block compared with BlendMask <cit.>. In our experiments, we found using DR1Conv makes our model 6% faster while achieving even higher accuracy. Note that we can parallelize the element-wise multiplications between the tensors and compute DR1Conv results on the whole feature map efficiently. In addition, it can be integrated into other fully-convolutional instance or panoptic segmentation networks. We argue that DR1Conv is essentially different from naive channel-wise modulation. The two related factors 𝐀, 𝐁 combine to gain much stronger expressive power while being very computationally efficient. The Channel-wise Dynamic Routing (CDR) Mechanism Benefited from the aforementioned location-aware message passing within the two-branched architecture, we are able to investigate the sharing of instance- and dense-level information across typical perception tasks. There is a common agreement that depth estimation and segmentation tasks both require structural information, and our experiments on co-training for panoptic segmentation and depth estimation revealed information sharing on both the instance and dense branches, with varying degrees of feature sharing across branches and feature scales. 
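A possible PyTorch sketch of DR1Conv and one level of the top-down dense-branch merge is given below; it follows Y = Conv(X⊙A)⊙B and F_l = DR1Conv(Conv_3×3(P_l) + ↑_2(F_{l+1})) from the text. Module names, channel widths, and the nearest-neighbour upsampling are our assumptions.

```python
# DR1Conv: element-wise modulation by the dynamic tensors A and B before and
# after one static convolution, plus one level of the inverted-pyramid merge.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DR1Conv(nn.Module):
    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2)

    def forward(self, x, a, b):
        # x, a, b all have shape (N, C, H, W)
        return self.conv(x * a) * b

class DenseMergeStep(nn.Module):
    """One level of the top-down merge in the dense branch."""
    def __init__(self, fpn_channels=256, basis_channels=64):
        super().__init__()
        self.reduce = nn.Conv2d(fpn_channels, basis_channels, 3, padding=1)
        self.dr1 = DR1Conv(basis_channels)

    def forward(self, p_l, a_l, b_l, f_upper=None):
        x = self.reduce(p_l)
        if f_upper is not None:
            x = x + F.interpolate(f_upper, scale_factor=2, mode="nearest")
        return self.dr1(x, a_l, b_l)

if __name__ == "__main__":
    step = DenseMergeStep()
    p4 = torch.randn(1, 256, 64, 64)
    a4, b4 = torch.randn(1, 64, 64, 64), torch.randn(1, 64, 64, 64)
    f5 = torch.randn(1, 64, 32, 32)
    f4 = step(p4, a4, b4, f_upper=f5)
    print(f4.shape)  # (1, 64, 64, 64)
```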
Thus, to isolate task-specific features and leverage common representations, we introduce dynamic channel-wise routers to both the instance and dense branches at different scales for fine-grained feature interaction. For each task in multi-task co-training, we assign it as the primary task and treat the others as secondary tasks. For instance, when segmentation and depth are co-trained, segmentation is the primary task and depth is the secondary task, and vice versa. The objective of our routing mechanism is to separate task-specific features from shared features learned implicitly. We use different activation functions to differentiate routers for primary and secondary tasks. Since feature sharing varies across scales, the channel routing is formulated as 𝒢_l = {σ, softmax}(Conv(GAP(x_l))), where σ denotes the sigmoid function and Conv is a 1 × 1 convolution layer. The channel router leverages a sigmoid function to generate channel-wise activations that weigh the importance of each channel of the primary task feature, while using softmax to emphasize helpful channels and suppress harmful channels in the secondary task feature. The router outputs channel-wise attention scores to weight task features. In the instance branch, routers take each level of box tower features as input x_l, routing task-specific contextual information {𝐌_l} that we split into {𝐌^m_l, 𝐌^a_l} along channels. In the dense branch, we set two convolution layers in DR1Conv, denoting primary and secondary projections. Routers use the input features of DR1Conv in Eq. (<ref>) to generate task-specific weighting. The channel-aware routing result is formulated as ℱ^m = 𝒢^m ⊗ F^m + 𝒢^a ⊗ F^a, where ⊗ is the channel-wise product. The output dimension is identical to the feature channel width, routing features of primary and secondary tasks. Superscripts m and a denote primary and secondary tasks, respectively. F^{m,a} are {𝐌^m_l, 𝐌^a_l} or the DR1Conv projection features {𝐅^m_l, 𝐅^a_l} in each branch. For each task, we compute two routing scores that are multiplied with its own and the secondary features and combined to obtain {𝐌_l} and {𝐅_l} at each level. {𝐌_l} and {𝐅_l} are sent into Eq. (<ref>) to get the final task-specific dense basis features. With this interactive mechanism, we maximize the extent of feature sharing across tasks and scales, and our model is able to utilize useful task features to enhance the expressivity of other tasks while using fewer computational resources and parameters. The Task-aware Dynamic Routing (TDR) Mechanism To make the routing more discriminative across tasks, we introduce task-aware information to our dynamic router to further separate the task-specific representations that would otherwise cause performance degradation. We add task ID embeddings to Eq. (<ref>) to make the features more discriminative; the routing score is then calculated as 𝒢 = {σ, softmax}(Conv(GAP(x) ⊕ Emb_{t_m,t_a})), where ⊕ is a concatenation operation, Emb_t is the task embedding obtained from a 1 × 1 convolution layer on the specific one-hot task ID, and the parameters of this convolution are shared across all tasks. The shapes of the GAP output and the task ID embedding are C × 1 × 1 and C/8 × 1 × 1. ℱ^m_3 is the final output 𝐅 of each task in our multi-task network. Through these three levels of information in our dynamic router, we obtain a unified multi-task perception framework without sacrificing the performance of each task. The router only has one 1 × 1 convolution layer, leading to negligible computation cost.
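To make the routing concrete, the sketch below implements one plausible reading of the channel- and task-aware router: a 1×1 convolution on the concatenation of the GAP feature and a task embedding, with a sigmoid gate for the primary task and a channel softmax for the secondary task, fused as ℱ^m = 𝒢^m ⊗ F^m + 𝒢^a ⊗ F^a. The two-hot task encoding, the use of separate convolutions for the two gates, and all names are our assumptions.

```python
import torch
import torch.nn as nn

class DynamicRouter(nn.Module):
    def __init__(self, channels=64, num_tasks=3):
        super().__init__()
        emb_dim = channels // 8
        self.task_emb = nn.Conv2d(num_tasks, emb_dim, 1)        # shared task-ID embedding
        self.primary = nn.Conv2d(channels + emb_dim, channels, 1)
        self.secondary = nn.Conv2d(channels + emb_dim, channels, 1)
        self.num_tasks = num_tasks

    def forward(self, x, f_primary, f_secondary, primary_id, secondary_id):
        pooled = x.mean(dim=(2, 3), keepdim=True)               # GAP: (N, C, 1, 1)
        task_vec = torch.zeros(x.size(0), self.num_tasks, 1, 1, device=x.device)
        task_vec[:, primary_id] = 1.0                           # two-hot encoding of (t_m, t_a)
        task_vec[:, secondary_id] = 1.0
        ctx = torch.cat([pooled, self.task_emb(task_vec)], dim=1)
        g_m = torch.sigmoid(self.primary(ctx))                  # sigmoid gate: primary task
        g_a = torch.softmax(self.secondary(ctx), dim=1)         # channel softmax: secondary task
        return g_m * f_primary + g_a * f_secondary

if __name__ == "__main__":
    router = DynamicRouter()
    x = torch.randn(2, 64, 32, 32)                              # router input (e.g. box-tower feature)
    f_seg, f_depth = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
    fused = router(x, f_seg, f_depth, primary_id=0, secondary_id=1)
    print(fused.shape)                                          # torch.Size([2, 64, 32, 32])
```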
The overall computation graph for our proposed dynamic module is shown in Fig. <ref>. §.§ Task-specific prediction head In this work, we focus on three kinds of tasks, namely panoptic segmentation, monocular 3D object detection, and depth estimation. We merge the instance-wise outputs e^(i) and dense features 𝐅 for both instance-level tasks (i.e., instance segmentation and 3D object detection) and dense prediction tasks (i.e., segmentation and depth estimation). The corresponding prediction heads are described as follows. Panoptic segmentation is the basic multi-task setting that we explore here. It jointly handles instance segmentation and semantic segmentation. Similar to other crop-then-segment models, we first crop a region of interest 𝐑^(i)∈ℝ^D'×56×56 from the dense branch output 𝐅 according to the detected bounding box b^(i) using RoIAlign <cit.>. Then the crops are combined into the final instance-wise predictions guided by the instance embeddings e^(i). For segmentation and 3D detection, we split each e^(i)∈ℝ^C^seg+C^3D into two vectors e^(i)_seg∈ℝ^C^seg and e^(i)_3D∈ℝ^C^3D respectively. For panoptic segmentation, we use a new instance prediction module, called factored attention, which has fewer parameters but can accept much wider basis features. We split the embedding into two parts e^(i)_seg = [t^(i):s^(i)], where t^(i) contains the projection kernel weights and s^(i) the attention factors. First, we use t^(i) as the (flattened) weights of a 1 × 1 convolution which projects the cropped bases 𝐑^(i) into a lower-dimensional tensor 𝐑'^(i) with width K: 𝐑'^(i) = t^(i)∗𝐑^(i), where t^(i) is the reshaped convolution kernel with size D'× K and ∗ is the convolution operator[This makes t^(i) a vector of length D'K.]. We choose K=4 to match the design choice of BlendMask <cit.>. We split s^(i) into K diagonal matrices Σ_k∈ℝ^4×4 and combine them with two learnable matrices 𝐔_k, 𝐕_k∈ℝ^4×14 to generate K attention maps 𝐐_k∈ℝ^14×14: 𝐐^(i)_k = 𝐔_k^⊤Σ^(i)_k𝐕_k. Here, we set 𝐔_k and 𝐕_k as network parameters that are shared across all instances. This reduces the instance embedding parameters from 784 to 16 while still enabling us to form position-sensitive attention shapes. 𝐑'^(i) and the full attention 𝐐^(i) are element-wise multiplied and summed along the first dimension to get the instance mask results. The outer product 𝐮_kd^T 𝐯_kd of the dth row vectors in 𝐔_k and 𝐕_k can be considered as one of the components of 𝐐_k. We visualize all components learned by our network in Fig. <ref>. We add minimal modifications to instance prediction for panoptic segmentation: a unified panoptic segmentation layer, which is simply a 1×1 convolution f_pano transforming the output 𝐅 of the dense branch into panoptic logits with C channels. The first C_stuff channels are for semantic segmentation and the remaining C_thing channels are for instance segmentation. We split the weights of f_pano along the columns into two matrices 𝐖_pano = [𝐖_stuff, 𝐖_thing]. The first D'× C_stuff parameters 𝐖_stuff are static parameters. C_stuff is a constant equal to the number of stuff classes in the dataset, e.g., 53 for the COCO dataset. The remaining D'× C_thing parameters 𝐖_thing are dynamically generated. During training, C_thing is the number of ground truth instances in the sample. For each instance i, there can be N_i ≥ 0 predictions in the network assigned to it, with embeddings {e_n | n=1,…, N_i}. For panoptic segmentation, we map them into a single embedding by computing their mean e̅_i = ∑_n e_n/N_i.
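The factored attention head can be sketched as follows. The code forms the per-instance projection from t^(i), builds 𝐐_k = 𝐔_k^⊤Σ_k𝐕_k from the shared low-rank bases, and combines them with the projected crops; the basis width D' = 64, the bilinear resize of the 14×14 attention maps to the crop resolution, and the einsum layout are our assumptions rather than details taken from the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactoredAttention(nn.Module):
    def __init__(self, basis_channels=64, K=4, rank=4, attn_size=14):
        super().__init__()
        self.K, self.rank, self.attn_size = K, rank, attn_size
        # Shared low-rank bases U_k, V_k in R^{rank x attn_size}, one pair per map.
        self.U = nn.Parameter(torch.randn(K, rank, attn_size) * 0.1)
        self.V = nn.Parameter(torch.randn(K, rank, attn_size) * 0.1)

    def forward(self, crops, t, s):
        # crops: (N, D', H, W) RoI-aligned bases; t: (N, D'*K); s: (N, K*rank)
        N, Dp, H, W = crops.shape
        t = t.view(N, self.K, Dp)
        proj = torch.einsum("nkd,ndhw->nkhw", t, crops)          # per-instance 1x1 projection
        sigma = torch.diag_embed(s.view(N, self.K, self.rank))   # (N, K, rank, rank)
        Q = torch.einsum("krh,nkrs,ksw->nkhw", self.U, sigma, self.V)  # Q_k = U_k^T Sigma_k V_k
        if (H, W) != (self.attn_size, self.attn_size):
            Q = F.interpolate(Q, size=(H, W), mode="bilinear", align_corners=False)
        return (proj * Q).sum(dim=1)                             # (N, H, W) instance mask logits

if __name__ == "__main__":
    head = FactoredAttention()
    crops = torch.randn(3, 64, 56, 56)                           # cropped bases R^(i)
    t = torch.randn(3, 64 * 4)                                   # projection kernel weights
    s = torch.randn(3, 4 * 4)                                    # attention factors
    print(head(crops, t, s).shape)                               # torch.Size([3, 56, 56])
```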
Then the C_thing embeddings are concatenated into the dynamic weights 𝐖_thing: 𝐖_thing = [e̅_1, e̅_2, …, e̅_C_thing]. The panoptic prediction can be computed with a matrix multiplication 𝐘_pano = 𝐖_pano^⊤𝐅. We combine thing and stuff supervision and use a cross-entropy loss for this prediction. For monocular 3D object detection, we regress the 3D bounding box of each instance i by predicting its 3D location loc = [c_x, c_y, z], encoded as 2.5D center offsets c_x and c_y and the corresponding depth z, its dimensions dim = [h, w, l], and its observation angle α, encoded as [sin α, cos α]. For the nuScenes dataset, an attribute label a is also regressed. Accordingly, e^(i)_3D contains all 3D regression properties, e^(i)_3D = [c_x, c_y, z_inst, h, w, l, sinα, cosα, a_1, …, a_A], where a_1, …, a_A are the nuScenes attribute logits. To get the final instance depth prediction z, we add z_inst to a densely predicted depth value from the cropped bases R, z = z_inst + GAP(R)^⊤ w_z, where GAP is a global average pooling layer and w_z ∈ℝ^D' is a network parameter for dense depth prediction. We use a disentangled 3D corner regression loss similar to <cit.>: ℒ_3D = ℒ_attr(a_1, …, a_A) + ∑_k ∈{loc, dim, α}‖B̂_k - B‖_1, where B̂_k is the 3D bounding box coordinates predicted with loc, dim, and α respectively, B is the ground truth box, and ℒ_attr is a classification cross-entropy loss on nuScenes attributes. For monocular depth estimation, we add three convolution layers to the output basis 𝐅 to regress depth for every pixel, with a 2× interpolation operation after each convolution layer to upsample the output to the original image size. Formally, the overall loss function of our multi-task framework can be formulated as ℒ = ∑(ℒ_fcos + λ_3D× (ℒ_ctr + αℒ_dim + ℒ_ori + βℒ_loc) + ℒ_mask + ℒ_pano + ℒ_depth), where ℒ_fcos is the original loss of FCOS, ℒ_mask and ℒ_pano denote the cross-entropy losses used for panoptic masks, and ℒ_depth is an L1 loss. λ_3D = 0.4 is set to balance the losses; α = 2 and β = 0.5 are both set empirically through the single-task experiments. § EXPERIMENTS §.§ Dataset and implementation details Dataset For pairwise and multi-task training, we select the Cityscapes dataset <cit.>, which contains monocular 3D detection, depth, and panoptic segmentation annotations for 20 semantic categories related to urban scene understanding. The dataset comprises 5,000 finely annotated images, divided into 2,975 for training, 500 for validation, and 1,525 for testing. In addition, we evaluate our basic two-branched framework on hybrid benchmark datasets. NuScenes <cit.> contains 1.4M 3D object bounding boxes on 200K+ images over 83 logs. NuImages provides 700K segmentation masks on 93K images in more varied scenes (nearly 500 logs). These two datasets share the same 10 instance categories, and nuImages includes semantic masks for drivable surfaces. We validate the efficacy of our approach in this partial label setting on the nuScenes and nuImages datasets for joint segmentation and 3D detection. Metric For panoptic segmentation, we use the standard panoptic quality (PQ) metric. For 3D object detection, the official detection score (DS) metric is used on Cityscapes <cit.>. For nuScenes, the official evaluation metrics for the detection task are provided. The mean average precision (mAP) of nuScenes is calculated using the center distance on the ground plane rather than the 3D intersection over union (IoU) to align predicted results with ground truth.
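A compact sketch of how the task losses could be combined with the weights reported above (λ_3D = 0.4, α = 2, β = 0.5) is given below. The individual terms are simple L1 and cross-entropy stand-ins rather than the full FCOS, panoptic, and disentangled-corner losses; tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def total_loss(pred, target, lambda_3d=0.4, alpha=2.0, beta=0.5):
    l_fcos = pred["fcos"]                       # detection loss, assumed precomputed
    l_3d = (pred["ctr"] - target["ctr"]).abs().mean() \
         + alpha * (pred["dim"] - target["dim"]).abs().mean() \
         + (pred["ori"] - target["ori"]).abs().mean() \
         + beta * (pred["loc"] - target["loc"]).abs().mean()
    l_mask = F.binary_cross_entropy_with_logits(pred["mask"], target["mask"])
    l_pano = F.cross_entropy(pred["pano"], target["pano"])
    l_depth = (pred["depth"] - target["depth"]).abs().mean()   # L1 depth loss
    return l_fcos + lambda_3d * l_3d + l_mask + l_pano + l_depth

if __name__ == "__main__":
    pred = {"fcos": torch.tensor(1.2), "ctr": torch.randn(8, 2), "dim": torch.randn(8, 3),
            "ori": torch.randn(8, 2), "loc": torch.randn(8, 3),
            "mask": torch.randn(8, 56, 56), "pano": torch.randn(2, 54, 64, 64),
            "depth": torch.rand(2, 1, 64, 64) * 80}
    tgt = {"ctr": torch.randn(8, 2), "dim": torch.randn(8, 3), "ori": torch.randn(8, 2),
           "loc": torch.randn(8, 3), "mask": (torch.rand(8, 56, 56) > 0.5).float(),
           "pano": torch.randint(0, 54, (2, 64, 64)), "depth": torch.rand(2, 1, 64, 64) * 80}
    print(total_loss(pred, tgt))
```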
The nuScenes metrics also contain 5 types of true positive metrics (TP metrics), including ATE, ASE, AOE, AVE, and AAE for measuring translation, scale, orientation, velocity, and attribute errors, respectively. nuScenes also defines a detection score (NDS), NDS = 1/10 [5 mAP + ∑_mTP ∈ TP (1 - min(1, mTP))], to capture all aspects of the detection task. Depth performance is measured by the absolute relative error (Abs. Rel.), the depth accuracy δ = max(d_pred/d_gt, d_gt/d_pred), and the root mean square error (RMSE). Data augmentation Similar to SMOKE <cit.>, we regress a point that is defined as the projected 3D center of the object on the image plane. The projected keypoints allow us to fully recover the 3D location of each object with the camera parameters. Let [x_c, y_c, z_c]^⊤ represent the 3D center of each object in the camera frame. Its projection to the point [u, v]^⊤ on the image plane can be obtained with the camera intrinsic matrix K in homogeneous form: z_c × [u; v; 1] = [f_x 0 u_0; 0 f_y v_0; 0 0 1] × [x_c; y_c; z_c]. When rescale and crop augmentations are used, they can be seen as a change of the camera intrinsic matrix. For example, if we resize an image by a ratio s and then crop it at [x_0, y_0], the matrix changes as follows: [f_x u_0; f_y v_0] = [f_x × s u_0 × s - x_0; f_y × s v_0 × s - y_0]. To maintain geometric consistency, we use a relative crop ranging over [0.5, 1.0] of the original image size instead of a random crop. Using the above augmentations enables a stable training process. Implementation We implement our models based on the open-source project AdelaiDet[<https://git.io/AdelaiDet>]. Unless specified otherwise, a COCO pre-trained ResNet-50 is used as our backbone. We train our unified multi-task network on 8 Tesla V100 GPUs with batch size 32. The training schedule is 90K iterations, and the learning rate is reduced by a factor of 10 at iterations 60K and 80K. Note that in single-task experiments, task-awareness information is disabled. For monocular depth estimation, we set the maximum regression distance to 120 meters. §.§ Main results We compare our multi-task results with other state-of-the-art methods on the Cityscapes benchmark. Table <ref> reports the results on panoptic segmentation, 3D object detection, and depth estimation. Our method achieves significant improvements across all tasks. In particular, using a ResNet-50 backbone, D2BNet performs significantly better on the 3D object detection and depth estimation tasks on the Cityscapes test set, outperforming the previous best single-task methods by 3.8 DS and 0.09 depth accuracy. D2BNet surpasses the other multi-task method, MGNet, by 4.3 PQ and 1.37 RMSE on panoptic segmentation and depth estimation. We use the official Panoptic-DeepLab open-source code and re-implement it by adding the same depth prediction module as ours and an FCOS3D head to fit the multi-task setting. Panoptic-DeepLab suffers from task conflicts, while D2BNet outperforms it on all three tasks, which is attributed to our dynamic modules. [4]<https://www.cityscapes-dataset.com/benchmarks/#3dbbox-results> In addition, we compare the inference time of D2BNet with the multi-head framework Panoptic-DeepLab for the single panoptic segmentation task, panoptic segmentation with 3D detection, and all three tasks jointly. We add a separate FCOS3D <cit.> head to the Panoptic-DeepLab models for 3D detection. The inference time is measured with the ResNet-50 backbone at batch size 1. In Panoptic-DeepLab, the final stage is dilated. The input resolution is 1024 × 2048.
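The intrinsic-matrix adjustment described above is easy to express in code. The sketch below scales the focal lengths and shifts the principal point for a resize by s followed by a crop at (x_0, y_0); the numeric intrinsics in the example are merely Cityscapes-like illustrative values.

```python
import numpy as np

def adjust_intrinsics(K, scale, crop_xy):
    """K: 3x3 camera intrinsics; scale: resize ratio s; crop_xy: (x0, y0) crop offset."""
    x0, y0 = crop_xy
    K_new = K.copy()
    K_new[0, 0] *= scale                    # f_x -> f_x * s
    K_new[1, 1] *= scale                    # f_y -> f_y * s
    K_new[0, 2] = K[0, 2] * scale - x0      # u_0 -> u_0 * s - x_0
    K_new[1, 2] = K[1, 2] * scale - y0      # v_0 -> v_0 * s - y_0
    return K_new

if __name__ == "__main__":
    K = np.array([[2262.5, 0.0, 1097.0],
                  [0.0, 2265.3, 513.1],
                  [0.0, 0.0, 1.0]])        # illustrative Cityscapes-like intrinsics
    K_aug = adjust_intrinsics(K, scale=0.5, crop_xy=(100.0, 50.0))
    print(K_aug)
```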
Computation statistics for different frameworks are shown in Fig. <ref>. Our model saves significant computation time by reusing most features across multiple tasks. §.§ Ablation experiments In this section, we conduct an empirical study of the design choices for the three levels of awareness in our dynamic module and the entire two-branched architecture under a multi-task setting using different benchmark datasets. Dynamic modules in the multi-task network We evaluate the effectiveness of our dynamic modules under the multi-task, fully labeled setting by adding them to the baseline. The experiments are conducted on the Cityscapes dataset. As shown in Table <ref>, each module is beneficial for every task, and combining all three levels of awareness achieves the best results on all tasks at once. Pairwise joint training on Cityscapes To gain an intuitive understanding of the mutual influence of co-training tasks, we disable our task- and channel-aware modules and then train the three tasks pairwise; the results are shown in Table <ref>. To preserve the geometric prior, the depth and 3D detection tasks are trained using relative crop; panoptic segmentation is therefore trained with the identical augmentation, which sacrifices nearly 1.8 points on the panoptic quality metric compared to random crop. In our experiments, both the 3D detection and depth estimation tasks benefited from co-training with segmentation, while the segmentation task co-trained with depth resulted in noticeable performance degradation. 3D object detection and segmentation from partial labels on nuScenes To demonstrate the efficacy of our multi-task framework on other benchmarks, we train it on the nuScenes and nuImages datasets. Since images in nuScenes and nuImages do not overlap, we face a missing-label problem for joint training. We experiment with three different settings: (i) single-task training; (ii) alternate training, where the two datasets are joined and each batch contains data from a single task; (iii) pseudo-labeling, where we train on the nuScenes 3D detection dataset with segmentation annotations generated by a single-task model trained on nuImages. We apply different augmentations for different tasks: if the batch has 3D detection annotations, we only apply horizontal flip; otherwise, we apply flip and random resize with the short side sampled from [720, 1080]. Results are shown in Table <ref>; both joint training methods have a positive impact on the 3D detection task. For alternate training, we resample nuScenes and nuImages with a ratio of 1:2, so for each individual task the training iterations are roughly halved, which likely explains the drop in mask AP compared to single-task training. The performance of 3D object detection with segmentation labels We compare our model to the previous best vision-only methods for 3D object detection on nuScenes. Although these two datasets are not explicitly linked, nuImages may contain scenes similar to those in nuScenes. To avoid including external data, we only use the COCO-pretrained model trained on nuImages to generate panoptic segmentation pseudo labels on nuScenes. We use a ResNet-101 backbone with deformable convolutions on the last two stages with interval 3 and train for 450K iterations with batch size 16. Quantitative results are shown in Table <ref>. Our solution achieves 1st place on the fourth nuScenes 3D detection challenge in the vision-only track. §.§ Qualitative results on Cityscapes We show qualitative results on the Cityscapes dataset in Fig. <ref>.
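The alternate-training setting can be sketched as a simple interleaved loop: batches are drawn from the 3D-detection and segmentation datasets at a 1:2 ratio, and only the labeled task contributes a loss at each step. The deterministic schedule, the dummy model, and all interfaces below are our assumptions, used only to keep the example self-contained.

```python
import itertools
import torch
import torch.nn as nn

class DummyMultiTaskModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(16, 8)
        self.det_head = nn.Linear(8, 1)
        self.seg_head = nn.Linear(8, 1)

    def forward(self, batch, task):
        feat = self.backbone(batch["x"])
        head = self.det_head if task == "det" else self.seg_head
        return {task: (head(feat) - batch["y"]).pow(2).mean()}

def alternate_training(model, det_loader, seg_loader, optimizer, num_iters=9, ratio=(1, 2)):
    det_iter, seg_iter = itertools.cycle(det_loader), itertools.cycle(seg_loader)
    schedule = ["det"] * ratio[0] + ["seg"] * ratio[1]   # 1:2 resampling of the two datasets
    for it in range(num_iters):
        task = schedule[it % len(schedule)]
        batch = next(det_iter) if task == "det" else next(seg_iter)
        losses = model(batch, task=task)                 # only the labeled task contributes a loss
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    model = DummyMultiTaskModel()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    make = lambda n: [{"x": torch.randn(4, 16), "y": torch.randn(4, 1)} for _ in range(n)]
    alternate_training(model, make(3), make(6), opt)
    print("done")
```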
For a clear visualization, multi-task predictions are shown in the last two rows. The second row is our predictions on 2D object detection, 3D object detection and panoptic segmentation, and the visualized results of depth estimation are shown in the last row. §.§ Relations between panoptic segmentation and depth estimation In the early stage of devising our sharing network, we attempt to figure out the relationship among dense prediction tasks. Simply combining tasks in a unified network results in performance degradation. To investigate the sharing relationship, we designed experiments to co-train panoptic segmentation and depth estimation in separated branches with additional shared parameters. The weighting scores on {𝐌_l} in the instance branch and {𝐅_l} in DR1Conv projections in the dense-branch are visualized in Fig. <ref>, Our co-training on panoptic segmentation and depth estimation tasks revealed information sharing on both the instance and dense branches, with varying degree of feature sharing across branches and feature scales. Taking panoptic segmentation and depth estimation as instance, tasks share contextual features in the instance branch and the sharing extents have no significant changes across different FPN layers, while it shows a decreased tendency of sharing as the size of the dense feature map increases. As a result, we set routers both on different FPN levels and branches. §.§ Results of our D2BNet on single tasks In this section, we take the routers away and validate the effectiveness of our proposed model for single-task on different benchmarks. The performance of panoptic segmentation We compare D2BNet with recent panoptic segmentation networks on the COCO test-dev split. We increase the training iterations to 270K (3 × schedule), tuning the learning rate down at 180K and 240K iterations. The running time is measured on the same machine with the same setting. We use multi-scale training with shorter side randomly sampled from [640, 800]. We run the models with batch size 1 on the whole COCO val2017 split using one GTX 1080Ti GPU. We calculate the time from the start of model inference to the time of final predictions, including the post-processing stage. Results on panoptic segmentation are shown in Table <ref>. Our model achieves the best speed-accuracy trade-off and is two times faster than the mainstream separate frameworks. Particularly, the running time bottleneck for UPSNet <cit.> is the stuff/thing prediction branches and the final fusion stage, which makes the R-50 model almost as costly as the R-101 DCN model. Our method is faster than Panoptic FCN because our instance prediction module is more efficient. We also compare the instance segmentation results on nuImages dataset. On nuImages, we train with the 1× schedule and random resize with short size [720, 1080] and 512 × 1024 crop augmentations. There are no panoptic evaluation protocols for nuImages, so we compare our model with Mask R-CNN implemented in MMDet[<https://github.com/open-mmlab/mmdetection3d/tree/master/configs/nuimages>]. Models both use pretrained weights on COCO. We compare D2BNet with recent instance segmentation networks on the COCO test-dev split. We increase the training iterations to 270K (3 × schedule), tuning the learning rate down at 180K and 240K iterations. All instance segmentation models are implemented with the same code base, Detectron2[<https://github.com/facebookresearch/detectron2>] and The running time is measured on the same machine with the same setting. 
We use multi-scale training with the shorter side randomly sampled from [640, 800]. Results on instance segmentation are shown in Table <ref>. §.§ Additional ablation results on single tasks Effectiveness of dynamic factors in the location-aware module DR1Conv has two dynamic components, 𝐀 and 𝐁, which have the effect of channel-wise modulation before and after the convolution, respectively. Removing both of them reduces our basis module to a vanilla FPN. We train networks with each of these two components masked out. The second row in Table <ref> shows that DR1Conv improves both the thing and stuff segmentation qualities. The combination of the two dynamic factors yields a larger improvement than the sum of the gains from the two factors added individually. Context feature position The contextual information 𝐌 is computed from the features of the box tower of FCOS <cit.>, a crucial difference from self-attention and squeeze-and-excite blocks. To examine this effect, we move the top layer for contextual information computation to the FPN outputs and to the class towers, both of which badly hurt the segmentation performance, AP_75 especially, falling even below the vanilla baseline without dynamism. Results are shown in the third row of Table <ref>. This shows that the correspondence between instance embedding and contextual information is important. Channel width of the dense branch Choosing a proper channel width for the dense branch is also important for panoptic segmentation accuracy. A more compact basis output of size 32 does not affect the class-agnostic instance segmentation result but leads to much worse semantic segmentation quality, which has to discriminate 53 different classes. To accurately measure the influence of different channel widths and make sure all models are fully trained, we train the different models with the 3x schedule. Doubling the channel width from 32 to 64 improves the semantic segmentation quality by 2.1. Results are shown in Table <ref>. Border padding We also notice that border padding can affect semantic segmentation performance. The structural difference between our dense branch and a common semantic segmentation branch is that we incorporate high-level feature maps with strides 64 and 128 for contextual information embedding. We assume that this leads to a dilemma over the padding size. A smaller padding size makes the features spatially misaligned across levels. However, an overly large padding size is very inefficient: making an 800×800 image divisible by 128 adds 25% unnecessary computation cost on the borders. We tackle this problem by introducing a new upsampling strategy which is spatially aligned with the downsampling mechanism of strided convolution and reduces the padding size to the output stride, i.e., 4 in our implementation. Results are shown in Table <ref>; our aligned upsampling strategy requires a minimal padding size while being significantly better in semantic segmentation quality PQ^St. Efficiency of the factored attention We compare the performance and efficiency of different instance prediction modules in Table <ref>. Our factored attention module is almost as efficient as channel-wise modulation and achieves the best performance. In addition, we visualize all components learned by our network in Fig. <ref>.
Position-sensitive attention for panoptic segmentation Unfortunately, even though it is beneficial for instance segmentation, we find that position-sensitive attention has a negative effect on panoptic segmentation. It forces the bases to perform position-sensitive encoding for all classes, even for stuff regions, which is unnecessary and misleading. The panoptic performance for different instance prediction modules is shown in the fourth row of Table <ref>. Using factored attention makes the semantic segmentation quality drop by 2.6 points. Thus, in our panoptic segmentation and multi-task experiments, the factored attention is only used for thing regions, while the stuff masks are predicted directly with convolutional layers. We also study the effect of the dense-module channel width and border padding on segmentation performance. As a result, we choose a channel width of 64 for panoptic segmentation and aligned upsampling to reduce the border padding size. § CONCLUSION In this work, we propose a Dynamic Two-Branched Network (D2BNet) for multi-task perception, aiming to share features as much as possible and leverage common representations among tasks. We break tasks down into two branches, using the instance and dense branches to extract higher- and lower-level information, respectively. We then apply task-specific prediction heads for the final predictions. Cross-branch information communication is performed with a lightweight dynamic operation, DR1Conv. Meanwhile, we use a task- and channel-wise dynamic router to isolate task-specific features and utilize the common properties of tasks. The benefits are twofold: a structure with better feature-sharing properties lays the foundation for joint instance-wise and dense-prediction multi-task learning research, while also reducing the computation cost in real-world applications. § DATA AVAILABILITY STATEMENT Datasets used in this work are all publicly available. The nuScenes and nuImages datasets are available at https://www.nuscenes.org/. Cityscapes is available at https://www.cityscapes-dataset.com/, and MSCOCO at https://cocodataset.org/. § ACKNOWLEDGEMENT This work was supported by the National Key R&D Program of China (No. 2020AAA0106900), the National Natural Science Foundation of China (No. U19B2037, No. 62206244), the Shaanxi Provincial Key R&D Program (No. 2021KWZ-03), and the Natural Science Basic Research Program of Shaanxi (No. 2021JCW-03).
http://arxiv.org/abs/2306.01460v2
20230602113722
ReLU to the Rescue: Improve Your On-Policy Actor-Critic with Positive Advantages
[ "Andrew Jesson", "Chris Lu", "Gunshi Gupta", "Angelos Filos", "Jakob Nicolaus Foerster", "Yarin Gal" ]
cs.LG
[ "cs.LG" ]
This paper introduces a novel method for enhancing the effectiveness of on-policy Deep Reinforcement Learning (DRL) algorithms. Three surprisingly simple modifications to the A3C algorithm: (1) processing advantage estimates through a ReLU function, (2) spectral normalization, and (3) dropout, serve to not only improve efficacy but also yield a “cautious” DRL algorithm. Where on-policy algorithms such as Proximal Policy Optimization (PPO) and Asynchronous Advantage Actor-Critic (A3C) do not explicitly account for cautious interaction with the environment, our method integrates caution in two critical ways: (1) by maximizing a lower bound on the value function plus a constant, thereby promoting a conservative value estimation, and (2) by incorporating Thompson sampling for cautious exploration. In proving that our algorithm maximizes the lower bound, we also ground Regret Matching Policy Gradients (RMPG), a discrete-action on-policy method for multi-agent reinforcement learning. Our rigorous empirical evaluations across various benchmarks demonstrate our approach's improved performance against existing on-policy algorithms. This research represents a substantial step towards efficacious and cautious DRL algorithms, which are needed to unlock applications to complex, real-world problems. § INTRODUCTION Deep Reinforcement Learning (DRL) is a paradigm to approximate solutions to complex sequential decision-making problems in domains such as robotics <cit.>, autonomous driving <cit.>, strategy games <cit.>, and human-computer interaction <cit.>. In recent years, DRL algorithms have achieved state-of-the-art performance on many challenging benchmarks <cit.>. However, their success in real-world applications does not only depend on their capacity to execute tasks while simultaneously refining the equations defining their action policy. It also hinges on cautious policy execution in the face of finite observations of a world in flux to avoid catastrophic results. On-policy algorithms, such as Proximal Policy Optimization (PPO) <cit.> or Asynchronous Advantage Actor-Critic (A3C) <cit.>, incorporate differentiable policies that are updated based on recent interactions with the environment. Such recency bias, and their potential to actively sample informative observations, make on-policy approaches compelling candidates for applications in real-world non-stationary environments. However, neither PPO nor A3C explicitly accounts for cautious environmental interaction. In response, we propose a novel method that explicitly incorporates caution in decision-making in two significant ways: (1) by maximizing a lower-bound on the value function plus a constant to promote algorithmic decision-making under a conservative estimate of value <cit.>; and (2) by integrating careful exploration around action values with higher estimated value via Thompson sampling <cit.>.
Only three surprisingly simple modifications to the A3C algorithm are needed to achieve this: (1) the lower-bound on value is realized by processing advantage estimates through a ReLU function, (2) the additive constant is regularized by applying spectral normalization to promote conservative estimates of value, and (3) Thompson sampling is enabled by adopting dropout and weight normalization. Through our thorough empirical assessments on the Gymnasium and Brax MuJoCo benchmarks for continuous control <cit.>, we show that our approach consistently outperforms existing on-policy algorithms such as PPO and A3C. Furthermore, our method shows competitive performance to these state-of-the-art on-policy methods in environments found in the MinAtar and ClassicControl benchmarks <cit.>. Consequently, this paper offers a novel enhancement to boost the efficacy of on-policy DRL algorithms, underpinned by comprehensive theoretical proof and extensive empirical evidence of its effectiveness. While sufficiently cautious algorithmic interaction with the world is still a distant goal, we hope this research will catalyze the development of further efficacious and careful applications of DRL for solving complex, real-world problems. § BACKGROUND Notation. We consider a discounted, -horizon Markov Decision Process (MDP) defined by the tuple (, , P, , γ), where is the state space, is the action space, P is the state transition probability, is the immediate reward upon transitioning from state to state ^', and γ∈ [0, 1] is the discount factor. MDPs provide a framework for modeling sequential decision-making problems, where an agent interacts with an environment over discrete time steps to achieve a goal <cit.>. Following the notation of <cit.>, we define states at time ∈ by the d-dimensional, real-valued, random variable, _: Ω→⊆^d, with observable instances _ = _(ω_): ∀ω_∈Ω. We define actions by the m-dimensional random variable _: Ω→, with observable instances, _ = _(ω_): ∀ω_∈Ω. Rewards are defined by the continuous-valued random variable, _:Ω→⊆, with observable instances, _ = _(ω_): ∀ω_∈Ω. Let the random variable, _∑_ = + 1^γ^ - 1 - _, denote the discounted return. We use the standard definitions for the conditional action distribution/density (policy), π(|), the state value function under the policy, v_π() _π[ _|_ = ], and state-action value function under the policy, q_π(, ) _π[ _|_ = , _ = ]. On-policy, Actor-critic reinforcement learning. On-policy, Actor-critic approaches to reinforcement learning are called policy-gradient methods, in that they seek to optimize a policy function, π(|, ), differentiable concerning parameters, , to maximize the expected discounted return under the policy, v_π(). On-policy approaches differ from off-policy approaches in that they only use recent observations from the current policy to achieve this objective. Actor-critic methods differ from other policy-gradient methods because they fit an approximate value function (critic), v(, ), to the data collected under the policy, in addition to optimizing the policy function (actor). The critic is typically used in actor optimization but not generally for decision-making. Deep reinforcement learning implements the actor and critic using neural network architectures, where the function parameters correspond to network weights. We denote the parameters of the actor and critic networks as and , respectively. The output likelihood of the actor makes distributional assumptions informed by characteristics of the action space, . 
For continuous action spaces, the likelihood is commonly an independent multivariate normal distribution with homogeneous noise variance, π(_|_, ) ∼𝒩(|μ(, ), Iσ^2()), where σ^2() = (σ^2_1, …, σ^2_m) is the vector of inferred action noise variances. For discrete action spaces, the likelihood is often a categorical distribution, π(_|_, ) ∼Categorical(|μ(, )). In both cases, the mean parameter of the likelihood, μ(, ), is the m-dimensional, vector-valued output of a neural network architecture with parameters, . Critic networks are commonly fit using a mean squared error objective, which corresponds to a univariate normal output likelihood with unit variance, p(g|, ) ∼𝒩(| v(, ), 1), where the mean parameter is the approximate value function, v(, ), and is given by the scalar-valued output of any neural network architecture with parameters, . The baseline on-policy, actor-critic policy gradient algorithm seeks to perform gradient ascent with respect to the “performance” function, J() v_π(_0, ), where v_π(_0, ) is the value function with respect to the parameters . By the policy gradient theorem <cit.>, we have: ∇_ J() = ∇_ v_π(_0) ∝∫_ρ() ∫_ q_π(, ) ∇_π(|, ) d d. <cit.> show that a generalization of this result includes a comparison of the state-action value function, q_π(, ), to an arbitrary baseline that does not vary with the action, . When the baseline is the state value function, v_π(), we have an objective in terms of the advantage function <cit.>, h_π(, ) q_π(, ) - v_π(), namely: ∇_ J() ∝∫_ρ() ∫_ h_π(, ) ∇_π(|, ) d d. This formulation in terms of all actions can be further simplified in terms of observed actions and states as: ∇_ J() ∝_π[ h_π(_, _) ∇_logπ(_|_, )]. We use _π to denote an expectation over states _ and actions _ collected under the policy π(|). In general, because neither the state-action, q_π(, ), nor the state value, v_π(), functions are known, we need an estimator for the advantage function. For compactness, we will focus on the generalized advantage estimator (GAE) proposed by <cit.>: h(_, _, ) = ∑_ = + 1^ (γλ)^ - 1 - δ_ - + 1^, where 0 < λ≤ 1, and δ_^ = _ + γ v(_ + 1; ) - v(_; ) is the temporal difference (TD) residual of the value function with discount, γ <cit.>. The GAE then yields a low-variance gradient estimator for the policy function: ∇_ J() _π[ h(_, _, ) ∇_logπ(_|_, )]. Finally, the actor and critic networks are generally optimized by using mini-batch stochastic gradient descent <cit.> to fit the functions induced by the network weights to a batch of data collected under the current policy, ^b_π = {_i, _i, _i}_i=1^b. § METHODS In this section, we develop our cautious, on-policy actor-critic algorithm. As a reminder, we realize this algorithm by making three simple changes to the A3C algorithm: first, we process advantage estimates through a ReLU function; second, we regularize network weights using spectral normalization; and third, we implement the actor and critic networks as Bayesian Neural Networks to enable Thompson sampling. We provide the theoretical grounding to prove that clipping the advantages during policy optimization results in optimizing a lower bound on the value function plus a constant. We show that under standard assumptions, the constant is equal to the expected, clipped difference in the state value function, γ v_π(') - v_π(), over all actions, , and next-states, ', under the policy given state, , and that we can regularize it using spectral normalization. 
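As a concrete reference for the estimator above, the sketch below computes the GAE with the equivalent backward recursion h_t = δ_t + γλ h_{t+1}, where δ_t = r_t + γ v(s_{t+1}) − v(s_t), and forms the corresponding policy-gradient loss; hyper-parameter values are illustrative.

```python
import torch

def compute_gae(rewards, values, gamma=0.99, lam=0.95):
    """rewards: (T,), values: (T+1,) — the value estimates include the bootstrap state."""
    T = rewards.shape[0]
    advantages = torch.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # TD residual
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages

def policy_gradient_loss(log_probs, advantages):
    # Gradient ascent on E[ h_t * grad log pi(a_t|s_t) ] == descent on the negative.
    return -(advantages.detach() * log_probs).mean()

if __name__ == "__main__":
    T = 5
    rewards, values = torch.rand(T), torch.rand(T + 1)
    log_probs = torch.randn(T, requires_grad=True)
    adv = compute_gae(rewards, values)
    loss = policy_gradient_loss(log_probs, adv)
    loss.backward()
    print(adv, loss.item())
```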
And finally, we detail how to enable cautious exploration via Thompson sampling by adding dropout and weight decay. The following theorem formalizes the main result of our paper. Let, _∑_ = + 1^γ^ - 1 - _, denote the discounted return. Let q_π(, ) = _π[ _|_ = , _ = ], denote the state-action value function, and v_π() = _π[ _|_ = ], denote the state value function, under policy π(|, ). Let (x)^+ max(0, x). Assume, without loss of generality, that rewards, _, are non-negative. Assume that the gradient of the policy, ∇π(|, ), is a conservative vector field. Then, performing gradient ascent with respect to, ∇_ J() = _π[ (q_π(_, _) - v_π(_) )^+ ∇_logπ(_|_, ) ], maximizes a lower-bound, v_π^*(), on the state value function, v_π(), plus a constant: v_π^*() ≤ v_π() + C(), where, C() = ∬( γ v_π(') - v_π() )^+ d(' |_=, _ = ) dΠ(|_=), is the expected, clipped difference in the state value function, γ v_π(') - v_π(), over all actions, , and next states, ', under the policy given state, . Here, we use ∫… dΠ(|) to denote ∑_…π(|) for discrete action spaces and ∫…π(|)d for continuous action spaces. Similarly, we use ∫… d(' |, ) to denote ∑_'… p(' |, ) for discrete state spaces and ∫… p(' |, )d' for continuous state spaces. Proof is provided in <Ref>. Bounding the constant C(). Considering the value function, v_π(), as K-Lipschitz continuous and assuming that the expected value of the value function, v_π(') over next-states, ', is equal to the value function evaluated at the current state, v_π(). Then, when γ =1, the constant is bounded proportional to the expected absolute difference between states. C() = ∬( v_π(') - v_π() )^+ d(' |_=, _ = ) dΠ(|_=) = 1/2∬( v_π(') - v_π() + | v_π(') - v_π() | ) d(' |_=, _ = ) dΠ(|_=) = 1/2∬| v_π(') - v_π() | d(' |_=, _ = ) dΠ(|_=) ≤1/2∬ K|| ' - || d(' |_=, _ = ) dΠ(|_=). This interpretation motivates using spectral normalization <cit.> of the value function estimator weights, v(, ), which regulates the Lipschitz constant, K, of the estimator and can improve performance in the off-policy reinforcement learning setting <cit.>. Moreover, when using the generalized advantage estimator with the same assumptions, the constant is given by: C() = 1/2∬| γλ v_π(') - v_π() | d(' |_=, _ = ) dΠ(|_=). Since γλ < 1, the GAE also serves to regularize the constant. Cautious exploration. We propose Bayesian inference over the actor and critic parameters to enable cautious exploration via Thompson sampling <cit.>. This involves introducing posterior distributions over the policy parameters, q(|_n-1), and value function estimator parameters, q(|_n-1). Here, _n-1 = {_i, _i, _i}_i=1^|_n-1| is data collected under the policy, π(|, _n-1), over a set of horizons, _n-1 = ^n-1_1 ∪^n-1_2 ∪…. In general, any inference technique is permissible. In <Ref>, we outline the procedure for the case of approximate inference using dropout Bayesian Neural Networks (BNNs) following <cit.>. For a dropout BNN, the posterior distribution for the policy parameters is of the form q(|, p), where is the expected value of the parameters, and p is the dropout rate. Similarly, the posterior distribution for the value function parameters is of the form q(|, p), where is the expected value of the parameters, and p is the dropout rate. We optimize each dropout BNN by minimizing the Kullback–Leibler divergence between a prior distribution and its approximate posterior. We term this method VSOP: for Variational [b]ayes, Spectral-normalized, On-Policy reinforcement learning. <Ref> details VSOP for dropout BNNs. 
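A minimal sketch of the three modifications that define VSOP, as we read them from the text, is given below: advantages pass through a ReLU before weighting the log-probabilities, the critic's hidden layers are spectrally normalized, and dropout is left active at action-selection time so that each action is taken under a sampled set of weights (Thompson sampling). Network widths, the dropout rate, and the fixed-variance Gaussian policy are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

def vsop_policy_loss(log_probs, advantages):
    # Only transitions with positive estimated advantage contribute to the update.
    return -(torch.relu(advantages).detach() * log_probs).mean()

def make_critic(obs_dim, hidden=256, p_drop=0.02):
    return nn.Sequential(
        spectral_norm(nn.Linear(obs_dim, hidden)), nn.ReLU(), nn.Dropout(p_drop),
        spectral_norm(nn.Linear(hidden, hidden)), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(hidden, 1),
    )

def make_actor(obs_dim, act_dim, hidden=256, p_drop=0.02):
    return nn.Sequential(
        nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(hidden, act_dim),      # mean of a Gaussian policy
    )

if __name__ == "__main__":
    actor, critic = make_actor(8, 2), make_critic(8)
    actor.train()                        # dropout stays on: each forward pass samples weights
    obs = torch.randn(4, 8)
    mean = actor(obs)
    dist = torch.distributions.Normal(mean, torch.ones_like(mean))
    actions = dist.sample()
    log_probs = dist.log_prob(actions).sum(-1)
    advantages = torch.randn(4)
    print(vsop_policy_loss(log_probs, advantages))
```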
§ RELATED WORKS §.§ On-policy methods VSOP is an on-policy RL algorithm. <Ref> compares the gradient of the performance function, ∇ J(), for VSOP with those for relevant on-policy algorithms. We discuss each algorithm below. Mirror Learning. Trust Region Policy Optimization (TRPO) <cit.> is an on-policy, actor-critic method that improves upon the baseline policy gradient method by incorporating a constraint on the maximum size of policy updates. TRPO takes small steps toward improvement and limits the step size to ensure that the new policy does not deviate significantly from the old policy. TRPO achieves this by optimizing a surrogate objective function that approximates the expected reward under the new policy while imposing a constraint on the KL divergence between the new and old policies. TRPO is effective in various high-dimensional and continuous control tasks. Proximal Policy Optimization (PPO) <cit.>, like TRPO, improves upon the baseline policy gradient method by constraining the maximum size of policy updates. However, instead of using a KL divergence constraint, PPO employs a clipped surrogate objective function to limit the size of policy updates. PPO simplifies the optimization procedure compared to TRPO, making it more computationally efficient and easier to implement. While TRPO and PPO constrain policy updates based on the ratio between the new and old policies, VSOP constrains policy updates according to the sign of the estimated advantage function. As such, PPO and TRPO are instances of the mirror learning framework <cit.>, whereas VSOP does not inherit the same theoretical guarantees. <cit.> explores the Mirror Learning space by meta-learning a “drift” function. They term their immediate result Learned Policy Optimization (LPO). Through its analysis, they arrive at Discovered Policy Optimisation (DPO), a novel, closed-form RL algorithm. Regret Matching Policy Gradient (RMPG) <cit.> is inspired by an objective called regret policy gradient (RPG), which maximizes a lower-bound on the advantages: (h(, ))^+≤ h(, ). RPG directly optimizes the policy for an estimator of the advantage lower-bound, denoted as ∇_ J^RPG(). RMPG, being inspired by RPG, has a different objective, ∇_ J^RMPG(). In both cases, q(, , ) is a parametric estimator of the state-action value function, q_π(, ). RMPG has demonstrated improved sample efficiency and stability in learning compared to standard policy gradient methods. VSOP is closely related to RMPG; however, we provide the missing theoretical foundations to ground RMPG (<Ref>), extend RMPG from the all actions formulation making it more suitable for continuous control (<Ref>), and employ the GAE rather than the state-action value function estimator, q(, , ). Risk Sensitive Reinforcement Learning. Instead of optimizing expected value, risk-sensitive RL methods optimize a measure of risk. <cit.> propose the risk-averse CVaR-PG which seeks to minimize the Conditional Value at Risk (CVaR), Φ(θ) _π[ _|_≤ν_α], where ν_α is the α-quantile of the return, _, distribution under the policy, π(|, ). Relatedly, <cit.> have used the CVaR as a baseline function for standard policy updates. By focusing only on the worse case trajectories, CVaR-PG is susceptible to “blindness to success,” thus <cit.> propose a Cross-entropy Soft-Risk algorithm (CeSoR) to address this. <cit.> and <cit.> also propose uncertainty aware, risk-averse methods. 
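For comparison with the positive-advantage update, the sketch below shows the standard PPO clipped surrogate referenced above, which constrains updates through the probability ratio rather than through the sign of the advantage; ε = 0.2 is the usual default.

```python
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, eps=0.2):
    ratio = torch.exp(new_log_probs - old_log_probs)           # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()               # maximize the surrogate

if __name__ == "__main__":
    new_lp = torch.randn(6, requires_grad=True)
    old_lp = new_lp.detach() + 0.05 * torch.randn(6)
    adv = torch.randn(6)
    loss = ppo_clip_loss(new_lp, old_lp, adv)
    loss.backward()
    print(loss.item())
```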
For model-based policy gradient methods, <cit.> propose Ensemble Policy Optimization (EPOpt), which incorporates restricting policy updates to be risk-averse based on the CVaR and uses ensembles to sample hypothesized models. In contrast to the above risk-averse methods, <cit.> present Risk Seeking Policy Gradient (RSPG) which focuses on maximizing best-case performance by only performing gradient updates when rewards exceed a specified quantile of the reward distribution. <cit.> provide a comprehensive discussion on risk-sensitive RL. §.§ Off-policy methods Self Imitation Learning (SIL) <cit.> is a hybrid method that uses clipped advantage estimates to improve the performance of on-policy algorithms such as PPO and A2C by learning from its successful off-policy trajectories. By leveraging experience replay, SIL encourages the agent to imitate its high-reward actions. Self Imitation Advantage Learning (SIAL) <cit.> extends SIL to the off-policy domain. SIAL uses the clipped advantage function to weigh the importance of different actions during self-imitation, enabling the agent to focus on actions that yield higher long-term rewards. Importantly, even though SIL and SIAL only update policies when advantage estimates are positive, they differ from VSOP in that they are off-policy algorithms that learn from successful past trajectories and optimize different objectives based on max-entropy reinforcement learning <cit.>. §.§ Thompson Sampling in Deep Reinforcement Learning Thompson sampling has been extensively explored in conventional and Deep Q-Learning <cit.> to improve exploration and sample efficiency. <cit.> and <cit.> propose similar sampling-based exploration strategies for Deep Q-Learning. <cit.> propose a Thompson sampling strategy based on an ensemble of quantile estimators of the state-action value distribution. In the context of policy gradient methods, related Upper Confidence Bound (UCB) <cit.> and Hamiltonian Monte-Carlo (HMC) <cit.> approaches are proposed for off-policy Soft Actor-Critic (SAC) <cit.>, and <cit.> proposes an elliptical episodic reward for general use. <cit.> propose Selective Noise Injection using fixed dropout masks to sample policies and then actions, but stop short of formalizing this as Thompson sampling. Similarly for <cit.>. We believe our work is the first to formalize and show the benefit of Thompson sampling for on-policy actor-critic methods. § EXPERIMENTS We comprehensively evaluate VSOP against on-policy RL methods across various domains, including continuous and discrete action spaces and diverse dimensionalities in both the action and observation spaces. Furthermore, we evaluate our method using both PyTorch <cit.> and JAX <cit.> frameworks. In <Ref>, we compare VSOP to baseline implementations of PPO, A3C, and RMPG on the Gymnasium <cit.> implementation of MuJoCo <cit.> for continuous control (<Ref>). In this setting, we further ablate the effect that positive advantages, spectral normalization, and Thompson sampling each has on performance (<Ref>), investigate the relationship between Thompson sampling and asynchronous parallelization (<Ref>), show that spectral normalization and Thompson sampling also have non-negligible positive effects for PPO (<Ref>), and offer comparison to off-policy approaches like SAC <cit.> and Twin Delayed DDPG (TD3) <cit.> (<Ref>). 
In <Ref>, we exploit the fast iteration cycles offered by vectorized JAX implementations and the gymnax framework <cit.> to perform fair comparisons of VSOP, PPO, A2C, and DPO under equal hyper-parameter search budgets. §.§ Gymansium MuJoCo For this evaluation, we build off of <cit.>'s https://github.com/vwxyzjn/cleanrlCleanRL package which provides reproducible, user-friendly implementations of state-of-the-art reinforcement learning algorithms using PyTorch <cit.>, Gymnasium <cit.>, and Weights & Biases <cit.>. Overall, several code-level optimizations for PPO reproducibility <cit.> are superfluous for our method in this setting. For example, we omit advantage normalization, value loss clipping <cit.>, gradient clipping, and modification of the default Adam <cit.> epsilon parameter as they either do not lead to an appreciable difference in performance or have a slightly negative effect. However, we find that orthogonal weight initialization, learning rate annealing, reward scaling/clipping, and observation normalization/clipping have non-negligible positive effects on performance <cit.>. In addition to adding dropout, weight decay regularization, and spectral normalization, we also look at model architecture modifications not present in the CleanRL implementation: layer width, number of hidden layers, layer activation, layer normalization <cit.>, and residual connections. We find that ReLU activation functions <cit.>, increasing layer width to 256, and a dropout rate of 0.01-0.04 are beneficial. We find that network depth and residual connections are benign overall. In contrast to recent findings for offline data in off-policy reinforcement learning <cit.>, layer normalization — whether applied to the actor, the critic, or both — is detrimental to performance. We give full details in <Ref>. §.§.§ Comparison to on-policy baselines. First, we compare tuned VSOP to baseline implementations: PPO, A2C, and RMPG. We use the CleanRL <cit.> implementation of PPO, the StableBaselines3 <cit.> hyper-parameter settings for A2C, and the VSOP optimal hyper-params for RMPG. <Ref> summarizes these results. VSOP improves over baseline PPO in five environments, matches its performance in four environments, and is worse in just one environment, Pusher. VSOP improves over A3C in all environments but Pusher, where performance is statistically equal. Finally, VSOP improves over RMPG in all environments. §.§.§ Ablation of mechanisms. Next, we investigate the influence of our four proposed mechanisms on the performance of VSOP. For reference, the mechanisms are positive advantages, single-action setting, spectral normalization, and Thompson sampling. <Ref> summarizes these results, where we see that positive advantages and operating in the single-action regime impact performance on MuJoCo significantly. Spectral normalization and Thompson sampling also influence performance on MuJoCo positively, especially in high-dimensional action and observation space settings such as Humanoid, Humanoid Stand-Up, and Ant. The performance gains for spectral normalization align with results given by <cit.> and <cit.> for DDPG <cit.>, DRQ <cit.>, Dreamer <cit.>, DQN <cit.> and C51 <cit.>. §.§.§ Closing the gap to off-policy methods Interestingly, we see that applying spectral normalization and dropout to PPO also yields an improvement. We call this augmentation VSPPO and provide detailed analysis in <Ref>. In <Ref>, we compare VSOP and VSPPO to SAC and TD3. 
We close the performance gap significantly for environments such as Humanoid, Half-Cheetah, Ant, and Humanoid Stand-up. §.§ Gymnax Environments PureJaxRL <cit.> uses Gymnax <cit.> and Jax <cit.> to enable vectorization, which facilitates principled hyper-parameter tuning. Using it, we explore several environments and compare VSOP, PPO, A3C, and DPO. We use Bayesian hyper-parameter optimization <cit.> and give each algorithm a search budget of 100 steps. We search over hyper-parameters such as the learning rate, number of update epochs, number of mini-batches in an update epoch, the GAE λ parameter, the max gradient norm, and the width of the network. We give full implementation details in <Ref>. <Ref> shows the overall ranking of each method. VSOP is competitive with DPO and improves over PPO and A3C. <Ref> summarize the results for Classic Control. Performance of each method is in general statistically equal, but VSOP shows significant gain on MountainCar Continuous. <Ref> summarize the results for MinAtar <cit.>. VSOP shows significant improvement over PPO and A3C in Space Invaders. We see marginal improvement over PPO and DPO in Breakout, with significant improvement over A3C. VSOP trails the baselines significantly in Asterix and Freeway. <Ref> summarize the results for Brax MuJoCo <cit.>. We perform paired t-tests for the last episode between each method and VSOP. We threshold at a p-value of 0.1 to indicate significance. VSOP significantly outperforms A3C in all environments. VSOP significantly outperforms PPO in four of nine environments (InvertedDoublePendulum, Pusher, Reacher, and Walker2d), is statistically equivalent in two environments (Hopper and HumanoidStandUp), and is significantly less effective in three environments (Ant, HalfCheetah, and Humanoid). VSOP outperforms DPO on Ant, is statistically equivalent in four environments (HumanoidStandUp, Pusher, Reacher, and Walker2d), but is significantly less effective in four environments (HalfCheetah, Hopper, Humanoid, and InvertedDoublePendulum). Overall, VSOP outperforms A3C and PPO and is competitive with DPO. § CONCLUSION We have presented a novel approach for improving the performance of on-policy DRL algorithms by incorporating cautious interaction. Our method realized through simple modifications to the A3C algorithm, optimizes a lower bound on value plus a constant and integrates exploration via Thompson sampling. We provide a theoretical justification for our approach by demonstrating that our algorithm optimizes this lower bound. Our empirical evaluations across several diverse benchmarks confirm our approach's improved performance compared to existing on-policy algorithms. Although achieving sufficiently cautious algorithmic interaction with the world remains a distant goal, our research constitutes a significant stride toward this objective. We trust that our work will catalyze further advancements in the field, propelling the development of more cautious and efficacious DRL applications in resolving complex, real-world problems. § BROADER IMPACT Algorithmic decision-making is becoming increasingly present in many areas of our life. While this has the potential for benefit, it is also known to automate and perpetuate historical patterns that are often unjust and discriminatory <cit.>. We believe that cautious interaction is a necessary feature for the type of deployed algorithmic decision-making systems the RL community envisions, but that technological solutions alone will not suffice. 
§ ACKNOWLEDGEMENTS AJ would like to thank Luisa Zintgraf and Panagiotis Tigas for the crash course in reinforcement learning. The authors would like to thank everyone who engaged with this https://twitter.com/anndvision/status/1622915369131180033?s=46 t=MBxzmV7t6dGtGBUvQsCDtgTwitter thread. Specifically, we would like to thank Johan Ferret for highlighting Self-Imitation Advantage Learning, Wilka Carvalho for highlighting Self-Imitation Learning, Nathan Grinsztajn for highlighting Risk Seeking Policy Gradients, Ohad Rubin for highlighting Discovered Policy Optimization, and Marc Lanctot for the detailed discussion on Regret Matching Policy Gradients. The authors would like to thank Jannik Kossen for brainstorming the title. Finally, the authors thank Jacob Beck and all anonymous reviewers for their valuable feedback and suggestions. plainnat § THEORETICAL RESULTS §.§ Proof of Theorem 1 Let, _∑_ = + 1^γ^ - 1 - _, denote the discounted return. Let q_π(, ) = _π[ _|_ = , _ = ], denote the state-action value function, and v_π() = _π[ _|_ = ], denote the state value function, under policy π(|, ). Let (x)^+ max(0, x). Assume, without loss of generality, that rewards, _, are non-negative. Assume that the gradient of the policy, ∇π(|, ), is a conservative vector field. Then, performing gradient ascent with respect to, ∇_ J() = _π[ (q_π(_, _) - v_π(_) )^+ ∇_logπ(_|_, ) ], maximizes a lower-bound, v_π^*(), on the state value function, v_π(), plus a constant: v_π^*() ≤ v_π() + C(), where, C() = ∬( γ v_π(') - v_π() )^+ d(' |_=, _ = ) dΠ(|_=), is the expected, clipped difference in the state value function, γ v_π(') - v_π(), over all actions, , and next states, ', under the policy given state, . Here, we use ∫… dΠ(|) to denote ∑_…π(|) for discrete action spaces and ∫…π(|)d for continuous action spaces. Similarly, we use ∫… d(' |, ) to denote ∑_'… p(' |, ) for discrete state spaces and ∫… p(' |, )d' for continuous state spaces. <Ref> shows that the policy-gradient theorem <cit.> can be expressed in terms of the clipped advantage function, mypurpleh_π^+(, ) = mypurple(q_π(, ) - v_π() )^+mypurplemax(0, q_π(, ) - v_π()), as, ∇ v_π() = ∫_∑_k=0^∞[ γ^k ∫_mypurpleh_π^+(𝐱, )∇ dΠ(|𝐱) ] d(→𝐱; k, π) + ∫_∑_k=0^∞[ γ^k ∫_1(q_π(𝐱, ) > v_π(𝐱) ) v_π(𝐱) ∇ dΠ(|𝐱) ] d(→𝐱; k, π) + ∫_∑_k=0^∞[ γ^k ∫_1(q_π(𝐱, ) ≤ v_π(𝐱) ) q_π(𝐱, ) ∇ dΠ(|𝐱) ] d(→𝐱; k, π), where, (→𝐱; k, π), is the probability of transitioning from state to state 𝐱 in k steps under policy π. The first right hand side term above defines the gradient of the lower-bound, v_π^*(), with respect to : ∇ v_π^*() ∫_∑_k=0^∞[ γ^k ∫_mypurpleh_π^+(𝐱, )∇ dΠ(|𝐱) ] d(→𝐱; k, π). Letting, v_π^*(_0)=∫_∑_k=0^∞γ^k ∫_ h_π^+(, ) ∇ dΠ(|) d(_0 →; k, π), a straightforward continuation of the policy gradient theorem <cit.> will show that ∇ J() ∇ v_π^*(_0) ∝∬ h_π^+(, ) ∇_ dΠ(|, ) dP(). We then arrive at <Ref> by moving from the all states/actions to single state/action formulation: ∇ J() ∇ v_π^*(_0), by definition ∝∬(q_π(, ) - v_π() )^+ ∇_ dΠ(|, ) dP(), <cit.> = _π[ ∫(q_π(_, ) - v_π(_) )^+ ∇_ dΠ(|_, ) ], = _π[ ∫(q_π(_, ) - v_π(_) )^+ ∇_ dΠ(|_, )/dΠ(|_, dΠ(|_, ], = _π[ ∫(q_π(_, _) - v_π(_) )^+ ∇_logπ(_|_, )]. Now we need to show that, v_π^*() ≤ v_π() + ∬( γ v_π(') - v_π() )^+ d(' |_=, _) dΠ(|_=). To do so, we will first prove that it holds for episodes, , of length 1, then that it holds for episodes of length 2. These two proofs will then prove <Ref> for episodes of arbitrary length by mathematical induction and conclude the proof. 
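Before working through the episode-length cases below, note that the decomposition driving the proof, q_π(s, a) = h_π^+(s, a) + 1(q_π > v_π) v_π(s) + 1(q_π ≤ v_π) q_π(s, a), is a pointwise identity. The following throwaway NumPy check (ours, purely illustrative) makes this explicit for arbitrary values.

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=1000)          # arbitrary state-action values q_pi(s, a)
v = rng.normal(size=1000)          # arbitrary state values v_pi(s)

h_plus = np.maximum(q - v, 0.0)    # clipped advantage (q - v)^+
recomposed = h_plus + np.where(q > v, v, q)

# q = h^+ + 1(q > v) v + 1(q <= v) q holds pointwise.
assert np.allclose(recomposed, q)
```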
For episodes of length 1, |T| = 1, we have ∇ v_π() = ∫ q_π(, ) ∇ dΠ(|) + ∫∇ q_π(, ) dΠ(|), = ∫ q_π(, ) ∇ dΠ(|) + ∫( ∇∫ d(|, ) ) dΠ(|), = ∫ q_π(, ) ∇ dΠ(|), = ∫ h_π^+(, ) ∇ dΠ(|) + ∫( 1(q_π > v_π) v_π() + 1(q_π≤ v_π) q_π(, ) ) ∇ dΠ(|). Therefore, for |T| = 1, ∇ v_π^*() = ∫ h_π^+(, ) ∇ dΠ(|) In order to recover v_π^*(), we need to use the work of <cit.> to define an inverse function for the gradient. Assume that the policy, π(|, ), is a smooth, infinitely differentiable function with respect to . Further, let the gradient of the policy, ∇π(|, ) = [ ∂/∂θ_1π(|, θ_1),; ⋮; ∂/∂θ_kπ(|, θ_k) ], be a conservative vector field. We call β̃(∇π(|, )) the inverse of the gradient operation, ∇π(|, ). Assuming that π(|, ) is a representative of β̃, we have that, π(|, ) = β̃(∇π(|, )), = ∫_γ∇π(|, ) d𝐱, = ∫_γ∂/∂θ_1π(|, θ_1) dθ_1 + … + ∂/∂θ_kπ(|, θ_k) dθ_k, where γ is a path from the fixed reference point, _0, to . The conservativeness of ∇π(|, ) guarantees that the integrals are path independent. Now we have, v_π^*() = β̃( ∫ h_π^+(, ) ∇ dΠ(|) ), = ∫ h_π^+(, ) β̃(∇ dΠ(|) ), linearity = ∫ h_π^+(, ) dΠ(|), <Ref> ≤∬( + ( γ v_π(') - v_π() )^+ ) d(', |, )dΠ(|), <Ref> = v_π() + ∬( γ v_π(') - v_π() )^+ d(' |, )dΠ(|), |T| = 1 which concludes the proof for episodes of length 1. For episodes of length 2, |T| = 2, we have ∇ v_π() = ∫ q_π(, ) ∇ dΠ(|) + ∫∇ q_π(, ) dΠ(|), = ∫ q_π(, ) ∇ dΠ(|) + ∭ q_π(', ') ∇ dΠ(' |') d(' |, ) dΠ(|) + ∭( ∇∫' d(' |', ') ) dΠ(' |'), = ∫ q_π(, ) ∇ dΠ(|) + ∭ q_π(', ') ∇ dΠ(' |') d(' |, ) dΠ(|), = ∫ h_π^+(, ) ∇ dΠ(|) + ∭ h_π^+(', ') ∇ dΠ(' |') d(' |, ) dΠ(|) + ∫( 1(q_π > v_π) v_π() + 1(q_π≤ v_π) q_π(, ) ) ∇ dΠ(|) + ∭( 1(q_π > v_π) v_π(') + 1(q_π≤ v_π) q_π(', ') ) ∇ dΠ(' |')d(' |, ) dΠ(|). Therefore, for |T| = 2, ∇ v_π^*() = ∫ h_π^+(, ) ∇ dΠ(|) + ∭ h_π^+(', ') ∇ dΠ(' |') d(' |, ) dΠ(|). Finally, we apply the β̃ operator: v_π^*() = β̃(∫ h_π^+(, ) ∇ dΠ(|) + ∭ h_π^+(', ') ∇ dΠ(' |') d(' |, ) dΠ(|)), = ∫ h_π^+(, ) β̃(∇ dΠ(|)) + ∭ h_π^+(', ') β̃(∇ dΠ(' |')) d(' |, ) dΠ(|), linearity = ∫ h_π^+(, ) dΠ(|) + ∭ h_π^+(', ') dΠ(' |') d(' |, ) dΠ(|), <Ref> ≤∬ d(|, ) dΠ(|) + ∬( γ v_π(') - v_π() )^+ d(' |, )dΠ(|) + ∭ h_π^+(', ') dΠ(' |') d(' |, ) dΠ(|), <Ref> ≤∬ d(|, ) dΠ(|) + ∬( γ v_π(') - v_π() )^+ d(' |, )dΠ(|) + ∬γ v_π(') d(' |, ) dΠ(|), <Ref> = v_π() + ∬( γ v_π(') - v_π() )^+ d(' |, )dΠ(|). rearranging terms ∇ v_π() can be written in terms of h_π^+(, ). 
∇ v_π() = ∇[ ∫ q_π(, ) dΠ(|) ], = ∫ q_π(, ) ∇ dΠ(|) + ∫∇ q_π(, ) dΠ(|), = ∫( mygreenh_π^+(, ) + myfuchsia1(q_π > v_π) v_π() + mypurple1(q_π≤ v_π) q_π(, )) ∇ dΠ(|) + ∫∇ q_π(, ) dΠ(|), = ∫( mygreenh_π^+(, ) + myfuchsia1(q_π > v_π) v_π() + mypurple1(q_π≤ v_π) q_π(, )) ∇ dΠ(|) + ∫∇[ ∫( + γ v_π(') ) d(', |, ) ] dΠ(|), = ∫( mygreenh_π^+(, ) + myfuchsia1(q_π > v_π) v_π() + mypurple1(q_π≤ v_π) q_π(, )) ∇ dΠ(|) + γ∬∇ v_π(') d(' |, ) dΠ(|), = ∫( mygreenh_π^+(, ) + myfuchsia1(q_π > v_π) v_π() + mypurple1(q_π≤ v_π) q_π(, )) ∇ dΠ(|) + γ∬[ ∫ q_π(', ') ∇ dΠ(' |') + γ∫∇ v_π(”) d(”|', ') dΠ(' |') ] d(' |, ) dΠ(|), = ∫( mygreenh_π^+(, ) + myfuchsia1(q_π > v_π) v_π() + mypurple1(q_π≤ v_π) q_π(, )) ∇ dΠ(|) + γ∬[ ∫( mygreenh_π^+(', ') + myfuchsia1(q_π > v_π) v_π(') + mypurple1(q_π≤ v_π) q_π(', ')) ∇ dΠ(' |') + γ∫∇ v_π(”) d(”|', ') dΠ(' |') ] d(' |, ) dΠ(|), = ∫_∑_k=0^∞[ γ^k ∫_mygreenh_π^+(𝐱, )∇ dΠ(|𝐱) ] d(→𝐱; k, π) + ∫_∑_k=0^∞[ γ^k ∫_myfuchsia1(q_π(𝐱, ) > v_π(𝐱) ) v_π(𝐱)∇ dΠ(|𝐱) ] d(→𝐱; k, π) + ∫_∑_k=0^∞[ γ^k ∫_mypurple1(q_π(𝐱, ) ≤ v_π(𝐱) ) q_π(𝐱, )∇ dΠ(|𝐱) ] d(→𝐱; k, π) v_π^v_π() ≤∬ d(|, )dΠ(|) + ∬( γ v_π(') - v_π() )^+ d(' |, )dΠ(|) v_π^v_π() ∫ h_π^+(, ) dΠ(|) = 1/2∫( q_π(, ) - v_π + |q_π(, ) - v_π| ) dΠ(|) (2max(0, a) = a + |a|) = 1/2∫( ∫( + γ v_π(') - v_π() ) d(', |, ) + | ∫( + γ v_π(') - v_π() ) d(', |, ) | ) dΠ(|) ≤1/2∬( + γ v_π(') - v_π() + | + γ v_π(') - v_π() | ) d(', |, ) dΠ(|) (Jensen's inequality) ≤1/2∬( 2 + γ v_π(') - v_π() + | γ v_π(') - v_π() | ) d(', |, ) dΠ(|) (triangle inequality) = ∬( + ( γ v_π(') - v_π() )^+ ) d(', |, )dΠ(|) (2max(0, a) = a + |a|) When, without loss of generality, rewards, _, are assumed to be non-negative: v_π^v_π() ∫ h_π^+(, ) dΠ(|) ≤ v_π() ∫ h_π^+(, ) dΠ(|) = 1/2∫( q_π(, ) - v_π + |q_π(, ) - v_π| ) dΠ(|) ( 2max(0, a) = a + |a| ) ≤∫ q_π(, ) dΠ(|) (triangle inequality) = v_π() §.§ Relation to Regret Matching Policy Gradient (RMPG) Here we provide a derivation starting from RMPG and arriving at our method. ∇ J() = 𝔼_π^[ ∫_( q_π^(_, ) - ∫_π(^'|_, ) q_π^(_, ^') d^')^+∇_π(|_, ) d] = 𝔼_π^[ ∫_( q_π^(_, ) - v_π^(_) )^+∇_π(|_, ) d] = 𝔼_π^[ ∫_ h_π^+(_, ) ∇_π(|_, ) d] = 𝔼_π^[ ∫_π(|_, ) h_π^+(_, ) ∇_π(|_, )/π(|_, ) d] = 𝔼_π^[ h_π^+(_, _) ∇_π(_|_, )/π(_|_, )] = _π^[ h_π^+(_, _) ∇_logπ(_|_, )] § IMPLEMENTATION DETAILS All code is available at: https://github.com/anndvision/vsophttps://github.com/anndvision/vsop. §.§ Gymansium We build off of <cit.>'s https://github.com/vwxyzjn/cleanrlCleanRL package which provides reproducible, user-friendly implementations of state-of-the-art reinforcement learning algorithms using PyTorch <cit.>, Gymnasium <cit.>, and Weights & Biases <cit.>. Several code-level optimizations <cit.> key to PPO reproducibility are superfluous for our method. We omit advantage normalization, value loss clipping <cit.>, gradient clipping, and modification of the default Adam <cit.> epsilon parameter as they either do not lead to an appreciable difference in performance or have a slightly negative effect. However, we find that orthogonal weight initialization, learning rate annealing, reward scaling/clipping, and observation normalization/clipping remain to have non-negligible positive effects on performance <cit.>. In addition to adding dropout, weight decay regularization, and spectral normalization, we also look at model architecture modifications not present in the CleanRL implementation: layer width, number of hidden layers, layer activation, layer normalization <cit.>, and residual connections. 
We find that ReLU activation functions <cit.>, increasing layer width to 256, and a dropout rate of 0.01-0.04 are beneficial. We find that network depth and residual connections are benign overall. In contrast to recent findings in the context of offline data for off-policy reinforcement learning <cit.>, layer normalization — whether applied to the actor, the critic, or both — is detrimental to performance. In <Ref>, we present the hyperparameters used for the VSOP, VSPPO, RMPG, A3C, and PPO algorithms when trained on Gymnasium MuJoCo environments. The table lists hyperparameters such as the number of timesteps, thread number, and learning rate, among others. Each algorithm may have a unique set of optimal hyperparameters. Please note that some hyperparameters: 'clip ϵ', 'norm. adv.', and 'clip v-loss' may not apply to all algorithms, as these are specific to certain policy optimization methods. The 'width' and 'activation' fields correspond to the architecture of the neural network used by the policy, and the 'weight decay' and 'dropout' fields pertain to the regularization techniques applied during training. In general, tuning these hyperparameters is crucial to achieving optimal performance. Note that Adam optimization <cit.> is used for all algorithms except for A3C where RMSProp <cit.> is used. We report median values and standard error measurements over ten random seeds. §.§ Gymnax We optimize the hyper-parameters for each algorithm for each set of environments using a Bayesian optimization search strategy <cit.>. Each algorithm has a budget of 100 search steps. We use NVIDIA A100 GPUs. The hyperparameters we search over include learning rate, number of steps, number of environments, GAE λ, update epochs, number of minibatches, and the maximum gradient norm. We also search over the hidden layer width for Brax-MuJoCo and MinAtar environments. Each hyperparameter has a specific search space and transformation applied during the search. We summarize the search sapce in <Ref>. For the MinAtar environments, the hyper-parameters search spaces are: the number of steps in [2, 8] (transformed to 2^x where x is the integer part of the sample), GAE λ in [0.0, 1.0] (rounded to the nearest multiple of 0.002), learning rate in [1e-4, 1e-3] (rounded to the nearest multiple of 0.00005), update epochs in [1, 10] (rounded to the nearest integer), maximum gradient norm in [0.0, 5.0] (rounded to the nearest multiple of 0.1), number of minibatches in [0, 6] (transformed to 2^x), update epochs in [1, 10] (rounded to the nearest integer), and number of minibatches in [0, 7] (transformed to 2^x), and hidden layer width in [6, 10] (transformed to 2^x). We set the γ and number of environments to fixed values at 0.99 and 64, respectively. For MuJoCo-Brax, we do not search over the number of environments or steps. Instead we set them to fixed values at 0.99, 2048, and either 10 or 5, respectively. The search space for the remaining hyper-parameters the same ranges as for the MinAtar environments. Further, we only optimize over the Humanoid, Hopper, and Reacher environments for 20 million steps. We test for each environment for 50 million steps. Finally, for Classic Control environments, we employ the same hyperparameter search as for MinAtar, except that we search over the number of environments in [2, 8] (transformed to 2^x where x is the integer part of the sample) and we do not search over the hidden layer width, instead setting it to a fixed value of 64. 
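To make the sampling transformations explicit, here is one way the MinAtar search space could be written down in code. The dictionary layout and the `realize` helper are our own; only the ranges and rounding rules are taken from the description above (where the text gives two ranges for the number of minibatches, we use [0, 6]).

```python
# Raw ranges and the transformations applied to sampled values (MinAtar setting).
# gamma and the number of environments are held fixed at 0.99 and 64.
minatar_space = {
    "num_steps":       {"range": (2, 8),       "transform": lambda x: 2 ** int(x)},
    "gae_lambda":      {"range": (0.0, 1.0),   "transform": lambda x: round(x / 0.002) * 0.002},
    "learning_rate":   {"range": (1e-4, 1e-3), "transform": lambda x: round(x / 5e-5) * 5e-5},
    "update_epochs":   {"range": (1, 10),      "transform": round},
    "max_grad_norm":   {"range": (0.0, 5.0),   "transform": lambda x: round(x, 1)},
    "num_minibatches": {"range": (0, 6),       "transform": lambda x: 2 ** int(x)},
    "hidden_width":    {"range": (6, 10),      "transform": lambda x: 2 ** int(x)},
}

def realize(space: dict, raw: dict) -> dict:
    """Map raw samples from the tuner onto usable hyperparameter values."""
    return {name: spec["transform"](raw[name]) for name, spec in space.items()}
```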
This strategy allows us to thoroughly explore the hyperparameter space and find values that generalize well across a variety of tasks. Further, it allows us to compare each algorithm fairly. <Ref> reports the final hyper-parameter values for PPO, VSOP, and A3C. All reported results for MinAtar, Classic Control, and MuJoCo-Brax are given as mean values and 68% confidence intervals over 20 random seeds. During tuning we use 2 random seeds, and for testing we use a different set of 20 random seeds, as per the guidance of <cit.>. § ADDITIONAL RESULTS §.§ Comparing the effects of asynchronous parallelization and Thompson sampling When tuning on the MuJoCo-Brax environments, we found that the positive effect of Thompson sampling on performance was diminished. In the MuJoCo-Brax setting we used asynchronous parallelization with 2048 environments and just 10 steps per environment, i.e., 20480 steps per model update, whereas in the Gymnasium setting we use just 1 environment and 2048 steps per update. <Ref> summarizes an investigation into whether parallelization and/or update frequency mitigates the positive effects of Thompson sampling. This investigation is still ongoing and we leave it for follow-up work. We do see that Thompson sampling is necessary in the single-environment setting: red-solid vs red-dashed lines. We also see that decreasing the update frequency and increasing parallelization seems to yield better results when no dropout is applied. This can be seen by comparing the smaller difference between solid and dashed purple lines (256 threads, 32768 steps per update) with the larger difference between solid and dashed orange lines (16 threads, 2048 steps per update). This is a progressive trend as we move through red, orange, yellow, green, blue, and purple. The trend is stable but more pronounced as we decrease the mini-batch size. §.§ Spectral norm and Thompson sampling improve PPO Interestingly, we see this same trend when applying spectral normalization and dropout to PPO. In <Ref> we compare VSOP to the original PPO and to our own implementation that adds Thompson sampling and spectral normalization, VSPPO. In <Ref> we compare how Thompson sampling and spectral norm affect PPO.
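To illustrate how these two mechanisms enter the networks, the sketch below assembles an actor and a critic with the width, activation, and dropout settings reported in the main text (width 256, ReLU, small dropout), and with spectral normalization on the value network as the theory suggests. It is a simplified stand-in for, not a copy of, the actual CleanRL-based networks; the exact placement of spectral norm, the dropout rate of 0.02, and the layer names are our assumptions within the reported ranges.

```python
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

def make_critic(obs_dim: int, width: int = 256, p_drop: float = 0.02) -> nn.Module:
    """Value head: spectral norm regulates the Lipschitz constant of v(s);
    dropout provides the approximate posterior used for Thompson sampling."""
    return nn.Sequential(
        spectral_norm(nn.Linear(obs_dim, width)), nn.ReLU(), nn.Dropout(p_drop),
        spectral_norm(nn.Linear(width, width)), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(width, 1),
    )

def make_actor(obs_dim: int, act_dim: int, width: int = 256, p_drop: float = 0.02) -> nn.Module:
    """Mean head of a Gaussian policy with dropout over its parameters."""
    return nn.Sequential(
        nn.Linear(obs_dim, width), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(width, width), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(width, act_dim),
    )
```

One way to realize Thompson sampling with such networks is to keep the dropout layers active while collecting rollouts, so that each batch of interactions is gathered under a single sampled set of weights.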
http://arxiv.org/abs/2306.04262v3
20230607090019
Self-Adjusting Weighted Expected Improvement for Bayesian Optimization
[ "Carolin Benjamins", "Elena Raponi", "Anja Jankovic", "Carola Doerr", "Marius Lindauer" ]
cs.LG
[ "cs.LG" ]
Acronyms: DoE, design of experiment; BO, Bayesian optimization; AF, acquisition function; EI, Expected Improvement; WEI, Weighted Expected Improvement; SAWEI, Self-Adjusting Weighted Expected Improvement; PI, Probability of Improvement; UCB, Upper Confidence Bound; LCB, Lower Confidence Bound; ELA, exploratory landscape analysis; GP, Gaussian Process; TTEI, Top-Two Expected Improvement; TS, Thompson Sampling; DAC, Dynamic Algorithm Configuration; AC, Algorithm Configuration; CMA-ES, CMA-ES; AS, algorithm selection; PIAS, per-instance algorithm selection; PIAC, per-instance algorithm configuration; AFS, Acquisition Function Selector; VBS, virtual best solver; RF, random forest; UBR, Upper Bound Regret; IQM, interquartile mean. Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. The BO pipeline itself is highly configurable with many different design choices regarding the initial design, surrogate model, and acquisition function (AF). Unfortunately, our understanding of how to select suitable components for a problem at hand is very limited. In this work, we focus on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions. We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner, based on a convergence criterion for BO. On the noise-free black-box BBOB functions of the COCO benchmarking platform, our method exhibits a favorable anytime performance compared to handcrafted baselines and serves as a robust default choice for any problem structure. The suitability of our method also transfers to HPOBench. With SAWEI, we are a step closer to on-the-fly, data-driven, and robust BO designs that automatically adjust their sampling behavior to the problem at hand. § INTRODUCTION Black-box problems are challenging to optimize because we do not have direct access to the underlying structure of the problem landscape. 
To optimize them, we can sequentially evaluate different points x and use the obtained objective values f(x) to choose which point(s) to evaluate next, but we do not have a priori information where to find the most promising regions or how to best trade off exploration of the search space with exploitation of regions that appear to be very promising. Formally, in black-box optimization we want to find the minimum x^* of a given function f, x^* ∈_x∈𝒳 f(x), without having access to the function itself other than through the queries. Typical black-box problems occur in engineering or hyperparameter optimization (HPO), where the quality of potential solutions is evaluated via numeric simulations or training machine learning models. Balancing exploration with exploitation is particularly challenging when we have a low number of available function evaluations in relation to the size of the search space 𝒳. A popular approach to address such settings is BO <cit.>, often promoted as sample-efficient for expensive black-box optimization. The main idea of BO is to use a probabilistic surrogate model (e.g., a Gaussian Process), iteratively refining an approximation of the problem landscape that guides the optimization process. BO starts with an initial design or DoE, obtained from sampling strategies, e.g., random sampling, low-discrepancy sequences such as Sobol', or Latin Hypercube design <cit.>. With these initial points, the surrogate model is built to approximate the unknown objective function and capture the uncertainty of the true function value on unobserved points. The AF (a.k.a. infill criterion) is a utility function to trade off exploration of underexplored areas and exploitation of presumably promising ones. The point with the highest acquisition function value is queried. Afterwards, the surrogate model is adjusted with the new observation, and the optimum is updated if the new point improves the target value of the best-so-far observation. These steps are repeated for a given overall optimization budget. Besides accurate probabilistic surrogate models and the type and size of initial design <cit.>, the exploration-exploitation trade-off is crucial for successful and efficient optimization. Since the landscape of the black-box optimization problem is unknown, it is a priori unclear which AF should be chosen for the optimization problem at hand. Even worse, since each problem has its unique landscape, we need different exploration-exploitation trade-offs <cit.>. Because there are different choices of AFs, e.g., PI <cit.>, EI <cit.>, UCB <cit.>, TS <cit.>, Entropy Search <cit.> and Knowledge Gradient <cit.>, selecting a suitable one for the problem at hand with insights on the landscape remains challenging. Furthermore, in the past, the choice of an AF has been considered static over the BO process. Prior works suggest that mixed AF-strategies <cit.> or even very simple schedules switching from EI to PI can improve anytime performance of BO; however, for each problem different schedules, incl. static ones, perform best <cit.>. Performance can be improved by selecting an AF-schedule with a meta-learned selector based on the ELA features <cit.> of the initial design which factors in the problem at hand  <cit.>. Nevertheless, this approach has its limitations. First, it requires a large and expensive initial design compared to the overall budget in order to compute the ELA features, and the ideal size of it is unknown <cit.>. 
Second, the selector is trained for a specific budget, and it is unclear how it transfers to other dimensions, optimization budgets, or initial designs. In this work, we instead aim for a self-adjusting yet simple approach to adapt the exploration-exploitation trade-off in a data-driven way throughout the optimization process. For this, we propose to adaptively set the weight of WEI <cit.> in an online parameter control fashion <cit.>. Depending on how we parametrize WEI, we can be more explorative, recover EI, or lean towards a modulated, exploitative PI. The crucial questions to answer here are [label=(*)] * When should we adjust ? and * How should we adjust ? We propose a new method, dubbed SAWEI. Inspired by a termination criterion for BO <cit.>, we adjust the weight whenever BO tends to converge, indicated by the UBR. We adjust opposite to the dominant search attitude, either towards exploration or exploitation. The key mechanism behind SAWEI is illustrated in <ref>. We demonstrate the effectiveness of our method SAWEI on the BBOB functions of the COCO benchmark <cit.> and on tabular benchmarks from HPOBench <cit.> against baselines of established AFs and previously proposed handcrafted AF-schedules for . § RELATED WORK One line of works directly focuses on improving AFs <cit.>. To overcome the fact that EI can sometimes be too exploitative, <cit.> uniformly sample one of the two most promising points instead of always choosing the most promising one according to EI. <cit.> offer efficient implementation of Monte-Carlo AFs (no closed-form solution available) as well as a one-shot formulation of the Knowledge Gradient. A different approach is to meta-learn a neural AF via Reinforcement Learning to achieve better sample-efficiency on downstream tasks <cit.>. A different line of work is concerned with combining different AFs, e.g., by building a portfolio of AFs (EI, PI, UCB with different hyperparameter settings) and then using an online multi-armed bandit strategy to assign probabilities of which AF to use at which step, called GP-Hedge or Portfolio Allocation <cit.>. Their work indicates that the performance of Portfolio Allocation highly varies with the number of arms and their respective hyperparameter settings. Similarly to Portfolio Allocation, <cit.> update weights of their portfolio (UCB, EI, TS <cit.>, TTEI <cit.>) in an online manner. They do not include PI as they observe it exhibits inferior performance compared to other single static AFs. In addition, robust versions of EI, PI, and UCB can be combined to a multi-objective AF combining the strengths of the individual ones <cit.>. In this work, we take a step back and ask ourselves what we could achieve by employing a simplistic approach of self-adjusting the exploration-exploitation trade-off of WEI. It has also been shown in other optimization-related areas that dynamic choices are beneficial in terms of performance, e.g., in evolutionary computation <cit.>, planning <cit.> and deep learning <cit.>. Recently, the introduction of DAC <cit.> underlines the potential of employing dynamic schedules (as opposed to selecting algorithm components on the fly, as is usually done in evolutionary computation <cit.>). Related to that, also setting the weight of WEI has been investigated. <cit.> propose to cycle through α∈{0.1, 0.3, 0.5, 0.7, 0.9 } to pulse from exploring to exploiting. This idea is based on the suggestion to cycle through global-local balances during the search <cit.>. 
However, this heuristic is oblivious to the current state of the search. Another line of work proposes to simply query WEI n times with n different values of α in parallel <cit.>, with the drawback of potentially uninformative function evaluations. The weights for the exploration and exploitation terms in WEI can also be set via rewards obtained by calculating the accuracy of the surrogate model <cit.>. However, the definition of the rewards is lacking, and their method needs to be reset from time to time for the case when the exploration term causes the same configuration to be proposed repeatedly. § SELF-ADJUSTING WEIGHTED EI In our method, SAWEI, we adaptively set the weight α ∈ [0,1] of WEI to steer the exploration-exploitation trade-off. WEI <cit.> is defined as: WEI(x; α) = α z(x) ŝ(x) Φ[z(x)]_exploitation-driven + (1 - α) ŝ(x) ϕ[z(x)]_exploration-driven with z(x) = (f_min - ŷ(x)) / ŝ(x), f_min being the lowest observed function value, ŷ(x) and ŝ(x) the predicted mean and standard deviation from the surrogate model, and ϕ and Φ being the PDF and CDF of a standard Gaussian distribution, respectively. The coefficient α weighs the exploitation and exploration terms. For example, α=0.5 recovers standard EI <cit.> and α=1 behaves similarly to PI(x) = Φ[z(x)] <cit.>. With α=0 we only utilize the exploration term, but this does not equal pure exploration or complete randomness. When To Adjust In order to set α adaptively, we need an indicator of the progress of the optimization. Recently, <cit.> proposed a termination criterion to stop BO for hyperparameter optimization: if the UBR falls under a certain threshold, they terminate. The UBR estimates the true regret at iteration k by: UBR(G_k; 𝒳) = r_k := min_x ∈ G_k UCB_k(x) - min_x ∈ 𝒳 LCB_k(x) with G_k being the history of all evaluated points, 𝒳 being the entire search space, and LCB and UCB being the lower and upper confidence bounds, e.g., UCB(x) = μ_t(x) + β_t σ_t(x) and LCB(x) = μ_t(x) - β_t σ_t(x), respectively. The first term of the UBR estimates the worst-case function value of the best observed point, a.k.a. the incumbent, and the second term is the lowest function value across the whole search space. This means the smaller the gap between both terms becomes, the closer we are to the asymptotic function value under the current settings of the optimizer. We empirically show in <ref> that the UBR indeed changes after we change the acquisition function during the optimization, supporting our intuition. The UBR does not directly operate on function values; instead, UCB and LCB are computed on the surrogate model. Rather than using the UBR to stop the optimization process, we use it as an indicator of when to adjust components, i.e., when to update the value of α. Our rule is: when the gradient of the UBR over the last n steps becomes close to 0, we adjust the exploration-exploitation attitude via α. The sensitivity to the gradient is controlled by our hyperparameter ϵ. How to Adjust The remaining question is how to adjust α, by how much, and in which direction. We propose a rather simple, yet effective, additive change by Δ_α. Our intuition is to set α opposite to the current search attitude, since the current search attitude led to convergence of the optimization. The term search attitude describes the current search behavior, i.e., whether the acquisition function is more explorative or more exploitative. We set Δ_α = 0.1 to allow for gradual changes. 
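For reference, WEI and its two summands can be computed directly from the surrogate's posterior mean and standard deviation. The small sketch below is our own illustration (using SciPy for the Gaussian pdf/cdf) and is not tied to the SMAC implementation; the unweighted terms it returns are the same quantities used below to determine the search attitude.

```python
import numpy as np
from scipy.stats import norm

def weighted_ei(mu, sigma, f_min, alpha):
    """Weighted Expected Improvement plus its unweighted terms.

    mu, sigma: posterior mean / std of the surrogate at the candidate point(s)
    f_min:     best observed function value so far
    alpha:     trade-off weight in [0, 1]; 0.5 recovers EI, 1.0 a PI-like term
    """
    z = (f_min - np.asarray(mu)) / np.asarray(sigma)
    exploit = z * sigma * norm.cdf(z)   # exploitation-driven term of WEI
    explore = sigma * norm.pdf(z)       # exploration-driven term of WEI
    return alpha * exploit + (1.0 - alpha) * explore, exploit, explore
```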
We determine the sign of Δ_α by the recent search attitude: depending on whether the exploration term a_explore or the exploitation term a_exploit of <ref> is larger for the last selected point x_next, the current search attitude was steered more towards exploring or exploiting, respectively. The terms are the summands of WEI and are defined as follows: a_explore(x_next) = ŝ(x_next) ϕ[z(x_next)] and a_exploit(x_next) = z(x_next) ŝ(x_next) Φ[z(x_next)]. We use a_exploit = Φ[z(x_next)], omitting the factor z(x_next) ŝ(x_next), which makes it equal to PI.[Empirically, both variants perform almost equivalently for BBOB but not for HPOBench, see <ref>. We conjecture that the original a_exploit is less exploitative than the original PI. Since we look for a strong (global) signal on how exploitative a point was, we opted for PI instead of the WEI term.] Please note that we only do this for determining the search attitude. Now, if the exploration term is bigger than the exploitation term, i.e., a_explore > a_exploit, the current search attitude is exploration. We inspect the attitude and adjust in the opposite direction, to provide a chance for more exploration or exploitation in contrast to the currently dominating attitude. SAWEI in a Nutshell We illustrate and summarize our method SAWEI in <ref> and in <ref>. Our goal is to adjust the exploration-exploitation trade-off based on the current search attitude whenever the UBR converges. SAWEI enhances the standard BO pipeline by calculating the UBR in each iteration and by tracking the search attitude via the exploration term and the exploitation term of WEI. First, we define and evaluate the initial design and train our surrogate model (Line 1). Then, as long as we have function evaluations left (Line 2), we query the acquisition function (here WEI) for the next point to be evaluated (Line 3). Meanwhile, we track the search attitude with the exploration and exploitation terms of WEI (Line 4, see <ref>). The function is evaluated as usual with the proposed point and we update our history and our surrogate model (Lines 5-7). Now we calculate the UBR, estimating the gap to the true regret based on the history of evaluated points and the search space (Line 8). We smooth the history of the UBR with a moving IQM (25-75 quartiles) with a window size of 7 (Lines 9-10). Based on this smoothed version, we check whether the UBR has converged, i.e., whether the gradient of the UBR is close to 0 (Line 11). In more detail, we signal time to adjust when the last absolute gradient is close to 0 with an absolute tolerance of ϵ times the last observed maximum of the absolute gradient. If this is the case, we adjust the weight α of WEI based on the search attitude (Line 12). The search attitude is calculated with the exploration and exploitation terms of WEI. § EXPERIMENTS In our experiments, we empirically evaluate our method SAWEI on different benchmarks and compare it to baselines from the literature and handcrafted ones. We benchmark the algorithms on the BBOB functions from the COCO problem suite <cit.> and on HPOBench <cit.>. Our implementations are built upon the BO tool SMAC3 (v2.0.0b1) <cit.>. We use a standard GP as configured in SMAC's BlackBoxFacade, and SMAC optimizes the acquisition function with a combination of local and random search, which also applies to minimizing the LCB in <ref> for calculating the UBR. We set β_t = 2 log (d t^2 / β), β = 1 for UCB/LCB as done in SMAC, following the original UCB <cit.>. The code is available at <https://github.com/automl/SAWEI>. 
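Stripped of the SMAC-specific bookkeeping, one iteration of the adjustment logic can be summarized as follows. This is a schematic restatement of the procedure above with our own function signature; the repository version differs in its smoothing and data structures.

```python
import numpy as np

def adjust_alpha(alpha, ubr_history, a_explore, a_exploit,
                 delta_alpha=0.1, eps=0.1):
    """One SAWEI step: if the UBR has flattened, push alpha against the
    currently dominant search attitude.

    ubr_history: smoothed UBR values observed so far (e.g. IQM-filtered)
    a_explore / a_exploit: attitude terms at the last proposed point
        (the exploitation side uses PI, as discussed above)
    """
    if len(ubr_history) < 2:
        return alpha
    grad = np.abs(np.gradient(np.asarray(ubr_history, dtype=float)))
    # "Converged" when the last absolute gradient is close to 0, with a
    # tolerance of eps times the largest absolute gradient seen so far.
    if np.isclose(grad[-1], 0.0, atol=eps * grad.max()):
        if a_explore > a_exploit:
            alpha = min(1.0, alpha + delta_alpha)   # exploration dominated: exploit more
        else:
            alpha = max(0.0, alpha - delta_alpha)   # exploitation dominated: explore more
    return alpha
```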
The exact setting for our method is ϵ=0.1 and adding or subtracting Δα=0.1. We set our convergence check horizon to n=1, i.e., we check whether the last gradient is close to 0. We validate our hand-crafted settings an ablation study in <ref>. Our evaluation protocol repeats the optimization 10 times with different random seeds and calculates the IQM across seeds to robustly estimate the regret per function. For each schedule, we then determine the rank for each of the 24 BBOB functions and compute the global rank across functions. For the rank table, we aggregate the ranks across the single tasks per schedule with the IQM. In the plots over optimization steps, we show the mean and 95 confidence interval across all the functions. BBOB For the 24 noiseless, synthetic BBOB functions <cit.> we set the dimensionality to 8, the budget of the initial design to 24 function evaluations (FEs), and the budget for the surrogate-based optimization to 256 FEs. We optimize the first three instances of each function. In BBOB, the instances are obtained by scaling, shifting, and rotating the base function (hence preserving the problem structure but changing the embedding). HPOBench We evaluate all methods on the tabular machine learning benchmarks from HPOBench <cit.>. To this end, we randomly selected eight tasks from the OpenML dataset <cit.> and optimize a Random Forest, MLP, SVM, Logistic Regression, and XGBoost. We allow an initial design of 15 FEs and a BO-based optimization budget of 100 FEs. For each FE, we average the metric over the five available seeds. Baselines We compare our data-driven, self-adjusting method SAWEI to [label=(*)] * the well-established best practice of simply using a single AF (EI, PI, and LCB) and * hand-designed schedules of α , see <ref>. We start with static schedules of α∈{0, 0.5, 1}, either more exploring, EI, or more exploiting. Further, we define a schedule from EI ((α=0.5)) to modulated PI ((α=1)), and vice versa, as a step function with 5 steps. In addition, we compare to hard switches from EI to PI <cit.> as well as the Gutmann-Sobester pulse cycling through α <cit.>. We also include Portfolio Allocation <cit.> and use their portfolio of nine acquisition functions consisting of different parametrizations of UCB, PI, and EI. §.§ Results BBOB Our method SAWEI ranks among the first based on final performance (cf. <ref>), which is very similar to dynamic baselines going from EI (α = 0.5) to the modulated PI (α = 1). One drawback of the hand-designed schedules is that the optimization budget needs to be defined beforehand, whereas our method is self-adjusting and is oblivious of the total budget. Surprisingly, the modulated PI is comparatively strong and performs better than EI, suggesting that the BBOB landscapes require a higher percentage of exploitation. SAWEI also exhibits a favorable anytime performance, making it a consistent and robust default choice, see <ref>. Schedules dominating SAWEI only do so for a portion of the optimization, hence they are not consistent. Confirming results from <cit.>, the effect of switching from EI to PI can be clearly seen as a boost in the ranks. On BBOB, the generally well-performing schedules involve PI which our method can easily mimic. SAWEI finds a suitable transition from exploring to exploiting per-run. In general, the tendency of the α-schedules traversed by SAWEI is moving from exploration to exploitation. Often, we can observe a decrease, a change to more exploration again, after some iterations. 
On one BBOB function, the multi-modal Schwefel function (F20) with weak global structure (<ref>), SAWEI manages to efficiently transition from EI (α = 0.5, a more explorative attitude) to the modulated PI (α = 1) with an exploitative attitude. At the end of the optimization, when the basin has already been discovered, SAWEI decreases α again towards exploration to explore the surroundings. We can also clearly observe the effect of the hand-designed switching schedules (EI → PI) in the sharp bends downwards in the log regret and upwards in the UBR, although SAWEI discovers a more suitable point to switch and can change its attitude again. On Katsuura, which is highly multi-modal and has weak global structure (<ref>), SAWEI increases α more slowly towards exploitation, presumably because of the highly rugged landscape, see <ref>. Also here, SAWEI discovers the boost from changing from exploration to exploitation. If we look closely, we can see that the UBR jumps up after the switch happens for the switching schedules (EI to PI), which is an indication of the adequacy of the UBR as a state descriptor. All schedule plots for each BBOB function, as well as the box plots of the final log regrets, can be found in <ref>. HPOBench On HPOBench we see that SAWEI also has a favorable anytime performance, see <ref>, and ranks among the first for the final log regret (<ref>). It is on par with Explore (α = 0), and they are directly followed by Portfolio Allocation and EI. The supremacy of the exploratory schedules is quite surprising, given the simplicity commonly attributed to response landscapes in HPO <cit.>. We will investigate this further in future work. With a closer look at the schedules, we see a general trend to start from EI (α = 0.5) and move towards Explore (α = 0), which is the complete opposite of the BBOB behavior. Boxplots of the final log regret and all plots with log regret, UBR, and α over time can be found in <ref>. Comparison of BBOB and HPOBench In summary, we observe that the optimal schedule and search behavior vary on two levels. First, for a given problem type, the optimal schedule varies across the single tasks. Second, the search behavior depends on the type of problem, i.e., whether we optimize synthetic functions in BBOB or find optimal hyperparameters for machine learning models in HPOBench. SAWEI mimics the strategy fitting the problem at hand best and exhibits the most favorable rank distribution across domains, see <ref>. BBOB in general requires more exploitation and HPOBench more exploration, which is prominently visible in two ways. First, PI performs better on BBOB than on HPOBench, and EI vice versa. Second, SAWEI's trajectories of α are contrary on BBOB and HPOBench (see <ref>) and thus adjust to the required search attitude. Ablation on BBOB We perform an ablation study to assess the sensitivity of our method to its hyperparameters. In particular, we vary Δα ∈ {0.05, 0.1, 0.25}, i.e., the amount added to or subtracted from our current weight α. In addition, we can track the attitude in different ways: either considering just the last step, or accumulating the terms since the last point where the best configuration (the incumbent) changed, or since the last adjustment happened. In the latter two cases, a_explore and a_exploit become sums. This choice defines the convergence check horizon n, which is varied during the run for the latter two options. Finally, we vary the sensitivity to the gradient of the UBR by the width of the tolerance band when compared to 0: ϵ ∈ {0.05, 0.1, 0.5, 1}. 
The bigger ϵ, the more often we adjust. We evaluate all 36 combinations on all 24 BBOB functions with 10 seeds and 1 instance in 8 dimensions and assess the hyperparameter importance with fANOVA <cit.>. We normalize the log regret for each BBOB function and use this as the performance metric. We show the marginals of each hyperparameter in <ref>. The sensitivity ϵ to the gradient has a slight tendency towards 0.05, but the overall differences are small; we argue that the exact timing of the signal to adjust is less important. In addition, performance is quite robust to the exact setting of the granularity Δα. In contrast, the attitude-tracking option tends to favor accumulating the exploration/exploitation terms since the last adjustment. It is likely that the importances change on other benchmarks, and our default of ϵ=0.1, tracking the attitude over the last step only, and Δα=0.1 proves to be a robust one. § LIMITATIONS AND FUTURE WORK Our method SAWEI introduces a slight overhead due to the need to optimize the LCB for computing the UBR in each iteration. Everything that follows, namely deciding whether and how to adjust α, is negligible in terms of computational cost. In our analysis, we did not experiment with the initial value of α, which may not be optimal for every tested function. Also, our method does not allow jumps or resetting of α, which could also be beneficial. In this context, defining α directly as a function of the exploration/exploitation terms of WEI could be a way to allow more flexibility. One limitation is that so far we have only combined EI and PI. Our approach can easily be extended to any linear combination of two acquisition functions. Moreover, we can combine SAWEI with DAC <cit.> to learn policies of α across instances and tasks. More generally, we strongly believe that meta-learning and self-adjustment should go hand in hand, another topic to be explored in future work. Building on the work by <cit.>, one could consider warmstarting SAWEI using meta-models utilizing ELA features <cit.>. Future work, and a current limitation, is the investigation of more domains, as the domains show large variations. Finally, we believe that other components of BO, such as the surrogate model, could also benefit from self-adjusting choices. § CONCLUSIONS Through a self-adjusting choice of the acquisition function in Bayesian Optimization, we aim to benefit from two main levers: (1) an automated identification of the AF best suited for the unknown task at hand (e.g., while PI performs better than EI on BBOB, it is the other way around for HPO problems), and (2) an adjustment to the different needs during the optimization process. Our method SAWEI uses the convergence of the UBR as a criterion for when to adjust its parametrized acquisition function. SAWEI achieves promising performance on two classic benchmark suites, BBOB and HPOBench, outperforming the static EI and PI AFs. It is hence able to achieve both goals, (1) and (2), listed above. Furthermore, it not only achieves good final ranks, but also exhibits a favorable anytime performance on both suites. As a side result of our study, we observe that the general trends in BBOB and HPOBench are orthogonal to each other: while SAWEI generally traverses from EI (exploration) to a modulated PI (exploitation) for BBOB, it moves from EI to even more exploration on HPOBench. This demonstrates the need for flexible, on-the-fly adjustment of BO components. 
Broader Impact Statement: After careful reflection, the authors have determined that this work presents no notable negative impacts on society or the environment, since it presents a foundational approach without any concrete application at hand. The authors gratefully acknowledge the computing time provided to them on the high-performance computers Noctua2 at the NHR Center PC2 under the project hpc-prf-intexml. These are funded by the Federal Ministry of Education and Research and the state governments participating on the basis of the resolutions of the GWK for the national high performance computing at universities (www.nhr-verein.de/unsere-partner). Carolin Benjamins and Marius Lindauer acknowledge funding by the German Research Foundation (DFG) under LI 2801/4-1. Elena Raponi acknowledges funding by the PRIME programme of the German Academic Exchange Service (DAAD) with funds from the German Federal Ministry of Education and Research (BMBF). § SUBMISSION CHECKLIST * For all authors… * Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? * Did you describe the limitations of your work? * Did you discuss any potential negative societal impacts of your work? * Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? <https://automl.cc/ethics-accessibility/> * If you are including theoretical results… * Did you state the full set of assumptions of all theoretical results? No theoretical results. * Did you include complete proofs of all theoretical results? * If you ran experiments… * Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., with explicit version), an instructive with installation, and execution commands (either in the supplemental material or as a url)? * Did you include the raw results of running the given instructions on the given code and data? We will upload the datasets upon acceptance. * Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? * Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? * Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? * Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? * Did you run ablation studies to assess the impact of different components of your approach? See main. * Did you use the same evaluation protocol for the methods being compared? * Did you compare performance over time? Rank over time * Did you perform multiple runs of your experiments and report random seeds? 10 seeds * Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? * Did you use tabular or surrogate benchmarks for in-depth evaluations? * Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? * Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a nas approach; and also hyperparameters of your own method)? 
* If you are using existing assets (e.g., code, data, models) or curating/releasing new assets… * If your work uses existing assets, did you cite the creators? * Did you mention the license of the assets? * Did you include any new assets either in the supplemental material or as a url? * Did you discuss whether and how consent was obtained from people whose data you're using/curating? * Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? * If you used crowdsourcing or conducted research with human subjects… * Did you include the full text of instructions given to participants and screenshots, if applicable? * Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? * Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? § HARDWARE AND RUNTIME All experiments are conducted on a CPU cluster with 990 nodes with AMD Milan 7763 CPUs. The compute time for the BBOB 8d functions was 45min each so 14040 = 585 in total on CPU (including ablation). The compute time for the HPOBench was 90 each so 288 = 12 in total on CPU. § SEARCH ATTITUDE We determine the search attitude based on the exploration-term of WEI and PI (Φ[z(_next)], <ref>). Originally we compared the exploration-term with the exploitation-term of WEI, the latter being a modified version of PI (z(_next) ŝ(_next) Φ[z(_next)]). We evaluate both versions on BBOB (all 24 functions, 8d, 3 instances, 10 seeds, like in main) and HPOBench (5 models on 8 tasks, 10 seeds, like in main). In Figure <ref> on BBOB, we see that the current version (SAWEI (ours)) achieves slightly lower log regret than the one using the modified PI term (SAWEI (modPI)) but otherwise the distributions seem very similar. On HPOBench, the log regret of SAWEI (modPI) is drastically worse than for SAWEI (ours). Please note that we denote the optimum log regret of log (0) by -10000. This can be explained by the traversed , see Figure <ref>. SAWEI (modPI) adjusts to exploitation where exploration is required. In addition, SAWEI (modPI) is not able to reduce again for BBOB. § UBR INTUITION The UBR can be used to stop BO <cit.>. This means the UBR signalizes whether it is worth to continue optimization. We add our intuition that this holds for the current optimizer settings. This is empirically supported by observing the UBR for the switching policies (EI to PI) where we see sharp bends in the UBR after switching, see <ref>. In our case "current setting" implicitly describes the search attitude whether it is exploring or exploiting. Therefore we can use the UBR to signal when we should change our search attitude. § BBOB RESULTS § HPOBENCH RESULTS
http://arxiv.org/abs/2306.08342v1
20230614081838
On Repeated Measurements of a Quantum Particle in a Harmonic Potential
[ "Filip Gampel", "Mariusz Gajda" ]
quant-ph
[ "quant-ph" ]
APS/123-QED Institute of Physics Polish Academy of Sciences Aleja Lotnikow 32/46, 02-668 Warszawa, Poland We study evolution of a quantum particle in a harmonic potential whose position and momentum are repeatedly monitored. A back-action of measuring devices is accounted for. Our model utilizes a generalized measurement corresponding to the Positive Operator-Valued Measure. We assume that upon measurement the particle's wavefunction is projected onto one of possible detector states depending on the observed result. We chose these post-measurement states to be moving Gaussian wavepackets. The Wave Function Quantum Monte-Carlo formalism is used to simulate single quantum trajectories of the particle. We show how classical trajectories emerge in course of observation and study in detail dispersion of position and momentum of the particle. On repeated measurements of a quantum particle in a harmonic potential Mariusz Gajda July 31, 2023 ====================================================================== § INTRODUCTION Position and momentum are fundamental quantities characterizing the dynamics of a classical particle. The time-dependent position of a particle is directly related to what an observer sees while monitoring its motion. The concept is thus very intuitive. According to classical mechanics, measurements in principle do not affect the system, and their precision can be arbitrarily high. In contrast, quantum mechanical measurements always somehow affect the system, and moreover, the relation between a wavefunction (or density operator) describing the state of a system to what is actually being observed is not so obvious. The first approach to resolve these issues is known as the Copenhagen interpretation <cit.>, which, until today, forms the basis for the textbook version of quantum mechanics. A central role is played by the Born rule, which gives probabilities of positive answers to yes/no questions related to measurement outcomes. When a measurement is completed, an answer is obtained and the wavefunction changes discontinuously, in accordance with the result and the von Neumann (and Lüders) postulate of wavepacket reduction <cit.>. P. Langevin expressed this rule in the introduction to the textbook `La theorie de l'observation en mecanique quantique' <cit.>) by F. London and E. Bauer, in the following words: `The wave function it [the quantum theory] uses to describe the object no longer depends solely on the object, as was the case in the classical representation, but, above all, states what the observer knows and what, in consequence, are his possibilities for predictions about the evolution of the object. For a given object, this function, consequently, is modified in accordance to the information possessed by the observer'. The Copenhagen interpretation gives a well defined prescription on how to use the theory in practice. However, it is not the only existing interpretation of concepts such as wavefunction and measurements. After decades, the issue of collapse, nonlocality and measurement still remains a subject of scientific discussion <cit.>. Iwo Białynicki-Birula and Zofia Białynicka-Birula (Z-IBB) identify in their textbook `Quantum electrodynamics'<cit.> the fundamental postulates of quantum theory pertaining to the relation between the density operator and measurement. The postulates are very formally combined into four axioms which can be, under some simplifications, summarized as follows: i) the elementary questions, i.e. 
the yes/no questions, are represented by projectors, P; ii) the state of a system is represented by a non-negative, self-adjoint, and trace-one density operator ρ; iii) the density operator determines probabilities p of affirmative answers to elementary questions in accordance with the Born rule p=Tr{Pρ}; iv) every dynamical variable A is represented by a self-adjoint operator A, and can be assigned a spectral family of projectors, E^(A)_λ, symbolizing questions whether the value of a dynamical variable A is not larger than λ. As mentioned by Z-IBB, `since the set of probabilities p is the only information in quantum theory available about the state of the system, from the operational point of view the concept of state of the system should be identified with the function p( P) defined on the set of all questions.' The collapse postulate is missing from the Z-IBB axioms. One might possibly find it in the statement quoted above, equating the state of the system to the function p( P). The answer to any question apparently modifies it. On the other hand, the issue might have seemed purely academic at the time, since realistic measurements in quantum mechanics were generally believed to be destructive. This excludes the possibility of repeated measurements on the same quantum system and limits the relevance of the collapse postulate. E. Schrödinger <cit.>, one of the founding fathers of quantum mechanics, wrote: 'We never experiment with just one electron, or atom, or (small) molecule, we sometimes assume that we do; this inevitably entails ridiculous consequences. In the first place, it is fair to say that we are not experimenting with single particles, any more than we can raise ichthyosauria in the zoo'. Nowadays, such measurements are not only theoretically considered, but also performed in labs. Modern variations of the Copenhagen interpretation, such as QBism – Quantum Bayesianism <cit.>, postulate that an agent (e.g. a physicist) observing a system abruptly modifies their knowledge (the set of probabilities) once a measurement outcome becomes available. The wavefunction expresses the individual agent's state of knowledge. `...There is no real state of a physical system. What one chooses to regard as the physical system and what state one chooses to assign to it depend on the judgment of the particular physicist who questions the system and who uses quantum mechanics to calculate the probabilities of the answers.', as stated by N. David Mermin <cit.>. According to QBists, since there is no objective wavefunction of the system, there is no collapse either. Other points of view assume that the wavefunction, the state of the system, has attributes of reality, being independent of an observer. The issue of an apparent collapse, disliked by many physicists, is resolved in various ways. Everett's many-worlds interpretation (MWI) is one such approach <cit.>. Non-local hidden variable theories, Bohmian mechanics being the best known example, are another possibility <cit.>. The MWI postulates that upon measurement, the system, which finds itself entangled with the measuring apparatus, does not collapse to some observed state, but rather that all components of the wavefunction associated with possible measurement outcomes continue to evolve according to the Schrödinger equation of the composite system. Because of the linearity of the Schrödinger equation these components do not interact and form separate `branches' or `worlds'.
One must accept uncountable copies of themselves and the world living different lifes. Bohmian mechanics introduces additional hidden variables, coordinates (e.g. particle positions) associated with a configuration of the system under consideration. The particles move guided by a `pilot wave', which is equivalent to the wavefunction of orthodox Quantum Mechanics (QM) and evolves according to the Schrodinger equation. It is thus the hidden variables which are actually observed in a measurement. Each of the varying interpretations of QM – of which we only mentioned a few – forces us to accept some non-intuitive, seemingly problematic postulate about reality. If none of them is found satisfactory, one must accept the view that collapse, an abrupt discontinuous change of the system, triggered by measurement, is a `real and wild' thing. None of the interpretations presented above may be falsified on grounds of present knowledge. The problems, at this stage of understanding, seem of a philosophical nature, their experimental verification elusive. However, the various interpretations may imply effects measurable in future experiments and lead to different generalizations of quantum theory. The first studies of repeated measurement of continuous variables can be found in works of Mensky <cit.> and Davies, <cit.>. Great experimental progress in cooling and trapping of single ions, opened many possibilities of repeated measurement of a single quantum system. The first spectacular example is observation of quantum jumps, i.e. dark periods in the fluorescence spectrum of an optically driven trapped ion <cit.>. The experiments fueled the interest in the theory of repeated quantum measurements. The proper description of a system under repeated measurements calls for the inclusion of information gained, a back-action of the meters, as part of the dynamics. Different methods were developed <cit.>. The theoretical approach utilizes an open system formalism. It is based on the Gorini-Sudarshan-Kossakowski-Lindblad (GKSL) equation for the density operator <cit.>. This allows for studying all statistical properties of the system. Instead of solving directly the GKSL equation, for different reasons it may be preferable to look for single realizations of a wavefunction dynamics. Obviously such individual trajectories are stochastic in nature. Averaged over many realizations, they provide a description equivalent to the time dependent density operator. The general theoretical framework governing wavefunction dynamics of this kind involves the introduction of the so-called stochastic Schrödinger equation (SSE) <cit.>. It should be noted that the choice of an SSE is not unique and in general there are many realizations ('unravelings') corresponding to one GKSL equation. In fact, the formalism of SSE need not to be invoked at all for the construction of concrete numerical schemes generating the stochastical trajectories. One notable example <cit.> is known as the Wave Function Quantum Monte-Carlo (WFQMC) method. This formalism is often used by atomic physicists since it allows easily to generate sequences of events mimicking experiments with atoms and photons. In this paper we use WFQMC to analyse statistical characteristics of trajectories determined by simultaneous repeated measurement of position and momentum of a quantum particle. First, we specify our model, we define jump operators and introduce the WFQMC approach. 
Then we present exemplary trajectories and discuss the time dependence of dispersion of position and momentum for different choices of detection parameters. Conclusions are presented in the final section. § MONTE CARLO DYNAMICS OF A WAVEFUNCTION We study a phase-space trajectory of a quantum particle, continuously monitored by an array of detectors. Here we use the theoretical model introduced by us in <cit.>. We assume that every measurement provides a value of position and momentum of the particle at this instant. A sequence of such readouts gives a phase-space trajectory. Each simultaneous measurement of position and momentum satisfies Heisenberg's uncertainty principle. We apply an open system formalism: our system is a quantum particle described by the Hamiltonian H_0, while the detectors form a reservoir. We assume that the reservoir has no memory. The problem of simultaneous measurement of position and momentum was for the first time considered by E. Arthurs and J.L. Kelly <cit.>. The recent studies of A.J. Scott and G.J. Milbourn <cit.> assumed a different detection model than studied here. They assumed the von Neumann type of coupling between a particle and a meter, and used a formalism based on an Ito stochastic Schrödinger equation <cit.>. The main difference is, thus, in the form of the jump operators assumed here. The effect of coupling of the system to the reservoir of detectors is described by the `jump operators' C_i,j specified in the following part of the paper. The general form of a completely positive and trace preserving map which describes time-homogeneous dynamics of the density operator ρ of a system coupled to the Markovian reservoir via operators C_i,j is given by the Gorini-Kossakowski-Sudarshan-Lindblad equation: <cit.>: ρ̇ = i [ ρ, H_0 ] + ℒ_relax (ρ), where H_0 is the self-adjoint Hamiltonian of the system and ℒ_relax is a relaxation operator of the Lindblad form, accounting for an effect of the environment: ℒ_relax (ρ) = - 1/2∑_α(C_i,j^† C_i,jρ + ρ C_i,j^† C_i,j) + ∑_α C_i,jρ C_i,j^†. We chose C_i,j to be proportional to projectors onto detector's states |α_i,j⟩: C_i,j= √(γ) |α_i,j⟩⟨α_i,j|, where γ gives the characteristic clicking rate (probability per unit time) and |α_i,j⟩ are complex Gaussian wavepackets, which in position representations have the form: ⟨ x|α_i,j⟩ = 1/(2 πσ^2)^1/4 e^-(x-x_i)^2/4 σ^2 e^i k_j x. Spatial points x_i and momenta ħ k_j define positions of the detectors in a phase-space. These locations are a matter of choice. Here we assume that they form a rectangular lattice with spacing d_x and d_p respectively. In what follows we will use the index α as a shortcut notation for two indices, α≡ (i,j) and C_α≡ C_i,j. The operators C_α are responsible for a reduction of the particle's wavefunction, a jump, caused by the interaction with the reservoir. C_α projects onto non-orthogonal states, thus C_α C_β≠ 0 for α≠β. Therefore, the measurement we defined does not belong to the class of a Projective-Valued Measure (PVM). This is in accordance with the modern formulation of a measurement process, which extends the concept of measurements to account for real observations, whose results also depend on the characteristics of the measuring apparatus and procedure. For details on this Positive Operator-Valued Measure (POVM) see Ref. <cit.>. Projectors are substituted by an arbitrary number of positive operators, the effects E_i, whose sum gives identity ∑_i E_i = I <cit.>. 
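The detector states just defined are simple enough to write down numerically. The following minimal sketch (an illustration, not the authors' code) builds the Gaussian wavepackets ⟨x|α_i,j⟩ of the expression above on a position grid; the grid extent, the lattice spacing and the number of detectors are assumptions made only for the example.

```python
# Illustrative sketch (not the authors' code): Gaussian detector states
# <x|alpha_ij> = (2*pi*sigma^2)^(-1/4) * exp(-(x - x_i)^2 / (4*sigma^2)) * exp(i*k_j*x)
# sampled on a position grid; grid extent and detector counts are assumptions.
import numpy as np

sigma = np.sqrt(0.5)                    # detector width (harmonic-oscillator units)
d = 2.16                                # phase-space lattice spacing d_x = d_p = d
x = np.linspace(-40.0, 40.0, 4096)      # position grid
x_centers = d * np.arange(-15, 16)      # detector positions x_i
k_centers = d * np.arange(-15, 16)      # detector momenta  k_j

def detector_state(x_i, k_j):
    """Position-space wavefunction of the detector state |alpha_ij>."""
    envelope = (2.0 * np.pi * sigma**2) ** (-0.25) * np.exp(-(x - x_i) ** 2 / (4.0 * sigma**2))
    return envelope * np.exp(1j * k_j * x)

# All detector states stacked into an array of shape (n_detectors, n_grid_points).
alphas = np.array([detector_state(xi, kj) for xi in x_centers for kj in k_centers])
```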
In the case studied here, the effects are related to a jump within a time interval dt caused by E_α = dt C^†_α C_α, or alternatively a no-jump event, E_0=1-∑_α E_α. To ensure that all effect operators are positive, the time step dt must be sufficiently small. We take care of this fact. Instead of solving the GKSL equation, in the following we use one of its possible unravellings, the Quantum Monte Carlo Wave Function method <cit.>. The idea of the approach is to generate an ensemble of individual trajectories. Each one can be viewed as a single, possible realization of the dynamics of the wavefunction. Averaging over many such trajectories yields the time dependence of the density operator in accordance with the GKSL equation: ρ(t) = |ψ(t)⟩⟨ψ(t)|, where the right-hand side is understood as the average over the ensemble of trajectories. The WFQMC method simulates a stochastic evolution in which for each time step the quantity |ϕ'(t+δ t)⟩ is calculated by evolving the state for an infinitesimal time δ t with the non-unitary Hamiltonian: H=H_0-i/2∑_α C^†_α C_α. One of two possibilities is then selected: a jump or a no-jump event. A jump to the state |α⟩ is selected with the probability: δ p_α = δ t ⟨ϕ' (t) | C^†_α C_α | ϕ' (t) ⟩ = γδ t |⟨α | ϕ' (t) ⟩|^2. The time step δ t has to be sufficiently small to ensure that ∑_αδ p_α is smaller than one. If the jump takes place, the particle's wavefunction changes discontinuously: |ϕ (t+ δ t) ⟩ = |α⟩. The probability of no jump is equal to: P_0 = 1 -∑_αδ p_α. If the `no jump' event takes place, the state is essentially replaced with |ϕ'(t+δ t)⟩. However, since the Hamiltonian (<ref>) does not preserve the norm, it is first normalized: |ϕ (t+ δ t) ⟩ = (1-i H δ t ) |ϕ (t) ⟩/||(1-i H δ t ) |ϕ (t) ⟩||. The evolution of the wavepacket thus corresponds to a random sequence of jump and no-jump events. In our approach, every jump is interpreted as an act of measurement. Projection onto states associated with detectors is reminiscent of the reduction of a wavepacket. The non-unitary evolution accounts for the Hamiltonian dynamics of the particle as well as for the interaction with the detectors. The hermitian part H_0 is the sum of kinetic and potential energy, H_0=p^2/2m+V(x). Interaction with the detectors is represented by the non-hermitian term, -i/2∑_α C^†_α C_α. This term causes a kind of `accumulation' of the wavefunction around the detector positions in phase space <cit.>. In each timestep δ t every detector contributes to the particle wavefunction ϕ an amount proportional to -1/2γδ t⟨ x|α⟩⟨α|ϕ⟩. Our choice of jump operators C_α fulfills a number of basic assumptions about a sensible detector of position and momentum. First of all, a meter of position should click if the probability of finding the particle in its neighbourhood is large. In our case this probability is proportional to the squared overlap of the wavefunction with the state associated with the detector. Once the meter `fires', the particle wavefunction should be reduced according to the information gained, so the post-measurement state is localized around the position of the detector. Choosing the detector states to be Gaussians stands to reason. The width σ_x gives the precision of the measurement. We would like our detectors to be `gentle' to the objects under measurement. By this we do not mean a weak measurement, but rather that the particle velocity, assumed to be proportional to the probability density current, should not be significantly affected by the detection.
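The jump/no-jump prescription above translates almost line by line into code. The sketch below performs a single WFQMC time step for the harmonic oscillator; it is an illustration under simplifying assumptions (grid discretization, FFT kinetic term), not the authors' implementation, and it reuses alphas, x and sigma from the previous snippet.

```python
# Illustrative single WFQMC step (not the authors' code); uses `alphas`, `x`,
# `sigma` from the previous snippet. Units: hbar = m = omega = 1.
import numpy as np

dx = x[1] - x[0]
k_grid = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
gamma, dt = 1.0, 1e-3
rng = np.random.default_rng(0)

def overlaps(phi):
    """<alpha_a | phi> for every detector a (simple grid quadrature)."""
    return (alphas.conj() * phi).sum(axis=1) * dx

def apply_H0(phi):
    """H_0 = p^2/2 + x^2/2 acting on phi, kinetic term via FFT."""
    kinetic = np.fft.ifft(0.5 * k_grid**2 * np.fft.fft(phi))
    return kinetic + 0.5 * x**2 * phi

def wfqmc_step(phi):
    """One time step: returns (new state, index of clicked detector or None)."""
    ov = overlaps(phi)
    p_jump = gamma * dt * np.abs(ov) ** 2            # delta p_alpha for each detector
    if rng.random() < p_jump.sum():                  # a jump (detector click) occurs
        idx = rng.choice(p_jump.size, p=p_jump / p_jump.sum())
        return alphas[idx].copy(), idx               # collapse onto the detector state
    # no jump: evolve with H = H_0 - (i/2) sum_a C_a^dag C_a, then renormalize
    sum_proj = gamma * (alphas.T @ (alphas.conj() @ phi * dx))   # sum_a |a><a|phi>
    phi_new = phi - 1j * dt * apply_H0(phi) - 0.5 * dt * sum_proj
    norm = np.sqrt((np.abs(phi_new) ** 2).sum() * dx)
    return phi_new / norm, None
```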
The post-measurement state of the particle should preserve some information about its pre-measurement momentum at the detection point. To this end we equip the detectors at every spatial location with a variety of kinetic momenta by assigning to every Gaussian spatial profile plane waves of momenta ħ k_n. The momenta can take various values as discussed above. The probability of clicking is thus maximal, if both the position and momentum of the particle fits one of the detector states. This discussion is obvious if one considers the detector's wavefunction not in position but in momentum space. The Fourier transform of a detector state (Eq.(<ref>)) is: ⟨ k|α_mn⟩ = (2 σ^2/π)^1/4 e^-σ^2 (k-k_n)^2 + i x_m(k- k_n), a Gaussian superposition of plane-waves of momenta centered around k_n. The detector is very sensitive to wavefunctions whose local velocity at x_m is close to k_n. § STATISTICAL CHARACTERIZATION OF PARTICLE'S TRAJECTORIES In our work we study `trajectories' of a particle, resulting from the detection process, i.e. sequences of position and momentum measurements of the particle in a harmonic potential V=1/2 m ω^2 x^2. We use harmonic oscillator units, i.e as unit of length a_ho=√(ħ/mω), unit of momentum q_0=ħ/a_ho, time τ_0=1/ω, and energy ε_0=ħω. From now on all quantities are expressed in these units. The hermitian Hamiltonian has the form: H_0 = 1/2 x^2 + 1/2 p^2, placing position and momentum on equal footing. We assume that detectors are characterized by a spatial width σ=√(1/2). Selecting this value ensures that the detectors formally have the same width in momentum space. Moreover, as the detectors project into coherent states, the post-measurement uncertainty in position and momentum is minimal according to Heisenberg's principle. Similarly to the hermitian part of the Hamiltonian, the coupling to detectors is symmetric, and position and momentum are on an equal footing. We choose the same numerical value for detector spacing d_x=d_p=d. In our calculations, we impose the initial wavefunction of the particle to be identical to one of the Gaussian detector states Eq.(<ref>), centered at (x_0=nd, p_0=0). Here n is a natural number, chosen such that x_0 is close to 20, so that for different grid densities d we get comparable initial conditions. The particle thus starts with zero velocity at some distance from the potential's minimum. This distance defines the classical amplitude of the harmonic oscillation, and comprises several other detectors (n ≫ 1), so that the subsequent motion can be monitored with sufficient resolution. In our numerical experiment, we simulate a large number of trajectories, where by trajectory we mean a time series of detection events, `clicks' of meters at phase-space locations (x_i,p_i) at instants t_i. An example of a single realization of the measurement experiment is shown in Fig. <ref>. The particle follows a circular orbit in phase-space, as would be expected for a classical particle. Some random departures from this orbit are clearly visible. Moreover, the radius of the orbit grows slowly in time, i.e. the energy of the observed particle increases. Individual trajectories, resulting from the stochastic process, differ one from another. Their statistical properties are the main objects of our interest. First, we analyze the average phase space trajectory (⟨ x(t)⟩,⟨ p(t) ⟩). Using the WFQMC formalism we generate 5000 trajectories for each choice of parameters. 
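Continuing the sketch, a single monitored trajectory with the initial condition described above (a detector wavepacket at x_0 ≈ 20, p_0 = 0) can be simulated by repeatedly applying the step function. This driver is purely illustrative, and the ensemble size here is much smaller than the 5000 trajectories used in the paper.

```python
# Illustrative trajectory driver (not the authors' code); relies on detector_state,
# wfqmc_step, d, dt, x_centers, k_centers from the previous snippets.
import numpy as np

det_grid = [(xi, kj) for xi in x_centers for kj in k_centers]  # same order as `alphas`
n0 = int(round(20.0 / d))                                      # lattice site closest to x0 ~ 20

def run_trajectory(t_max=20.0):
    state = detector_state(n0 * d, 0.0)           # initial state: detector wavepacket, p0 = 0
    clicks, t = [], 0.0
    while t < t_max:
        state, clicked = wfqmc_step(state)
        t += dt
        if clicked is not None:
            clicks.append((t, *det_grid[clicked]))  # record (time, x_i, p_j) of the click
    return np.array(clicks)

# Small ensemble for illustration (the paper uses 5000 trajectories per setting).
ensemble = [run_trajectory() for _ in range(200)]
```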
The detection events are random and discrete points in time, so to get a mean trajectory we introduce coarse-grained time by dividing the timeline into small intervals, [t, t+δ t], where δ t= 0.1, and calculate the mean position and momentum for all clicks from the ensemble falling into the interval. Mean trajectories, both in position and momentum space, show that on average, the particle follows a classical path (cf. Fig. <ref>). Position as well as momentum oscillate with the harmonic oscillator frequency, and are phase-shifted by π/2. Deviations of a single realization from the average trajectory are characterized by the second moment of the click distribution, i.e. the dispersion δ^2 x(t) =⟨ x(t)^2 ⟩ -⟨ x(t)⟩^2 and δ^2 p(t) = ⟨ p(t)^2⟩ -⟨ p(t)⟩^2. Because of the symmetry of the Hamiltonian Eq.(<ref>) and Eq.(<ref>), the dispersion in position and momentum should be equivalent δ^2x=δ^2p. Simulations essentially confirm these expectations, which is why in Figure <ref>, we only plot the dispersion function δ (t) ≡√(δ^2 x). The dispersion functions of position and momentum actually differ by a small modulation due to the π/2 phase shift of position and momentum of the particle. This will be discussed later on in this section. The time dependence, fitted to the numerical results, is found to be δ^2 (t) ≈ D t+δ_0^2, where D is a diffusion coefficient and δ_0^2 the initial dispersion, independent of γ. δ_0^2 is a result of the initial wavepacket having finite width even when identical to a detector wavefunction. In other words, because of lack of orthogonality, immediately after localization at a detector at position (x_0,p_0)=(j_0d,k_0d), the particle may be captured by a different detector (x,p)=(jd,kd). In our model the probability distribution of a subsequent click of a detector α_j,k, under the condition that such a click occurs within a short time from the first one, is approximately equal to the discretized Husimi function Q(j,k) of the initial state <cit.>: Q(j,k) =| ⟨α_j,k| α_j_0,k_0⟩|^2 = e^-d^2((j-j_0)^2+(k-k_0)^2)/2/∑_j,k| ⟨α_j,k| α_j_0,k_0⟩|^2 . According to the discussion above, the dispersion squared of the initial spatial position of the monitored particle is δ_0^2=∑_j,kQ(j,k) (jd)^2. If d ≪ 1, summation can be substituted by integration, which yields δ^2_0 ≈ 1. In the case of numerical results shown in Fig.(<ref>), this condition is not satisfied (d=2.16), however we surprisingly find that this continuous approximation still works quite well. For large times, the initial dispersion can be neglected, and Eq. (<ref>) indicates that on the top of the harmonic oscillation the particle undergoes Brownian motion. Deviations from the mean trajectory grow as the square root of time, suggesting a diffusion process characterized by the coefficient D. Moreover from dimensional analysis it seems that dynamical quantities such as δ^2(t) should depend on the dimensionless parameter γ t. Indeed, detailed studies confirm this prediction (see Fig. <ref>, upper panel). This implies that the diffusion coefficient D grows linearly with γ, which is plausible since this implies more frequent detection of the particle. Similarly, the denser the detector grid, the more detectors there are monitoring the particle, which again leads to a higher detection frequency and larger perturbations of the classical trajectory. In the lower panel of Fig. <ref> we show the dependence of the diffusion coefficient on the detector spacing d for a fixed value of γ=1.0. 
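The coarse-graining and the linear fit δ²(t) ≈ Dt + δ₀² described above can be reproduced from the simulated click ensemble, for instance as in the sketch below; the bin width and the per-bin click threshold are assumptions for the illustration.

```python
# Illustrative estimate of the diffusion coefficient from the click ensemble
# produced above: bin clicks in time, compute the position dispersion per bin,
# and fit delta^2(t) = D*t + delta0^2.
import numpy as np

bin_width = 0.1
all_clicks = np.concatenate(ensemble)                     # columns: (t, x, p)
t_edges = np.arange(0.0, all_clicks[:, 0].max(), bin_width)
bin_idx = np.digitize(all_clicks[:, 0], t_edges)

t_mid, var_x = [], []
for b in range(1, len(t_edges)):
    xs = all_clicks[bin_idx == b, 1]
    if xs.size > 10:                                      # require enough clicks per bin
        t_mid.append(t_edges[b] - 0.5 * bin_width)
        var_x.append(xs.var())                            # dispersion about the bin mean

D_fit, delta0_sq = np.polyfit(t_mid, var_x, 1)            # slope ~ D, intercept ~ delta0^2
print(f"D ~ {D_fit:.3f}   (compare 2*pi*gamma/d^2 = {2 * np.pi * gamma / d**2:.3f})")
```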
Results clearly show that D is inversely proportional to the squared detector spacing. Our numerical experiment allows to postulate the following dependence of the diffusion coefficient on the parameters of the observation process: D ≈ 2πγ/d^2. The analytical formula Eq.(<ref>) shows very good agreement with numerical calculations. This formula may also be confirmed by approximate analytical considerations. The diffusion coefficient is related to the squared mean displacement of the walking particle per unit of time, i.e.: D=γ∑_j,ke^-d^2(j^2+k^2)/2(dj)^2. Using the continuum approximation, jd=x, kd=p, and ∑_j,k→1/d^2∫ dx dp, the diffusion coefficient is equal to: D ≈γ/d^2∫ dx dp x^2 e^-(x^2+p^2)/2 = 2πγ/d^2. We thus recovered Eq. (<ref>), which was obtained by fitting to numerical data. A more careful analysis indicates that in addition to the Brownian diffusion characterized by linear growth of the dispersion δ^2(t), there are small-amplitude oscillations with frequency 2 ω. These oscillations can be explained assuming small dephasings of individual trajectories x = x_0 cos(t+δφ) with respect to the average x = x_0 cos(t). The dephasing gives an oscillatory contribution to the dispersion ⟨ x^2 ⟩ - ⟨ x ⟩^2 ≈δφ^2 sin^2t. A similar oscillatory character of dispersion of position and momentum was observed in <cit.>, where phase space dynamics of a continuously monitored particle in an anharmonic potential is studied. In this work however the dispersion is bounded, contrary to the present result. This is because the authors of <cit.> study the limit of very frequent and very weak measurements, whereas the present work treats a series of strong measurements at discrete points in time. Each measurement is performed at the `Heisenberg limit', i.e. it minimizes the uncertainty relation: σ_x σ_p = 1/2 Such measurement necessarily introduce growing fluctuations. Our studies indicate that dispersion of trajectories is model/system sensitive. This fact was also noticed by us in <cit.>, where different types of diffusion were found for alternative POVM's of measurement operators. Fluctuations of position and momentum of the particle lead to increasing its energy. It is because of this that when we observe a sample trajectory in phase space, it tends to be a circular motion spiralling outwards (see Fig. <ref>). The radius of the circle in phase space increases with time, r(t)= √(2⟨ E(t) ⟩). It follows directly from Eq. (<ref>) that the average energy of the particle, ⟨ E ⟩ = 1/2⟨ x^2 ⟩ + 1/2⟨ p^2 ⟩, grows linearly with time: ⟨ E(t) ⟩ = δ^2 (t) + E_0 = Dt+(δ_0^2+E_0), where E_0=1/2(x_0^2+p_0^2) is the initial energy of a classical particle at initial position x_0 with initial momentum p_0. By dividing the energy scale into small intervals Δ E we can obtain the energy distribution p_E(t) of the ensemble of trajectories as a function of time. This distribution around t=0, as obtained from our simulations, is shown in Fig.<ref>. It is a relatively narrow function centered around E_0. Again, as in the case of position dispersion, the initial distribution of energy can be approximately obtained from analytic calculations. As previously we use the continuum approximation: j(j_0)d → x(x_0), k(k_0)d → p(p_0), and Q_i,j→ P(x,p) = 1/(2 π) e^-(x-x_0)^2/2e^-(p-p_0)^2/2. If initially the particle is placed at phase-space location (x_0,p_0) then the initial energy distribution is: p_E = ∫ dx dp P(x,p) δ(E-1/2(x^2+p^2)). 
Using that 2 E_0=x^2_0 + p^2_0 we get: p_E=e^-(E+E_0)I_0(√(2E_0)√(2E)), where I_0(z) is the modified Bessel function of the first kind. The energy distribution as given by Eq.(<ref>) is plotted in the upper panel of Fig. <ref>. Again, the continuous approximation works quite well even for parameters which do not fully justify the usage of the formula. We stress that to get the energy histogram we accumulated data from the time interval 0<t<2π, so strictly speaking the histogram does not give the energy distribution exactly at t=0, but a distribution averaged over the first period of the oscillation. For large times t, this initial energy distribution evolves into a thermal one: p_E(t) = 1/ϵ(t) e^-E/ϵ(t). The width and mean of this distribution, ϵ(t), depend on time. Setting ϵ = kT allows us to formally define a temperature for the system, identifying the repeated measurement process with a type of `heating'. The distribution p_E in the thermal regime is shown in the lower panel of Fig. <ref>. The temperature of the ensemble grows with time, and for large times it becomes kT(t)=⟨ E ⟩ = δ^2(t) ≈ Dt. This analytical prediction again agrees well with numerical results. In summary, we studied a quantum particle in an external harmonic potential, repeatedly monitored by an array of detectors regularly distributed in phase space. We employed an open-system formalism, treating the detectors as an external reservoir. Coupling of the particle to the meters is given by jump operators whose action is to project the particle's wavefunction onto coherent states characterizing the detectors. We use the Wave Function Quantum Monte-Carlo method to generate ensembles of time-dependent wavefunctions. We interpret every generated wavefunction as a single realization of the particle's dynamics, which in addition to continuous evolution experiences quantum jumps related to observations. We show that on average the trajectories follow a classical path. This result is similar to the one in <cit.>, where a von Neumann type of coupling between the system – a nonlinear oscillator – and meters was considered. The random quantum jumps in position and momentum space introduce fluctuations on top of the harmonic motion. We have shown that these fluctuations have the character of Brownian motion, a diffusive process with the dispersion of position and momentum growing linearly with time. We numerically found the diffusion coefficient and its dependence on the detector clicking rate γ and detector spacing d. Going back to dimensional units, we see that the diffusion coefficient D_x in position space is proportional to the Planck constant: D_x = 4 πγħσ^2/(d_x d_p), signifying the quantum character of this process. Again, this is due to the fact that our measurements are performed at the limit set by the Heisenberg uncertainty relation (cf. Eq. (<ref>)). The product d_x d_p is an action equal to the area of an elementary cell in phase space, determined by the detector spacing. Finally, we found that the repeated observation introduces heating of the particle: the energy distribution of the trajectory ensemble at large times becomes thermal and the effective temperature grows linearly in time.
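As a side check of the diffusion coefficient quoted above, the agreement between the discrete sum for D and its continuum limit 2πγ/d² is easy to verify numerically; the sketch below (illustrative only) evaluates both for a few spacings, including the d = 2.16 used in the simulations.

```python
# Quick numerical check (illustrative) of
# D = gamma * sum_{j,k} exp(-d^2 (j^2 + k^2) / 2) * (d*j)^2
# against the continuum estimate 2*pi*gamma/d^2.
import numpy as np

gamma = 1.0
for spacing in (0.5, 1.0, 2.16):
    j = np.arange(-200, 201)
    J, K = np.meshgrid(j, j, indexing="ij")
    D_sum = gamma * np.sum(np.exp(-spacing**2 * (J**2 + K**2) / 2.0) * (spacing * J) ** 2)
    print(f"d = {spacing:4.2f}:  discrete sum = {D_sum:.3f},  "
          f"2*pi*gamma/d^2 = {2 * np.pi * gamma / spacing**2:.3f}")
```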
Our studies of the system under continuous monitoring and comparison to similar studies <cit.>, show that observed mean trajectories correspond to the clasical ones, however deviation from the mean, the dispersion, significantly depends on the system studied and details of detection process, in particular on a choice of the Positive Operator-Valued Measure. We do not know whether the particular measurement schemes considered here can ever be realized in practice. However, the model we formulate is fully admissible in view of the present understanding of quantum measurement theory. As such it is legitimate to study its consequences. Paraphrasing the words of prof. Iwo Białynicki <cit.>: As to the usefulness of our results, we have no opinion at all. Perhaps someone else could see whether they are good for anything. Acknowledgements The paper is dedicated to Professor Iwo Białynicki-Birula on the occasion of his 90th birthday. Lectures on Quantum Mechanics, given by the Professor at the Physics Department of Warsaw University in the fall semester of 1976, played a very important role in the scientific development of one of us (MG). Moreover, MG is especially grateful to the Professor for his particular care which MG experienced during his professional life. The Professor and his wife, Zofia, were always eager to offer their friendly help at moments of important decisions. We thank Magdalena Załuska-Kotur for illuminating us on the methods of determination of a diffusion coefficient and for pointing to us that 5/2≈√(2π). This work was supported by the Polish National Science Centre grant No 2019/32/Z/ST2/00016, through the project MAQS under QuantERA, which has received funding from the European Union’s Horizon 2020 research and innovation program under Grant Agreement No 731473. 99 Born1927M. Born, Physical Aspects of Quantum Mechanics, Nature 119, 354 (1927). Heisenberg1930W. Heisenberg, Die Physikalischen Prinzipien der Quantentheorie, S. Hirzel, Leipzig 1930.; W. Heisenberg, The physical principles of the quantum theory. Translators C. Eckart, F.C. Hoyt, Dover Publications, Inc., 1949, doi:10.1007/BF01699141. vonNeumann1932 von Neumann, J., 1932, Mathematische Grundlagen der Quantenmechanik (Springer, Berlin); reprinted 1981; English translation by R. T. Beyer, 1955: Mathematical Foundations of Quantum Mechanics (Princeton University, Princeton, NJ). Peres Peres, Asher (2002). "Popper's experiment and the Copenhagen interpretation". Studies in History and Philosophy of Modern Physics. 33: 23. arXiv:quant-ph/9910078, doi:10.1016/S1355-2198(01)00034-X. Luders51 G. Lüders, Uber die Zustandsanderung durch den Messprozess, Ann. Phys. (Leipzig) 8, 322 (1951). London39F. London and E. Bauer, La theorie de I'observation en mecanique quantique (Hermann, Paris, 1939) [English translation in Quantum Theory and Measurement, edited by J. A. Wheeler and H. Zurek (Princeton University, Princeton, New Jersey, 1983), p. 217]. Everett57 H. Everett, On the foundations of quantum mechanics, Mudd Manuscript Library - Remote Storage (ReCAP), PRIN 685.1957.17 (1957). Bohm D. Bohm, Phys. Rev. 85, 166 (1952), 85, 180 (1952). Wigner63E. P. Wigner, The problem of measurement, Am. J. Phys. 31, 6 (1963). Kraus83K. Kraus, States, Effects, and Operations: Fundamental Notions of Quantum Theory (Springer, Berlin, 1983). Ludwig83G. Ludwig, Foundations of Quantum Mechanics I and II (Springer, Berlin, 1983). Holevo01 A.S. 
Holevo, Statistical Structure of Quantum Theory, Lecture Notes in Physics Monographs 67 (Springer, Berlin, 2001). Zurek2003 W. Hubert Zurek: Decoherence, einselection, and the quantum origins of the classical, Rev. Mod. Phys., Vol. 75, No. 3, (2003). Wiseman10 H.M. Wiseman, G.J. Milburn, Quantum Measurement and Control (Cambridge, Cambridge University Press, 2010). IBB I. Białynicki-Birula and Z. Białynicka-Birula, Quantum electrodynamics, Elsevier Ltd. 1975, ISBN 978-0-08-017188-3. Schrodinger E. Schrödinger, Are there quantum jumps? Part II - PhilPapers, British Journal for the Philosophy of Science 3 (11):233-242 (1952). Caves02 Caves, C. M., Fuchs, C. A. Schack, R. Phys. Rev. A65, 022305 (2002). Fuchs10 Fuchs, C. A. Preprint at http://arxiv.org/abs/1003.5209 (2010). Fuchs13Fuchs, C. A., Schack, R. Rev. Mod. Phys.85,1693–1715 (2013). Mermin14Mermin, N. David . "Physics: QBism puts the scientist back into science". Nature. 507 (7493): 421–423, (2014), doi:10.1038/507421a. Mermin2022 N. Davis Mermin, There is no quantum measurement problem, Physics Today 75, 6, 62 (2022); doi: 10.1063/PT.3.5027. Mensky79 M. B. Mensky, Quantum restrictions for continuous observation of an oscillator, Phys. Rev. D. 20, 384 (1979), doi:10.1103/PhysRevD.20.384. Davies69Davies,E. B., Quantum stochastic processes, Commun. Math. Phys., 15, 277 (1969). Davies76 E.B. Davies, Quantum theory of open systems, Academic Press London (1976). Toschek78W. Neuhauser; M. Hohenstatt; H. Dehmelt; P. Toschek, Optical Sideband Cooling of Visible Atom Cloud Confined in Parabolic Well. Physical Review Letters. 41 (4): 233–236 (1978). doi:10.1103/PhysRevLett.41.233. Dehmelt86W. Nagourney, J. Sandberg, H. Dehmelt, Shelved optical electron amplifier: observation of quantum jumps, Phys. Rev. Lett. 56, 2797 (1986) . Toschek86Th. Sauter; W. Neuhauser; R. Blatt; P. E. Toschek, Observation of Quantum Jumps, Physical Review Letters. 57, 1696 (1986), doi:10.1103/PhysRevLett.57.1696 Wineland86 J.C. Bergquist, R.G. Hulet, W.M. Itano, D.J. Wineland, Observation of quantum jumps in a single atom, Phys. Rev. Lett. 57, 1699 (1986) . Caves86 C. Caves, Quantum mechanics of measurements distributed in time. A path-integral formulation, Phys. Rev. D 33, 1643 (1986) Caves87C.M.Caves and G.J.Milburn, Quantum-mechanical model for continuous position measurements, Phys.Rev. A 36, 5543 (1987). Barchielli82 A.Barchielli, L.Lanz and G.M.Prosperi, A model for the macroscopic description and continual observations in quantum mechanics, Nuovo Cim. 72B, 79 (1982). Barchielli84 A.Barchielli, L.Lanz and G.M.Prosperi, Foundations of Quantum Mechanics, edited by S. Kamefuchi et al. (Physical Society of Japan, Tokyo, 1984), p. 165. Diosi88aL. Diósi, Continuous quantum measurement and itô formalism, Physics Letters A 129 419 (1988). Diosi88b L. Diósi, Localized solution of a simple nonlinear quantum Langevin equation, Phys. Lett. A132, 233 (1988). Gisin84 N.Gisin, Quantum measurements and stochastic processes, Phys.Rev.Lett. 52, 1657 (1984). Jacobs06 K. Jacobs and D.A. Steck, A straightforward introduction to continuous quantum measurement, Contemporary Physics, 47:5, 279-303, (2006), DOI: 10.1080/00107510601101934. Gorini76 V. Gorini, A. Kossakowski and E. C. G. Sudarshan, Completely positive dynamical semigroups of n-level systems, J. Math. Phys. 17(5), 821 (1976), doi:https://doi.org/10.1063/1.522979 Lindblad76 G. Lindblad, On the generators of quantum dynamical semigroups, Commun. Math. Phys. 48(2), 119 (1976), doi:https://doi.org/10.1007/BF01608499. Gisin92 N. 
Gisin and I. C. Percival, The quantum-state diffusion model applied to open systems, J. Phys. A: Math. Gen. 25, 5677 (1992). Carmichael93 H.J. Carmichael, An Open Systems Approach to Quantum Optics (Berlin: Springer-Verlag) (1993). Zoller95P. Zoller, C.W. Gardiner, Quantum noise in quantum optics: the stochastic Schrödinger equation. In S. Reynaud, E. Giacobino, and J. Zinn-Justin eds., Fluctuations quantiques, (Les Houches 1995) (North-Holland, Amsterdam, 1997) pp. 79–136. Ueda17Yuto Ashida and Masahito Ueda, Multiparticle quantum dynamics under real-time observation, Phys. Rev. A 95, 022124 (2017). Dalibard92 J. Dalibard, Y. Castin, and K. Mølmer, Wave-function approach to dissipative processes in quantum optics, Phys. Rev. Lett. 68, 580 (1992). Molmer93K. Mølmer, Y. Castin, and J. Dalibard, Monte Carlo wave-function method in quantum optics, J. Opt. Soc. Am. B 10, 524-538 (1993). Gampel23 F. Gampel and M. Gajda, Continuous simultaneous measurement of position and momentum of a particle, Phys. Rev. A 107, 012420 (2023). Arthurs65 E. Arthurs and J. L. Kelly, JR, On the Simultaneous Measurement of a Pair of Conjugate Observables, Bell System Technical Journal, 44: 4. April 1965 pp 725-729. Scott01 A. J. Scott and G. J. Milburn, Quantum nonlinear dynamics of continuously measured systems, Phys. Rev.A63, 042101 (2001). Bialynicki85 I. Bialynicki-Birula, Exact solutions of nonrelativistic classical and quantum field theory with harmonic forces. Lett. Math. Phys. 10, 189–194 (1985). Original quote: As to the usefulness of my results, I have no opinion at all. Perhaps someone else could see whether they are good for anything.
http://arxiv.org/abs/2306.11843v1
20230620185121
Retrieval-Based Transformer for Table Augmentation
[ "Michael Glass", "Xueqing Wu", "Ankita Rajaram Naik", "Gaetano Rossiello", "Alfio Gliozzo" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.DB", "cs.IR" ]
Data preparation, also called data wrangling, is considered one of the most expensive and time-consuming steps when performing analytics or building machine learning models. Preparing data typically involves collecting and merging data from complex heterogeneous, and often large-scale data sources, such as data lakes. In this paper, we introduce a novel approach toward automatic data wrangling in an attempt to alleviate the effort of end-users, e.g. data analysts, in structuring dynamic views from data lakes in the form of tabular data. We aim to address table augmentation tasks, including row/column population and data imputation. Given a corpus of tables, we propose a retrieval augmented self-trained transformer model. Our self-learning strategy consists in randomly ablating tables from the corpus and training the retrieval-based model to reconstruct the original values or headers given the partial tables as input. We adopt this strategy to first train the dense neural retrieval model encoding table-parts to vectors, and then the end-to-end model trained to perform table augmentation tasks. We test on EntiTables, the standard benchmark for table augmentation, as well as introduce a new benchmark to advance further research: WebTables. Our model consistently and substantially outperforms both supervised statistical methods and the current state-of-the-art transformer-based models. § INTRODUCTION The way organizations store and manage data is rapidly evolving from using strict transactional databases to data lakes that typically consist of large collections of heterogeneous data formats, such as tabular data, spreadsheets, and NoSQL databases. The absence of a unified schema in data lakes does not allow the usage of declarative query languages, e.g. SQL, making the process of data preparation[Also referred to as data wrangling or data munging.] dramatically expensive <cit.>.
Data preparation involves several phases, such as data discovery, structuring, cleansing, enrichment and validation, with the purpose of producing views commonly organized in a tabular format used to create reports <cit.> or to gather feature sets to build machine learning models <cit.>. The schemaless nature of data lakes makes data discovery and structuring even more challenging since the tasks of joinability and unionability among tables become non-deterministic <cit.>. In this work, we propose a novel end-to-end solution based on a retrieval augmented transformer architecture with the aim to support end-users, such as data analysts, in the process of constructing dynamic views from data lakes. To this end, we address three table augmentation tasks <cit.>: automatic row and column population and cell filling (or data imputation). Figure <ref> illustrates the three core tasks in table augmentation. All tasks proceed from a query or seed table. In the case of self-supervised training, this seed table is formed by ablating rows, columns or cell values from an existing table in the data lake. The task of column header population, also simply called column population, is to extend the table with additional possible column names or headers. This is a way of suggesting additional data that could be joined into this table. In the task of cell filling there is a specific unknown cell, for which the model predicts a specific value. The task of row population is only populating the key column for a row. This is the column that contains the primary entity that the remainder of the row contains data for, sometimes referred to as a row header. Typically this is the first column in a table. Approaches to table augmentation can be purely parametric <cit.>, in which case the data lake is used to train the parameters of the model, but not used during inference. In this setting, the table augmentation model must draw the possible augmentations for rows, columns and cells from its trained parameters. Alternatively, with retrieval-based models <cit.>, the data lake can also be used at inference to provide evidence for proposed augmentations. This has two key advantages: 1) the model need not memorize the data lake – or even a significant fraction of it, and 2) the model can provide justification for its predicted augmentations in the form of a provenance table or tables. The key contributions of this paper are: (1) We introduce the first end-to-end, retrieval-based model for table augmentation. Our Retrieval Augmented Table Augmentation (RATA) model uses a biencoder retrieval model for neural indexing and searching tables from data lake, and a reader transformer to identify augmentations from retrieved tables. (2) Our model establishes a new state-of-the-art across all three tasks in table augmentation, while also providing additional value with its provenance information. (3) We create and release a new dataset for table augmentation, expanding the scope of evaluation beyond Wikipedia. This dataset, based on <cit.>, is also larger and more diverse than the standard Wikipedia-based dataset <cit.>. § RELATED WORK Table augmentation can be divided into three sub-tasks: row population, column population, and cell filling. For row and column population, <cit.> identifies and ranks candidate values from both the table corpus and knowledge base. Table2Vec <cit.> trains header and entity embeddings from a table corpus in a skip-gram manner and uses the embeddings for the task. 
Although TaBERT <cit.> was developed as a foundational model primarily for question answering, its embeddings have also been applied for row and column population. Recent work formulates the task as multi-label classification and fine-tunes large-scale pre-trained models such as TABBIE <cit.> and TURL <cit.>. TABBIE consists of three transformers for converting cells, columns and rows to vector representations. A corrupt cell detection task is the pretraining task used to learn these embeddings on the table corpus. To fine-tune a trained TABBIE model for the column header population task, a concatenated [CLSCOL] embedding of the columns is passed through a single linear and softmax layer and trained with a multi-label classification objective. Similarly, for the row population task a multi-class classification is carried out on the first column's [CLSCOL] representation. For cell filling, InfoGather <cit.> retrieves tables from the table corpus and selects values from retrieved tables. <cit.> extends the system to retrieve from both the table corpus and knowledge base. Their system that uses only the table corpus as the source is called TMatch, which we compare to in Section <ref>. <cit.> combines predictions both from table retrieval and from a machine learning-based value imputation system. <cit.> directly applies pre-trained TURL model to the task since cell filling is similar with its pre-training objective. Cell filling is also related to the task of value imputation, i.e., to provide an assumed value when the actual value is unknown, usually using machine learning methods <cit.>. In addition to augmenting individual entities, column headers or cells, some other work aims to join tables over entire rows or columns with retrieved tables <cit.>. Retrieval-augmented models have been successfully applied to many tasks. For open-domain question answering (ODQA), DPR learns dense representation to retrieve evidence and trains a separate reader to select answer from retrieved evidence <cit.>. RAG uses a generator to generate outputs conditioned on retrieved evidence and jointly trains DPR with a generator on the downstream task <cit.>. RAG is shown to achieve good performance on knowledge-intensive NLP tasks such as ODQA, fact verification, slot filling, etc <cit.>. further introduces a reranker to boost performance <cit.>. Retrieval-augmented models are also shown to be effective on zero-shot slot filling <cit.>, and multilingual keyphrase generation <cit.>. Similar models have also been applied to table-related tasks such as open-domain table question answering <cit.>. In our work, we apply the architecture to table augmentation. § APPROACH While the row, column, and cell predictions of purely parametric table augmentation methods may be useful on their own, they can be much more effective for a human-in-the-loop use case if they are supported by provenance. A user of a data preparation application may be unwilling to simply accept the prediction of a model, but when paired with evidence from the data lake, that prediction can be better assessed. Furthermore, the retrieval model itself may be useful for exploration and general search in a data lake. In this view, table augmentation can be seen as self-supervised pretraining for table retrieval. Fortunately, there is now considerable work on retrieval augmented transformer models <cit.>. These models augment the parametric knowledge of the transformer, with non-parametric knowledge in the form of an indexed corpus. 
To do so, they use a neural retrieval model based on DPR (Dense Passage Retrieval)  <cit.> that is trained end-to-end to assist in generation. We build on this line of research to introduce a general model for all table augmentation tasks: row population, column header population and cell filling. Our model, Retrieval Augmented Table Augmentation (RATA), comprises of an index of tables, a retrieval component, and a reader or selection component. The table index is built from the tables in the training set, which are first decomposed into table-parts, then transformed into sequences for use with standard retrieval approaches. The retrieval component is a biencoder architecture similar to DPR <cit.>, but trained without ground truth on correct provenance. We call this Dense Table Retrieval or DTR. The reader component is an extractive approach. An extractive rather than generative approach ensures that the model's predictions are always grounded in actual data, rather than speculative guesses. The extractive approach is also a more natural fit for row and column population tasks, where there is no required order to the answers. Finally, the extractive approach permits an initial training phase for the retrieval component where the answer-bearing tables are considered as a bag of positives. Figure <ref> illustrates the tasks of table augmentation by example. Formally, the input I is a table with r rows and c columns comprising a caption 𝒞, headers 𝐇, and matrix of cell values, 𝐕. One of the columns, usually the first, is indicated as the key column key. I = ⟨𝒞, 𝐇, 𝐕, key ⟩, 1 ≤ key ≤ c 𝐇 = [h_1, h_2, ..., h_c] 𝐕 = [ v_1,1, v_1,2, ..., v_1,c; ...; v_r,1, v_r,2, ..., v_r,c; ] The input table is ablated in a task specific way to produce a query table and gold answers, ⟨ Q, 𝐆⟩, described as follows: Q_rp = ⟨𝒞, 𝐇, 𝐕_ 1..𝐧_𝐬𝐞𝐞𝐝, key ⟩ 𝐆_rp = {𝐕_i,key : i > n_seed} Q_cp = ⟨𝒞, 𝐇_1..𝐧_𝐬𝐞𝐞𝐝, 𝐕_.., 1..𝐧_𝐬𝐞𝐞𝐝, key ⟩ 𝐆_cp = {𝐇_i : i > n_seed} Q_cf = ⟨𝒞, 𝐇, 𝐕 ∖ 𝐯_𝐢,𝐣, key ⟩ 𝐆_cf = { v_i,j} where rp, cp and cf refer to the row population, column header population and cell filling tasks, respectively. §.§ End-to-End Model Figure <ref> shows how tables in a data lake are first indexed to provide a non-parametric knowledge store. Each table is first split into chunks of up to three rows plus the header, which we refer to as table-parts. We form sequence representations of these table-parts following work in other transformer-based approaches to tables <cit.>. The table-part sequence representations (S^t) are formed from the row sequence representations (S^r_i) and the table caption: S^r_i = ⊕_j=1^c h_j ⊕`:'⊕ v_i,j⊕`*' S^t = 𝒞⊕[SEP]⊕⊕_i=start^end S^r_i⊕`|' Here ⊕ indicates concatenation and the strings `:', `*', and `|' delimit the header, cell value contents, and each row respectively. Any distinctive tokens can work as delimiters since the transformer will learn an appropriate embedding representation. These sequences are then projected to vectors using the context encoder by taking the [CLS]. We index the dense representations for all table-parts in the data lake using FAISS <cit.> with Hierarchical Navigable Small World <cit.>. Figure <ref> shows the architecture of our approach, Retrieval Augmented Table Augmentation (RATA). The input query is encoded to a vector for retrieving related table-parts from the indexed data lake. Similar to table-part representation, we form sequence representation for the query, use a query encoder to encode it, and take the [CLS] vector as query representation. 
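To illustrate the indexing pipeline just described, the sketch below linearizes table-parts in the spirit of S^t, encodes them, and searches a dense index. It is a sketch only: the encoder is a random placeholder standing in for the trained DTR encoders, the caption and cell values are made up, and a flat inner-product index is used instead of the HNSW index mentioned above.

```python
# Illustrative sketch of table-part linearization and dense indexing (not the
# released RATA code). `encode` is a placeholder for the DTR [CLS] encoders.
from typing import List
import numpy as np
import faiss

def linearize_row(headers: List[str], row: List[str]) -> str:
    # h_1: v_1 * h_2: v_2 * ...  mirroring the row representation S^r_i
    return " ".join(f"{h}: {v} *" for h, v in zip(headers, row))

def linearize_table_part(caption: str, headers: List[str], rows: List[List[str]]) -> str:
    # caption [SEP] row_1 | row_2 | ...  mirroring S^t
    body = " ".join(linearize_row(headers, r) + " |" for r in rows)
    return f"{caption} [SEP] {body}"

def encode(texts: List[str]) -> np.ndarray:
    # Placeholder for BERT_ce / BERT_qe [CLS] embeddings (assumption: random vectors).
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 768)).astype("float32")

parts = [linearize_table_part("Olympic medalists", ["Name", "Country"],
                              [["A. Athlete", "NOR"], ["B. Runner", "KEN"]])]
index = faiss.IndexFlatIP(768)           # inner product; the paper uses FAISS with HNSW
index.add(encode(parts))
scores, ids = index.search(encode(["query-table linearization goes here"]), 1)
```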
Both the context encoder and the query encoder use the BERT_BASE architecture. We use the unnormalized dot product to score a pair of query q and table-part d. The top-k table-parts with the highest scores are retrieved. score(q, d) = BERT_qe(q)_[CLS]·BERT_ce(d)_[CLS] After the top-k most relevant table-parts are retrieved, the reader component selects the most likely augmentations for the query table. In the case of column population, the candidate augmentations are all headers from retrieved table-parts; for cell filling they are all cells; and for row population only those cell values that are entities. The sequence representation of the query table is paired with each table-part representation, using the standard [CLS] and [SEP] tokens to demarcate the bounds of each sequence. In the table-part representation, the candidates are marked by special begin and end tokens: `⟨' and `⟩'. This combined sequence is then the input to a transformer encoder (initialized from BERT_LARGE <cit.>). For each pair of candidate answer marks (`⟨' and `⟩'), the final token embeddings are concatenated to produce a single vector. Then a linear layer is applied to predict the likelihood that the candidate is a correct answer to the query. α = [i : t_i = “⟨" ] ω = [i : t_i = “⟩" ] ans_n = t_α_n + 1, t_α_n + 2, ..., t_ω_n - 1 C = [ E_α_0⊕ E_ω_0; E_α_1⊕ E_ω_1; E_α_2⊕ E_ω_2; ... ] ρ = softmax(C ·𝐰_𝐜𝐚𝐧𝐝𝐢𝐝𝐚𝐭𝐞) Formally, the input is a sequence of tokens T = [t_0, t_1, ...]. The transformer encoder produces a sequence of embeddings BERT_reader(T) = E = [e_0, e_1, ...]. The candidate representation vectors, C, are then multiplied by the learned parameter vector 𝐰_𝐜𝐚𝐧𝐝𝐢𝐝𝐚𝐭𝐞 and a softmax is applied to produce the reader scores, ρ, for the retrieved table-part. Note that the likelihood for a given answer occurrence ans_n is ρ_n. The candidate likelihood vectors for each of the top-k retrieved table-parts, ρ^1, ρ^2, ..., ρ^k, are then combined with the softmax-normalized retrieval scores, 𝐫 = [r_1, r_2, ..., r_k], to provide a probability distribution over all candidates in all retrieved table-parts. Since these scores are for each occurrence of a candidate string, we aggregate over each distinct normalized candidate string by summing the likelihoods of all its occurrences. This produces the final score, s(a), for each answer string a. The loss is the negative log-likelihood of all gold answer strings, 𝐆. Because of this formulation, during training any instance with no correct candidates in any retrieved table-part is skipped. 𝐩^j = softmax(𝐫)_j ·ρ^j s(a) = ∑_j=1^k ∑_n : ans^j_n = a𝐩^j_n loss = -∑_a ∈𝐆 log ( s(a) ) We use answer normalization to determine if a candidate matches a gold answer, as described in Appendix <ref>. For row population and cell filling in EntiTables, the cell values are already linked to entities, so normalization is not necessary. For RATA training, we iterate through the tables in the training set. To construct an input query from a table, we ablate either all rows after the first n_seed (row population), or all columns after the first n_seed (column population), or a particular cell (cell filling). We ensure that table-parts from the query table itself are not retrieved by filtering the retrieved results. Like most previous approaches to end-to-end training of neural retrieval, we train only the query encoder in the end-to-end training phase. This avoids expensive re-indexing of the entire data lake, either each time the context encoder is updated or periodically as in ANCE <cit.>.
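A compact sketch of the reader scoring and answer aggregation defined by the equations above is given below. The token embeddings, marker positions and helper names are invented for the illustration and are not taken from the released code.

```python
# Illustrative reader scoring / aggregation (not the released code).
import torch

def reader_scores(E, begin_idx, end_idx, w_candidate):
    """rho for one retrieved table-part: E is (seq_len, dim) token embeddings,
    begin_idx/end_idx are positions of the candidate begin/end marks."""
    C = torch.cat([E[begin_idx], E[end_idx]], dim=-1)   # (n_candidates, 2*dim)
    return torch.softmax(C @ w_candidate, dim=0)        # candidate likelihoods rho

def aggregate_loss(retrieval_scores, per_part_rhos, per_part_answers, gold):
    """Combine top-k parts: p^j = softmax(r)_j * rho^j, sum occurrences per answer
    string, and return the negative log-likelihood of the gold answers."""
    r = torch.softmax(retrieval_scores, dim=0)
    totals = {}
    for r_j, rho_j, answers_j in zip(r, per_part_rhos, per_part_answers):
        for p_n, ans in zip(r_j * rho_j, answers_j):
            totals[ans] = totals.get(ans, 0.0) + p_n    # s(a): sum over occurrences
    hits = [a for a in gold if a in totals]             # instances with no hit are skipped
    return -sum(torch.log(totals[a]) for a in hits) if hits else None
```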
§.§ Retrieval Training While it is possible in theory to train neural retrieval entirely through its impact on the end-to-end table augmentation tasks, a good initialization is important for learning. Without an initial effective retrieval model, there is no answer-bearing evidence to train the reader model, and therefore a high fraction of training examples will be skipped <cit.>. One possible approach is to use a pretraining task for retrieval, such as the Inverse Cloze Task <cit.> or a retrieval-based masked language model <cit.>. In the table augmentation task, there is the option of training with answer-bearing evidence as positives. Since the reader is purely extractive, any evidence that does not contain a correct augmentation string is necessarily a negative. However, not every table-part that contains an answer is a positive. We use a multiple instance learning setup for the positives: we train under the assumption that at least one of the table-parts containing a correct answer is a positive. To gather the training data for retrieval, we build an initial keyword index using Anserini[<https://github.com/castorini/anserini>]. We use BM25 <cit.> to retrieve potentially relevant table-parts for each table query. From each training table we construct a query for row population, column population or cell filling. Since these queries are constructed from ablated tables, we know a (potentially incomplete) set of correct augmentations or answers. Note that there may be other equally correct augmentations. But since this is a self-supervised task, we consider only the headers or cell values that actually occurred in the table to be correct. Formally, the query constructed from a training table is a pair of the ablated table Q and the set of gold answers 𝐆. The set of table-parts retrieved by the initial retrieval method, for example BM25, is given as 𝐑. A retrieved table-part is in the positive set, 𝐑^+, if it contains any gold answer, otherwise it is a hard negative, 𝐑^-. 𝐑^+ = { d : d ∈𝐑∧∃ a ∈𝐆, a ∈ d } 𝐑^- = 𝐑 - 𝐑^+ Following <cit.>, we use batch negatives along with the retrieved “hard negatives”. The batch B = [⟨ q_1, 𝐑_1 ⟩, ⟨ q_2, 𝐑_2 ⟩, ..., ⟨ q_bz, 𝐑_bz⟩] is processed to produce vectors for all queries and retrieved table-parts. All query vectors are multiplied with all table-part vectors to produce scores between all pairs. A softmax is applied per-query to give the normalized scores. Finally, the loss is the negative log-likelihood for the positive scores. ℛ = ⋃_i = 1^bz𝐑_i ρ_i = softmax([score(q_i, d) : d ∈ℛ]) loss = - ∑_i = 1^bz log ( ∑_d ∈𝐑^+_iρ_i,d) Note that since we are summing over the probability of all table-parts in the positive set, 𝐑^+, it is not necessary for all answer-bearing retrieved table-parts to be high scoring. Instead, it follows the multiple instance learning framework. All instances marked negative are negative, while at least one instance in the positive set is positive. § DATASET Prior work on table augmentation has focused on tables derived from Wikipedia <cit.>. In order to better assess the proposed methods and provide the research community with a new benchmark, we introduce a new dataset for table augmentation: WebTables. We construct this dataset using the tables crawled and extracted by <cit.>. We start from the English relational tables of WDC Web Table Corpus 2015.
We further filter the dataset to remove the most common types of noisy tables: calendars formatted as tables, lists of forum posts and torrent links, tables with fewer than four rows or columns, and tables that format large blocks of text. Because previous work on table augmentation focused so heavily on Wikipedia tables, we exclude from this dataset any tables crawled from any “wikipedia” domain. We also deduplicate the corpus, ensuring that there are no two tables with the same content in their cells. Following filtering and deduplication we sample 10 thousand tables each for the development and test sets and one million tables for training. However, in our experiments we use only 300 thousand training examples to limit the computational cost. To parallel the setting of EntiTables we use the “key column” identified by <cit.> as the target column for row population, and we consider entities to be those strings that occur at least three times in the key column for any table in the train set. § EXPERIMENTS We experiment on two datasets of tables across three tasks. Table <ref> gives statistics on these datasets. EntiTables <cit.> contains 1.6M tables collected from Wikipedia where entity mentions are normalized to their names in DBpedia. For row and column population, we use the development and test sets released by <cit.>, each containing 1,000 randomly sampled queries. For cell filling, we use the test set released by <cit.>. The test set contains 1,000 queries uniformly sampled from four main column data types: entity, quantity, string, and datetime. Though <cit.> use human annotations as gold labels, we notice that the human annotations are of low quality, so we use the original values in the table cells as gold labels. WebTables is based on <cit.> – 154M relational tables extracted from HTML tables in Common Crawl. We process the corpus as described in Section <ref>. For column population we use the original development and test sets of 10,000 tables each, while for row population we necessarily exclude any tables without entities in the key column after the first n_seed rows. For cell filling, we use heuristic rules to classify cell values into three types: quantity, string and datetime. Then, we sample 3,000 queries uniformly from the three types as the test set and sample another 3,000 queries as the development set. We compare our method with two deep learning-based baselines, TABBIE <cit.> and BART <cit.>. Neither TABBIE nor BART involves a retrieval component. TABBIE, described in Section <ref>, uses three transformers: one for cell values, one for rows, and one for columns. It produces vector embeddings for each cell and each row and column of a table. We follow <cit.> for row and column population and base our experiments on the partially released code and pretrained model[https://github.com/SFIG611/tabbie]. To apply TABBIE to cell filling, we formulate it as classification on the concatenation of the row and column embedding vectors, similar to row and column population. The classification vocabulary is collected from the training corpus: all cell values that occur at least ten times. We also report the published results for TABBIE on the EntiTables dataset, although we were unable to reproduce these results for row population. BART is a sequence-to-sequence model that takes the linearized table as the source text and generates the row entities, column headers, or cell value as the target text. We use a beam search in decoding (beam size = 35) to produce a ranked list of predictions.
We use the FAIRSEQ toolkit <cit.> for these experiments. For RAG we use the implementation in Hugging Face transformers <cit.>. For both BART and RAG, the sequence representation of the query tables is the same as in RATA. On the EntiTables dataset, we also compare against probabilistic methods that first retrieve tables from the table corpus and next select values for table augmentation. We compare against the published results of <cit.> for row and column population, and against TMatch <cit.> for cell filling. For evaluation, we report Mean Reciprocal Rank (MRR) and Normalized Discounted Cumulative Gain over the top ten outputs (NDCG@10) for the final prediction performance of row population, column population, and cell filling. To evaluate the performance of DTR retrieval, we also report answer-bearing MRR, where a retrieved table-part is considered correct if it contains one of the correct answers. To determine the significance of these results we use a 95% confidence interval on the t-distribution. We also applied a sampling permutation test, but this did not change any conclusions regarding significance. § RESULTS Table <ref> contains our results for the row population task. Our model, RATA, is able to greatly outperform all other methods on both datasets. Using the non-parametric knowledge of the table corpus is very advantageous for the large and specific vocabulary of entities in key columns. Table <ref> contains our results for the column population task. RATA is again substantially better than the other methods, although not by as wide a margin as for the row population task. The BART baseline is the best-performing alternative, with an MRR lower by 6% to 15%. Results on the cell filling task are in Table <ref>. Our method outperforms all baselines on both datasets. TABBIE performs the worst due to its large classification vocabulary and the resulting out-of-vocabulary issues. On the EntiTables dataset, retrieval-based methods, including TMatch and RATA, significantly outperform non-retrieval methods, including TABBIE and BART. Figure <ref> shows an example output from RATA. On WebTables, however, BART outperforms RATA. We notice that BART can achieve high scores by either copying values from other rows (as in Figure <ref> and Figure <ref>), or producing values similar to those in other rows (as in Figure <ref> and Figure <ref>). As shown in the examples, this strategy is able to achieve good performance. Effect of Retrieval To analyze the effectiveness of the DTR component, we report answer-bearing MRR in Table <ref>. We notice that DTR is well trained after the initial retrieval training phase and achieves higher answer-bearing MRR than BM25. End-to-end training provides meaningful supervision for retrieval and further improves MRR on most tasks. By comparing Tables <ref>, <ref>, and <ref> with Table <ref>, we notice that the final task MRR is close to the answer-bearing MRR. When the correct answer is present in the retrieved table, the reader can select it with high accuracy. This indicates that the bottleneck of our system is retrieval. Number of Retrieved Table-Parts RATA was trained with 5 retrieved table-parts for all tasks. This relatively small retrieval size provides good efficiency during training, since training time scales roughly linearly with the number of query/table-part pairs that must be processed by the reader transformer component. During inference, however, we are able to adjust the number of retrieved table-parts more freely.
Figure <ref> shows that table augmentation performance monotonically increases as more evidence is retrieved for row population and cell filling, but column population performance does not improve past 5. § CONCLUSION Our retrieval-based transformer architecture for table augmentation, RATA, is able to greatly advance the state of the art in three table augmentation tasks: row population, column population, and cell filling. The non-parametric knowledge in the table corpus is able to substantially enhance the table augmentation capabilities. Furthermore, by training an effective table-to-table retrieval model we are able to provide provenance for the system's proposed augmentations. We also introduce a new benchmark dataset for table augmentation, WebTables, on which we evaluate our model and two recent transformer baselines. Our code for RATA and the newly introduced dataset are available as open source[<https://github.com/IBM/retrieval-table-augmentation>]. § LIMITATIONS A limitation of RATA is that it always assumes the answer is included in the retrieval corpus, which is not always true. When the corpus does not contain the correct answer, the desired behavior is to inform the user that the answer cannot be obtained, but RATA will provide a poorly supported answer. This also encourages RATA to learn spurious correlations when the retrieved tables coincidentally contain the same value but do not really support the answer. This problem is especially serious when the answer is very generic (for example, numbers like “0”) and coincidental matches are common. This is related to the answerable question issue <cit.> or evidentiality issue <cit.> for question answering. For cell filling on WebTables, BART often outperforms RATA by either copying values from other rows of the query table or producing values similar to those in other rows. However, as shown in Figure <ref>, RATA's retrieval is often not helpful. Usually, the information required to fill the query table is not repeated in the corpus, so the retrieved table cannot support the query. As a result, RATA simply retrieves some similar table and selects similar values from it. § APPENDIX § MODEL HYPERPARAMETERS Our model is fine-tuned from two BERT_BASE models for the retriever and one BERT_LARGE model for the reader. This totals 2 · 110M + 340M = 560M parameters. Table <ref> shows the hyperparameters used in our experiments. The only hyperparameter that varied across the tasks and datasets was the batch size. § DATASET AND TASK SPECIFICS We use two types of answer normalization. For EntiTables column population we implement case-insensitive matching by normalizing both predictions and gold answers to lowercase. For all row and column population in WebTables we use a normalization that removes Unicode accents and non-ASCII characters and then lowercases. Cell filling does not use normalization. To reproduce the results of TABBIE on EntiTables we carry out the following steps.
Column Header Population Based on the above-mentioned normalization, we create a vocabulary of 182,909 column headers for the EntiTables dataset, which is comparable to the 127,656 possible header labels mentioned in the paper <cit.>. Each of the possible headers occurs at least twice in the training dataset. Row Population In addition to the above-mentioned normalization, we use entities that occur at least 7 times in the training dataset, which leads to 308,841 possible entities. This is approximately equal to the 300,000 entities mentioned in <cit.>. Cell Filling In addition to the above-mentioned normalization, we use cell values that occur at least 10 times in the training dataset. § CELL FILLING BART EXAMPLES Additional BART cell filling output examples on the WebTables dataset are in Figure <ref>. § COMPUTE INFRASTRUCTURE All row and column population experiments were done on a single P100 GPU. This gave training times of 24 to 48 hours. All cell filling experiments were done on a single A100 GPU, with training times of 24 hours.
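As an illustration of the answer normalization referred to throughout this appendix (accent stripping, removal of non-ASCII characters, lowercasing), here is a minimal sketch; the function name and the whitespace handling are our assumptions, not the released code.

```python
import unicodedata

def normalize_answer(text: str) -> str:
    # Decompose accented characters, drop combining marks and other non-ASCII
    # characters, lowercase, and collapse whitespace.
    decomposed = unicodedata.normalize("NFKD", text)
    ascii_only = decomposed.encode("ascii", "ignore").decode("ascii")
    return " ".join(ascii_only.lower().split())

print(normalize_answer("Björk  Guðmundsdóttir"))   # -> "bjork gumundsdottir"
```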
http://arxiv.org/abs/2306.09083v1
20230615122635
Numerical Simulation of Large-Scale Nonlinear Open Quantum Mechanics
[ "Marc Roda-Llordes", "Davide Candoli", "Piotr T. Grochowski", "Andreu Riera-Campeny", "Thomas Agrenius", "Juan José García-Ripoll", "Carlos Gonzalez-Ballestero", "Oriol Romero-Isart" ]
quant-ph
[ "quant-ph" ]
We introduce a numerical method to simulate nonlinear open quantum dynamics of a particle in situations where its state undergoes significant expansion in phase space while generating small quantum features at the phase-space Planck scale. Our approach involves simulating the Wigner function in a time-dependent frame that leverages information from the classical trajectory to efficiently represent the quantum state in phase space. To demonstrate the capabilities of our method, we examine the open quantum dynamics of a particle evolving in a one-dimensional weak quartic potential after initially being ground-state cooled in a tight harmonic potential. This numerical approach is particularly relevant to ongoing efforts to design, optimize, and understand experiments targeting the preparation of macroscopic quantum superposition states of massive particles through nonlinear quantum dynamics. § INTRODUCTION The field of levitodynamics <cit.>, which focuses on levitation and control of microobjects in vacuum, allows us to study the center-of-mass motional dynamics of a particle in a highly isolated environment. Since the mechanical potential in which the particle moves can be controlled both dynamically <cit.> and statically <cit.>, levitated particles offer a unique platform to study nonlinear conservative mechanics. Furthermore, the center-of-mass thermal energy can be removed, either via active or passive feedback, to the ultimate limit where only quantum fluctuations are present <cit.>. Center-of-mass ground-state cooling and the control of the mechanical potential open up the possibility to study nonlinear quantum mechanics with a microsolid containing billions of atoms <cit.>. In order to design, optimize, and understand experimentally feasible protocols involving nonlinear quantum mechanics, it is crucial to have a reliable numerical tool that allows us to efficiently simulate the dynamics while accounting for sources of noise and decoherence. In this paper we provide such a tool in the particularly relevant and challenging scenario of multiscale dynamics induced by center-of-mass cooled massive particles evolving in wide nonharmonic potentials. More specifically, the center-of-mass motion of cooled microparticles exhibits minute fluctuations (i.e., zero-point motion), smaller than the size of a single atom. Experimentally feasible nonharmonic potentials are wider than zero-point motion length scales, that is, the distance between classical turning points is orders of magnitude larger than the zero-point length scale. Hence, the dynamics triggered in those nonharmonic potentials will generate large phase-space expansions. This expansive dynamics will eventually activate the nonharmonicities in the potential, such as at turning points, which in the case of coherent dynamics can create phase-space structures at or even below the Planck scale <cit.>. This multiscale phase-space dynamics of the particle's center-of-mass state will be studied through the time evolution of the corresponding Wigner function.
The use of the Wigner function is advantageous as it enables us to incorporate sources of noise and decoherence (i.e., open dynamics) while also clearly identifying quantum features (e.g., through negative values in the Wigner function). To effectively describe the scenario of interest, which involves large phase-space expansions and small phase-space features and is thus different from previous studies <cit.>, an efficient numerical representation of this specific dynamics is necessary. We propose using a time-dependent phase-space grid where the grid points move according to the classical trajectory dictated by the nonharmonic potential. This procedure places the grid points where they are most relevant, thereby improving computational efficiency. We call this numerical tool Q-Xpanse, and it has proven invaluable in the design, optimization, and understanding of a recent proposal for generating macroscopic quantum superpositions of a nanoparticle through the nonlinear quantum mechanics induced in a wide double-well potential <cit.>. This paper is structured as follows: In Section <ref>, we present the theoretical framework for our method, including the time-dependent change of variables leading to the time-dependent phase-space grid. In Section <ref> and in a dedicated Appendix section, we detail our numerical implementation using finite differences and classical trajectory propagation. We then examine the dynamics in weak quartic potentials as an example of large expansions with Planck-scale quantum features in Section <ref>. Finally, we conclude with our final remarks and outlook in Section <ref>. § WIGNER FUNCTION DYNAMICS IN THE LIOUVILLE FRAME We consider a particle with mass m evolving in a one-dimensional potential U(x) in the presence of noise. We describe the state of the particle through its Wigner function W(x,p,t). The equation of motion for the Wigner function is given by ∂ W(x,p,t)/∂ t = (ℒ + 𝒬 + 𝒩) W(x,p,t). The first term, ℒ, generates conservative (i.e., Liouville) classical dynamics and is given by ℒ = -(p/m) ∂/∂ x + [∂ U(x)/∂ x] ∂/∂ p. The second term, 𝒬, generates genuine quantum dynamics and is given by 𝒬 = ∑_n=1^∞ [(-1)^n/(2n+1)!] (ħ^2n/4^n) [∂^2n+1 U(x)/∂ x^2n+1] ∂^2n+1/∂ p^2n+1. Note that 𝒬 is zero for quadratic potentials (i.e., potentials with only linear and harmonic terms). The third term, 𝒩, models the presence of noise and generates dissipative dynamics. For levitated nanoparticles, it is convenient to consider <cit.> 𝒩 = γ (1 + p ∂/∂ p) + [ħ^2 Γ/(2 x_zpf^2)] ∂^2/∂ p^2, where Γ = γ k_B T/(ħΩ) + Γ_1, k_B is the Boltzmann constant, and x_zpf = [ħ/(2 m Ω)]^1/2 is a convenient length unit associated to the zero-point motion fluctuations of the quantum ground state of a harmonic potential with frequency Ω. This source of noise models a linear coupling to a thermal bath <cit.> of temperature T, with damping rate γ, and the presence of a stochastic white-force term with displacement noise rate given by Γ_1. The results of this paper are based on using the Wigner function in a time-dependent frame that we call the Liouville frame, which is defined as 𝒲(x,p,t) ≡ e^{-ℒ t} W(x,p,t). Since ℒ is the generator of classical dynamics, one can use the Liouville theorem to write 𝒲(x,p,t) = W(X(x,p,t), P(x,p,t), t), where X(x,p,t) and P(x,p,t) are the solutions to the classical equations of motion for point particles moving in the potential U(x) in the absence of noise with initial position and momentum given by x and p, respectively. Namely, they are solutions of ∂ X(x,p,t)/∂ t = P(x,p,t)/m, ∂ P(x,p,t)/∂ t = - ∂ U(x')/∂ x' |_{x'=X(x,p,t)}, with X(x,p,0) = x and P(x,p,0) = p. In the Liouville frame, the Wigner function evolves as ∂𝒲(x,p,t)/∂ t = e^{-ℒ t} (𝒬 + 𝒩) e^{ℒ t} 𝒲(x,p,t).
The Wigner function in the Liouville frame evolves only due to the presence of quantum effects and/or noise, that is ∂(,,t)/∂ t =0 if = =0. The original Wigner function W(,,t) can be obtained from the Wigner function in the Liouville frame (,,t) by W(,,t) = e^ t(,,t) = ((,,-t),(,,-t),t), that is, by using backward propagation in time of the classical trajectories. In the following section, we will show that numerically solving eq:Wigner_FPE_flowing on a fixed regular phase-space grid is highly efficient in situations involving large expansions because it corresponds to solving eq:Wigner_FPE on a time-dependent, irregular phase-space grid that places grid points where they are most crucial. This key idea is illustrated in fig:1 for the example of a particle evolving in a pure quartic potential which we will further discuss in sec:example. § NUMERICAL SIMULATION IN THE LIOUVILLE FRAME In this section we explain how to numerically solve the time evolution of the Wigner function in the Liouville frame, namely how to solve eq:Wigner_FPE_flowing. The first step is to explicitly calculate the terms in e^- t + e^ t. This allows us to obtain the explicit form of the partial derivative equation (PDE). As shown in app:num-details, one obtains that eq:Wigner_FPE_flowing reads [ (,,t)]t = ∑_n,m=0^n+m ≤ N_U_nm(,,t) ∂^n+m (,,t)/∂^n ∂^m. Here N_U is the smallest odd number such that ∂^n()/∂^n = 0 for n ≥ N_U+2, which in turn determines that eq:Wigner_FPE_flowing_FULL is a PDE of order N_U. The time-dependent scalar functions _nm(,,t) depend on the physical parameters of the problem (i.e., , (), γ, T, Γ_1), both explicitly and implicitly through the classical trajectories (,,t) and (,,t) and their up to N_U order derivatives with respect to their initial condition . Their derivation and explicit expressions for an up to quartic potential (N_U=3) are given in app:num-details. The second step is to convert the PDE in eq:Wigner_FPE_flowing_FULL into a system of linear equations using the method of finite differences. In the Liouville frame we use a uniform rectangular grid in and with separation between consecutive grid points given by h_>0 and h_>0 along each direction respectively. The grid points are given by (_i,_j) = (x_0,p_0) + (i h_,j h_), for i=0,1,…,N_-1 and j=0,1,…,N_-1. Here (_0,_0) is the bottom left point of the grid which contains N=N_× N_ points. The N values of the Wigner function in the Liouville frame (,,t) evaluated at the grid points are collected by the N-dimensional vector (t) whose components, indexed by k=0,1,…,N-1, are given by _k=iN_ +j(t) = (_i,_j,t). Using a finite difference method (see app:num-details for further details), one obtains a system of linear equations for this vector given by [ (t)]t = (t) (t), where (t) is a square N × N matrix. The equation eq:linear-fd-ode can then be solved using (t+Δ t) = exp(t) Δ t(t), which is valid for a sufficiently small Δ t (see app:num-details). This numerical method relies on developing a numerically efficient way of computing (t), which requires evaluating (,,t) at the grid points. In turn, this requires the values of (,,t) and (,,t) as well as the derivatives of (,,-t) and (,,-t) with respect to at every grid point (_i,_j) and for all instances of time considered in the finite differences approach. Since an analytical formula for the classical trajectories and their derivatives with respect to initial conditions are in general not available for nonharmonic potentials, they need to be efficiently evaluated numerically. 
We do this by solving eq:classical_ODE and similar differential equations that can be derived for the derivatives of the classical trajectories with respect to initial conditions using a symplectic method, which ensures stability over long integration times <cit.>. Another important tool we use to improve the efficiency of the method is to relate the derivatives of the forward-propagated trajectories with respect to the initial conditions with those of the backward-propagated trajectories by making use of the properties of the associated Jacobian matrices. We provide all the details of this numerical method in app:num-details, a method that we have coded using , and . Formally, solving eq:Wigner_FPE_flowing_FULL in the fixed grid given by the N phase-space points (_i,_j) defined above is equivalent to solving eq:Wigner_FPE in a time-dependent grid given by the N points (_i(t),_j(t)) ≡((_i,_j,t),(_i,_j,t)), see fig:1. In this time-dependent grid, a feature which will be important for our later discussion is the maximum phase-space grid density, namely the minimal distance between two phase-space points. In order to quantify this phase-space density let us introduce the following dimensionless Jacobian matrix for a given phase-space point, namely (,,t) ≡[ [ (,,t)] [ (,,t)]; [ (,,t)] [ (,,t)] ] with ≡ħ/(2). We then define λ^+_i,j(t) and λ^-_i,j(t) as the largest and smallest singular values of the 2×2 matrix (_i,_j,t). One can then define (t) ≡min_i,jλ^-_i,j(t), namely the minimum singular value over all the grid. In this way the dimensionless parameter 1/(t) quantifies the maximum density of the phase-space time-dependent grid. To see this explicitly consider a point in phase space, written without dimensions as 𝐫=(/,/) and a point 𝐫' = 𝐫 + ϵ (cosθ, sinθ) in its close vicinity (i.e., |ϵ| ≪ 1). After the evolution governed by the classical trajectory, the separation between these two points can be expressed, in linear order in ϵ, as |(𝐫',t) - (𝐫,t)|/ϵ≈ |(,,t) (cosθ, sinθ)^T| According to the singular value decomposition of (,,t), the smallest singular value of (,,t) minimizes the distance eq:argument_singular_values over all possible directions, namely θ. § EXAMPLE: QUARTIC POTENTIAL Let us now apply the numerical method presented in this paper to a particular example: quantum mechanics in a purely quartic potential <cit.>. This will allow us to show the applicability of the numerical method and illustrate that solving (,,t) in a constant and regular grid is equivalent to solving W(,,t) in a smart time-dependent irregular phase-space grid, see fig:1. We consider a particle of mass , whose state at t=0, namely W(,,0) = (,,0), is given by the ground state of the harmonic potential () = ^2 ^2/2, see left panel of fig:1. The position and momentum standard deviation of the initial state are given by √(^2) = = √(ħ/(2)) and √(^2)= = ħ/(2) respectively. At t>0, the particle evolves in a purely quartic potential () = (), which we parametrize as ()= 1/η^4 ħ/4/^4. We consider the case of friction-less noise (e.g., dynamics in ultra-high vacuum), namely =0 but />0 in eq:noiseL. The dimensionless parameter η characterizes the strength of the quartic potential. Since the initial kinetic energy of the state is ħΩ/4, the turning point according to classical mechanics, defined as ħΩ/4 = (), is given by /= η. Large phase-space expansions, namely states with spatial delocalization orders of magnitude larger than  <cit.>, will be thus generated for η≫ 1. 
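To illustrate how the classical trajectories that define the Liouville frame can be propagated for this example, here is a minimal sketch for the quartic potential just introduced; it uses a plain leapfrog (velocity-Verlet) step rather than the specific fourth-order symplectic scheme cited above, and ħ, m, and Ω are simply set to one in the example values — these simplifications, and all names below, are ours.

```python
import numpy as np

hbar = m = Omega = 1.0
x_zpf = np.sqrt(hbar / (2 * m * Omega))
eta = 1e3

def force(x):
    # U(x) = (hbar*Omega/4) * (x / (eta*x_zpf))**4  =>  F(x) = -dU/dx
    return -hbar * Omega * x**3 / (eta**4 * x_zpf**4)

def propagate(x, p, dt, n_steps):
    """Leapfrog integration of dx/dt = p/m, dp/dt = F(x); x and p may be arrays of grid points."""
    for _ in range(n_steps):
        p = p + 0.5 * dt * force(x)
        x = x + dt * p / m
        p = p + 0.5 * dt * force(x)
    return x, p

# Propagate a small phase-space grid of initial conditions up to t*Omega/eta = 1.
x0, p0 = np.meshgrid(np.linspace(-3, 3, 64) * x_zpf,
                     np.linspace(-3, 3, 64) * hbar / (2 * x_zpf))
xt, pt = propagate(x0, p0, dt=1e-3 * eta / Omega, n_steps=1000)
```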
Let us first analyze the evolution of the first and second phase-space moments. Due to the alignment of the quartic potential with the initial state, the first moments remain constant and equal to zero, namely (t)/ = (t)/ =0. The dynamics of the second moments is shown in fig:2(a), where we plot √(^2 (t) )/(η), √(^2 (t) )/( ), and {,}(t)/(ηħ). Using the η-scaled dimensionless timescale t /η, this plot is for η≫ 1 conveniently independent of η. The plot shows that the state experiences free dynamics during an initial time scale given by 0 < t /η≲ 0.4, where √(^2 (t) )/(η) ≈{,}(t)/(ηħ) grows linearly in time and √(^2 (t) )/( ) remains constant and equal to one. After this initial time interval, the state starts to experience the quartic potential. In particular, at t /η≈ 1.12, when {,}(t)/(ηħ) is equal to zero, the state reaches a maximum value of √(^2 (t) )/(η) of the order of 1, that is, the state is spatially delocalized to a large length scale given by η <cit.>. This expansive dynamics that generates a squeezed state is conveniently accompanied with an increase of the phase-space grid density. This can be shown in fig:2(b), where we plot the η-scaled grid distance η(t) as a function of time, showing that the phase-space density grows as a function of time and is scaled with η. Let us now study the evolution of the Wigner function. In fig:3 we show (,,t) (panel a) and W(,,t) (panel b) for η = 10^3 and Γ/ = 2 × 10^-8 at three instances of time: (i) t =0, (ii) t/η≈ 1.12 when √(^2 (t) )/(η) is the largest and the state generates an interference pattern in the momentum probability distribution, and (iii) t/η≈ 1.56 when {,}(t)/(2 ηħ) reaches its most negative value and the state exhibits an interference pattern in the position probability distribution, see fig:4(a). In fig:2(a) the instances of time (ii) and (iii) are indicated with a vertical dashed line. We emphasize that W(,,t) (panel b) is obtained by simply using eq:W_as_Wflow after having numerically obtained (,,t) (panel a) with the method presented in this paper. Comparing the x axes of panels (a) and (b) of fig:3, one can see how W(,,t) expands significantly more than (,,t). As shown in fig:1, the regular grid points used to represent (,,t) in the Liouville frame are efficiently distributed in the original frame to properly describe W(,,t). The results in fig:3(a) are obtained in a fixed grid of a phase-space length scale given by √(h_/^2+h_/^2)≈ 0.25. The length scale in the time-dependent grid, namely in fig:3(b) is reduced by a factor of (t), which as one can see in fig:2(b), reaches values below 10^-3, way below the phase-space Planck scale <cit.>. This means that to match the accuracy level of our method, using a regular grid in the original frame would need about 10^3 times more grid points. Finally, let us discuss the impact of noise by illustrating how it affects the visibility of the interference pattern in position at the time t /η≈ 1.56. In fig:4(a) we plot the probability distribution P(x) ≡ dp W(,,t) at this particular instance of time for η=10^3 and /=2 × 10^-8. We define x_f as the distance between the largest interference peak and its neighboring peak. For the parameters in fig:4, we obtain x_f/≈ 21.2. As shown in the inset of fig:4(a), the scaling of this distance with η is given by x_f/≈ 2.11 η^1/3. 
The visibility of this interference pattern, defined as (P_max-P_min)/(P_max+P_min) where P_max and P_min are the value of P(x) at the largest maximum and its neighboring minimum respectively, is a decreasing a function of Γ/ as we show in fig:4(b). As expected <cit.>, the impact of Γ in the visibility scales roughly as η^2. The study of quantum dynamics in a nonharmonic potential in the presence of noise, which we have performed using the numerical method presented in this paper, is relevant for current efforts to prepare largely delocalized macroscopic quantum states of large masses <cit.>. We remark that, feasibility-wise, purely quartic potentials are not ideal since the time scale needed to generate the interference pattern shown in fig:4, that is t /η≈ 1.56, is for η≫ 1 much larger than the average collision time with a single gas molecule at ultra-high vacuum <cit.>. This is one of the main reasons motivating our recent proposal <cit.>, which is also analyzed with the numerical method presented in this paper, where we use a double-well potential such that the inverted harmonic term exponentially speeds up the dynamics <cit.>. § CONCLUSIONS In this paper we have presented a numerical method that can simulate nonlinear open quantum dynamics, even for potentials in which the quantum state expands several orders of magnitude in phase space while exhibiting relevant features at very small sub-Planck scales <cit.>. This regime is of particular interest for designing, optimizing, and understanding protocols that generate macroscopic quantum states by letting a massive particle evolve in a nonharmonic potential <cit.>. We have demonstrated the power of this method using the dynamics of an initially highly-localized state in a quartic potential. We have shown how in this potential the state position variance grows by several orders of magnitude, and yet its Wigner function exhibits negative features on a scale below the initial zero-point fluctuations. Properly describing such small scales using a regular grid in the original frame would require an impracticable amount of points, a challenge that we overcome by the introduction of the Liouville frame. Our numerical method should be applicable to a broad class of interesting quantum mechanical problems. While any potential () can be considered, the number of derivatives considered in eq:Wigner_FPE_flowing must be finite to allow for a numerical evaluation. Introducing a cutoff to the order of the potential in () should yield accurate results. Other types of noise and decoherence beyond the ones considered in this paper (e.g. stochastic force-gradient) can also be incorporated. While we have considered both time-independent potentials and decoherence rates, the numerical method is inherently time dependent, see eq:exp-propagate, which means that time dependence could be introduced with the corresponding modifications. An advantageous feature of studying quantum mechanics with the Wigner function is that the classical limit can be easily taken, namely taking ħ=0 such that =0 in eq:Wigner_FPE. In this classical limit, the Wigner function in the Liouville frame is only driven by dissipative dynamics. Finally, while we have focused on a one-dimensional problem, the method could be generalized to higher spatial dimensions. In conclusion, the numerical method presented in this manuscript relies on a crucial element: the description of Wigner function dynamics in the Liouville frame eq:Wigner_FPE_flowing. 
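A small sketch of how the visibility defined above could be extracted from a sampled position distribution P(x); the peak-finding heuristic (scipy.signal.find_peaks), the synthetic test pattern, and the variable names are our choices, not the authors'.

```python
import numpy as np
from scipy.signal import find_peaks

def fringe_visibility(x, P):
    """Visibility (P_max - P_min)/(P_max + P_min) around the largest interference peak."""
    peaks, _ = find_peaks(P)
    troughs, _ = find_peaks(-P)
    main = peaks[np.argmax(P[peaks])]                          # largest maximum
    neighbour = troughs[np.argmin(np.abs(x[troughs] - x[main]))]  # closest minimum
    P_max, P_min = P[main], P[neighbour]
    return (P_max - P_min) / (P_max + P_min)

# Toy example: a fringe pattern under a Gaussian envelope.
x = np.linspace(-50, 50, 4001)
P = np.exp(-(x / 20)**2) * (1 + 0.6 * np.cos(2 * np.pi * x / 21.2))
P /= np.trapz(P, x)
print(fringe_visibility(x, P))   # fringe contrast of this synthetic pattern
```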
We emphasize that this frame proves to be highly valuable not only in practical terms but also from a conceptual standpoint, as it clearly unveils the impact of quantum physics in the mechanical motion of a particle. We would like to thank Christoph Dellago, Lukas Einkemmer, Daniele Giannandrea, Max Innerbichler, Talitha Weiss and the Q-Xtreme synergy group for helpful discussions. This research has been supported by the European Research Council (ERC) under the grant agreement No. [951234] (Q-Xtreme ERC-2020-SyG) and by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. [863132] (IQLev). PTG was partially supported by the Foundation for Polish Science (FNP). § DETAILS ON THE NUMERICAL METHOD In this appendix we detail all the steps we use to solve eq:Wigner_FPE_flowing numerically. §.§ Explicit expression for the PDE The first step is to obtain an explicit expression for e^- t + e^ t. In order to find how any operator transforms under e^- t e^ t, one can apply the transformed operator to an arbitrary function (,) and identify which operator produces the same result. It is useful to recall that by virtue of the Liouville theorem we know how e^± t acts on an arbitrary function (,), namely e^± t f(,) = f((,,∓ t), (,,∓ t)). In our case, the operators that appear in + are , and derivatives with respect to . For and , making use of eq:arbitrary_liouville one finds that these operators transform according to e^- t e^ t = (,,t) and e^- t e^ t = (,,t). Similarly, one finds that the derivative with respect to transforms according to the chain rule as e^- t e^ t = 1 + 1. where we introduce the following shorthand notation n = . [^n(,,-t)]^n|_=(,,t) = (,,t) and n = . [^n(,,-t)]^n|_=(,,t) = (,,t). Note that these are scalar functions of , and t, and they are the derivatives with respect to initial conditions of the classical trajectories starting from the point ((,,t),(,,t)) propagated backwards in time for a time t. Explicitly, for n=1 they correspond to the following limit 1 = lim_ε→ 0((,,t),(,,t)+ε,-t) - ((,,t),(,,t),-t)/ε = lim_ε→ 0((,,t),(,,t)+ε,-t) - /ε. For higher order derivatives one finds expressions corresponding to multiple applications of the chain rule. Namely, for the second order derivative with respect to one has e^- t[^2]^2 e^ t = 2 + 2 + 1^2 [^2]^2 + 1^2 [^2]^2+ 2 11[^2]∂, whereas for the third order derivative one has e^- t[^3]^3 e^ t = 3 + 3 + 3 12[^2]^2 + 12[^2]^2 + 21 + 12[^2]∂ + 1^3 [^3]^3 + 1^3 [^3]^3 + 3 1^21[^3]^2∂ + 11^2 [^3]∂^2. We only consider potentials () for which fifth and higher order derivatives vanish, and therefore only derivatives up to third order will appear in eq:Wigner_FPE_flowing. For potentials where higher order derivatives are relevant, one could extend our approach to include them. Substituting Eqs. (<ref>), (<ref>), (<ref>) and (<ref>) into eq:Wigner_FPE_flowing yields the explicit equation that we need to solve numerically. It has the following form, [ (,,t)]t = ∑_n,m=0^n+m ≤ 3_nm(,,t) [^n+m (,,t)]^n ∂^m, where the explicit expressions for the coefficients are given by _00(,,t) = _10(,,t) = 1 + ħ^2/2^22 - ħ^2/12^(3)() 3 _01(,,t) = (,,t) 1 + ħ^2/2^22 - ħ^2/12^(3)() 3 _20(,,t) = ħ^2/2^21^2 - ħ^2/4^(3)() 12 _02(,,t) = ħ^2/2^21^2 - ħ^2/4^(3)() 12 _11(,,t) = ħ^2/2^211 - ħ^2/4^(3)() 12 + 12 _30(,,t) = - ħ^2/12^(3)() 1^3 _03(,,t) = - ħ^2/12^(3)() 1^3 _21(,,t) = - ħ^2/4^(3)() 1^2 1 _12(,,t) = - ħ^2/4^(3)() 1^2 1. In order to simplify notation, here and hereafter we use ^(i)() for the i-th derivative of evaluated at . 
Also, note that we use and as a shorthand for (,,t) and (,,t) to simplify the expressions, but they still depend on , and t. §.§ Discretization of the PDE Now that we have an explicit expression for the equation we need to solve, we need to discretize it to allow for numerical simulation. In order to do so we describe in a regular grid which contains N=N_× N_ points which we denote by (_i,_i). Then, we denote the values of in each of these grid points by _i,j=(_i,_j). Next, we express the derivatives with respect to and in eq:Wigner_FPE_flowing_FULL_appendix in terms of finite difference schemes. In particular, we use a second-order centered finite difference scheme, which we list below for the first, second and third order derivatives. First, for the first order derivatives they read [_i,j] = _i+1,j-_i-1,j/2h_, [_i,j] = _i,j+1-_i,j-1/2h_. Next, for the second order derivatives one has [^2_i,j]^2 = _i+1,j+_i-1,j-2_i,j/h_^2, [^2_i,j]^2 = _i,j+1+_i,j-1-2_i,j/h_^2, [^2_i,j]∂ = _i+1,j+1+_i-1,j-1 -_i-1,j+1-_i+1,j-1/4h_ h_. Finally, the expressions for the third order derivatives are given by [^3_i,j]^3 = _i+2,j-2_i+1,j+2_i-1,j-_i-2,j/2h_^3, [^3_i,j]^3 = _i,j+2-2_i,j+1+2_i,j-1-_i,j-2/2h_^3, [^3_i,j]^2 ∂ = _i+1,j+1 + _i-1,j+1 - 2_i,j+1 +2_i,j-1 - _i+1,j-1 - _i-1,j-1/2h_^2h_, [^3_i,j]∂^2 = _i+1,j+1 + _i+1,j-1 - 2_i+1,j + 2_i-1,j - _i-1,j+1 - _i-1,j-1/2h_ h_^2. After substituting all the derivatives in eq:Wigner_FPE_flowing_FULL_appendix by their finite difference versions [see Eqs. (<ref>)–(<ref>)], the right hand side of the equation is given by a linear combination of _i,j with different indices i,j. Explicitly, one has [_i,j(t)]t = ∑_α𝒟_(i,j),(α)_α(t) where α runs over the following 13 indices (i,j),(i± 1,j),(i,j± 1),(i± 1,j± 1),(i± 1,j∓ 1),(i± 2,j) and (i,j± 2). Collecting the values of _i,j in the N-dimensional vector (t) with components _k=iN_ +j(t) = _i,j(t) indexed by k=0,1,…,N-1 allows us to write eq:Wigner_FPE_flowing_discrete as [ (t)]t = (t) (t), where (t) is a N× N matrix. From this equation one can derive an expression to propagate the solution in time given by (t + Δ t) = exp∫_t^t+Δ t(t') dt' (t) ≈exp(t) Δ t (t) where the approximation assumes that Δ t is small enough such that (t') varies slowly enough between t and t+ Δ t. The entries of the (t) matrix can be found by inspection after replacing the derivatives in eq:Wigner_FPE_flowing_FULL_appendix by their finite difference versions [see Eqs. (<ref>)–(<ref>)]. For instance the matrix entry corresponding to the index (i,j),(i+1,j) reads 𝒟_(i,j),(i+1,j)(t) = _10(_i,_j,t)/2h_ + _20(_i,_j,t)/h^2_ - _30(_i,_j,t)/h^3_ - _21(_i,_j,t)/h_ h^2_. Notice that each row of (t) will only have 13 entries different from zero, which means that (t) will be sparse. This is due to the fact that finite differences only relate points with up to second order neighbours. One could have chosen higher-order finite differences, in which case there would me more nonzero entries in each row of (t). However, we found that increasing the finite differences from second to fourth order didn't yield any significant improvement in the accuracy of our solution. Finally, note that in order to fully define (t) one needs to specify the boundary conditions. We use periodic boundary conditions since they provide a more stable simulation than zero-value boundary conditions. In particular, we identify the right and top edges of the grid with the left and bottom edges respectively. Explicitly, we identify i=N_ with i=0, and j=N_ with j=0. 
§.§ Efficient computation of the 𝒟 matrix As one can see in eq:example_D, obtaining the numerical value for the different entries of the (t) matrix requires evaluating all _mn(_i,_j,t) [see Eqs. (<ref>)–(<ref>)] in each point of the grid. In turn, this requires the values of (_i,_j,t) and (_i,_j,t) as well as the derivatives n and n [see eq:inverse-deriv] up to n=3 at every grid point (_i,_j) and for all instances of time considered in the finite differences approach. Since an analytical formula for the classical trajectories is generally not available for nonharmonic potentials we evaluate them numerically. We obtain (_i,_j,t) and (_i,_j,t) by propagating in time the classical equations of motion eq:classical_ODE with each grid point (_i,_j) as initial condition. To ensure stability over long integration times we use a symplectic method <cit.>. In particular, we use the 4-th order method described in <cit.>. To obtain the derivatives of the inverse mapping n and n, we use an approach consisting of two steps. First, we compute the derivatives of the direct mapping as solutions to differential equations, which allows us to benefit from the properties of the symplectic method used above. Second, we use these values to compute n and n through the relation between the direct and inverse mapping. Using these steps is more efficient than a direct numerical evaluation of these derivatives in terms of limits such as the one shown in eq:inverse_derivative_limit. In the following we describe these two steps in detail. By taking derivatives with respect to and in eq:classical_ODE one can obtain the equation of motion for the derivatives we need. Note that we use and as a shorthand for (,,t) and (,,t) respectively. Specifically, taking the derivative with respect to on eq:classical_ODE yields the differential equations for ∂_(,,t) and ∂_(,,t) ∂_t [] = 1/[], ∂_t [] = -^(2)() []. The initial conditions are given by ∂_(,,0) = 1 and ∂_(,,0)= 0. They stem from the fact that, at time t=0, (,,0)= and (,,0)=. Similarly, taking the derivative with respect to yields a similar equation for ∂_(,,t) and ∂_(,,t), ∂_t [] = 1/[], ∂_t [] = -^(2)() [], with initial conditions ∂_(,,0) = 0 and ∂_(,,0)= 1. By taking more derivatives, one can obtain equations for the higher order derivatives. The second order derivatives with respect to initial conditions fulfill ∂_t [^2]^2 = 1/[^2]^2, ∂_t [^2]^2 = -^(3)() []^2 -^(2)() [^2]^2, ∂_t [^2]^2 = 1/[^2]^2, ∂_t [^2]^2 = -^(3)() []^2 -^(2)() [^2]^2, ∂_t [^2]∂ = 1/[^2]∂, ∂_t [^2]∂ = -^(3)() [][] -^(2)() [^2]∂, with all the initial conditions being zero. The third order derivatives fulfill ∂_t [^3]^3 = 1/[^3]^3, ∂_t [^3]^3 = -^(4)() []^3 -3 ^(3)() [][^2]^2 -^(2)() [^3]^3, ∂_t [^3]^3 = 1/[^3]^3, ∂_t [^3]^3 = -^(4)() []^3 -3 ^(3)() [][^2]^2 -^(2)() [^3]^3, ∂_t [^3]^2∂ = 1/[^3]^2∂, ∂_t [^3]^2∂ = -^(4)() []^2[] -^(3)() [^2]^2[] -2 ^(3)() [][^2]∂ -^(2)() [^3]^2∂, ∂_t [^3]∂^2 = 1/[^3]∂^2, ∂_t [^3]∂^2 = -^(4)() [][]^2 -^(3)() [][^2]^2 -2 ^(3)() [][^2]∂ -^(2)() [^3]∂^2, again with all the initial conditions being zero. Note that and appear explicitly in all the equations. Similarly, ∂_, ∂_, ∂_ and ∂_ appear in the equations for the second and third order derivatives, and the second order derivatives appear in the equations for the third order derivatives. This means that in order to solve the equations for higher order derivatives, the values for all the lower derivatives are needed as an input. 
Even more, not only the values at each time t being considered are needed, but also the values at the 4 intermediate time steps in the 4-th order method <cit.> that we use. In order to be memory efficient, we do not use a separate solver for each equation, but rather use a single solver for all the equations that correctly uses all the previously computed values in the right sequence. Finally, we need to relate these derivatives to the derivatives of the inverse map n and n. For the first order derivatives, the key observation is that the Jacobian matrix of the map (,,t) ≡[ [ (,,t)] [ (,,t)]; [ (,,t)] [ (,,t)], ]. is by construction the inverse of the Jacobian matrix of the inverse map (,,t) = [ ∂_ 1; ∂_ 1 ]. Using this fact, we compute (_i,_j,t) for each point in the grid at each time step, and then obtain by inverting the matrix. Explicitly, we use the following formula (_i,_j,t) = ^-1(_i,_j,t). One can show that the determinant of both and is constant and equal to one, and therefore, computing this inverse is straightforward. Similar relationships exist for higher order derivatives, which we derive below. In order to simplify the expressions in the following, we will define the vector = (,) and the vector function (,t) = ((,t),(,t)). Finally, we will define a new set of variables = (,) which are related to through the classical trajectories as = (,t) or equivalently = (,-t). Using this notation, we can express the Jacobian matrices discussed above as ^i_j = [_i]_j and ^i_j = [_i]_j, and their relationship of being the inverse of each other as ∑_k ^i_k ^k_j = δ_ij. Now, to derive a relation for the second order derivatives, we start by defining the Hessian tensor and inverse Hessian tensor respectively as ^i_jk = [^2_i]_j ∂_k and ^i_jk = [^2_i]_j ∂_k where i,j,k can be either 1 or 2. Next, we expand the following expression using the chain rule 0 = [^2 _i]_j∂_k = _k∑_l [_i]_l[_l]_j = ∑_l [_i]_l[^2_l]_j∂_k + ∑_l,m[^2_i]_l ∂_m[_l]_j[_m]_k. Then, using the properties of the Jacobian matrices, we can rewrite the expression above as 0 = ∑_l ^i_l ^l_j,k + ∑_l,m^i_l,m^l_j ^m_k or ^i_j,k = - ∑_n,l,m^n_l,m^i_n ^l_j ^m_k. One can then use this expression to obtain the values of in terms of (which we compute by solving the differential equations described above) and the values of that we already computed. For the third order derivatives one can proceed in a similar fashion. One defines the tensors ^i_jkα = [^3_i]_j ∂_k ∂_α and ^i_jkα = [^3_i]_j ∂_k ∂_α and takes yet another derivative with respect to _α in eq:second-order-trick. Then, proceeding in a similar way, one finally arrives at the expression ^i_j,k,α = -∑_n,l,m,β^n_l,m,β^i_n ^l_j ^m_k ^β_α - ∑_n,l,m^n_l,m^i_n ^l_j,k^m_α + ^l_j,α^m_k + ^l_k,α^m_j . In summary, our numerical approach to solve eq:Wigner_FPE_flowing_FULL consists of the following steps for each time step Δ t. First, propagate in time the classical trajectories, and its derivatives with respect to initial conditions, for each point in the grid. Second, use these derivatives to compute the corresponding derivatives of the inverse map. Third, use all these newly computed values to generate the matrix (t). Finally, use eq:exp-propagate to compute at the new time step in terms of the values at the previous time step. Repeating this procedure allows us to propagate in time. We implemented all these steps by developing our own simulation code in , and .
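To summarize the per-time-step procedure just outlined, here is a schematic driver loop; `propagate_trajectories_and_jacobians` and `assemble_D` are placeholders standing in for the routines described in this appendix, the use of scipy.sparse.linalg.expm_multiply is only one possible way to apply the short-time propagator, and the whole sketch is our own scaffolding rather than the authors' code.

```python
from scipy.sparse.linalg import expm_multiply

def propagate_wigner(w0, grid, dt, n_steps,
                     propagate_trajectories_and_jacobians, assemble_D):
    """Evolve the flattened Liouville-frame Wigner vector by repeatedly applying
    w(t + dt) = exp(D(t) dt) w(t).  The two callables are placeholders for the
    classical trajectory/Jacobian propagation and the finite-difference assembly."""
    w = w0.copy()
    state = None  # classical trajectories and their derivatives w.r.t. initial conditions
    for step in range(n_steps):
        t = step * dt
        state = propagate_trajectories_and_jacobians(grid, t, dt, state)
        D = assemble_D(grid, state, t)       # sparse N x N finite-difference matrix
        w = expm_multiply(D * dt, w)         # one short-time step
    return w
```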
http://arxiv.org/abs/2306.11612v1
20230620154144
Visual Analysis of Large Multi-Field AMR Data on GPUs Using Interactive Volume Lines
[ "Stefan Zellmann", "Serkan Demirci", "Uğur Güdükbay" ]
cs.GR
[ "cs.GR", "cs.IR", "cs.PF" ]
We propose an interactive implementation of Weissenböck et al.'s <cit.> dynamic volume lines (DVLs). DVLs visualize ensemble volumes in 1D as a set of polylines. While ensemble or multi-field volume rendering may suffer from visual clutter and self-occlusion, DVLs present a viable alternative for visual exploration or can augment an existing 3D volume visualization. A fundamental problem with DVLs and similar plots is that many cells of the volume map to only a few pixels of the output viewport, leading to overdraw. The number of cells can be several orders of magnitude higher than the number of pixels that the line segments of the polylines project to; instead of a linear mapping of cells to pixels, DVLs therefore compress regions where the data is not as interesting. Weissenböck et al. concentrate on the visual analytics aspects of DVLs and have proven their efficacy to this end, yet the authors' work focused on smaller structured-regular volumetric data sets (on the order of 64^3 cells). When scaling to larger volume sizes and unstructured or hierarchical grid types, interactively computing DVLs becomes a challenge we address in this paper. Another aspect that remains unexplored by Weissenböck et al.'s work is that the spatial arrangement (and hence the local variation) of the volume ensemble changes when the alpha transfer function of an ensemble member is updated. However, interactive transfer function updates are an essential aspect of scientific volume visualization, and the data structures and algorithms involved in computing and updating DVLs must be carefully chosen not to prohibit this type of interaction. We, therefore, concentrate on the performance and interactivity aspects of computing dynamic volume lines on the GPU. More specifically, we contribute * an extension of dynamic volume lines for AMR volumes where the cell size depends on the refinement level, * a GPU implementation that allows us to interactively update the volume lines in the presence of user-editable transfer functions per ensemble member, and * an application that allows interaction with the 3D view and the 1D plot through brushing and linking. An overview of the visualizations our system supports is given in <ref> (here exemplified using a multi-field data set). § BACKGROUND AND RELATED WORK This section reviews related works on large-scale volume visualization and adaptive mesh refinement (AMR) data. Additionally, we provide a background summary of the dynamic volume lines method by Weissenböck et al. <cit.> as our main related work on the visual analytics side. §.§ Large-Scale Volume Data Our paper concentrates on large volume data. While our prototype does include a 3D rendering component, in this section we focus on data representation more than on the rendering side. Although large structured volumes are still commonplace in some areas <cit.>, unstructured or hierarchical representations are ubiquitous in the computational sciences. While unstructured meshes <cit.> are pretty standard, many codes use adaptive mesh refinement (AMR) <cit.> to concentrate the computation on the relevant regions in space. The resulting data can be block-structured, overlapping grids, Octrees, or similar hierarchies.
Recent challenges with AMR visualization include smooth interpolation in 3D <cit.>, GPU acceleration structures <cit.>, and time-dependent data <cit.>. A common approach for representing AMR data is the one adopted by Wald  <cit.>, where AMR cells “snap” to the logical grid; that hypothetical uniform grid has a resolution that when resampling the volume, the finest AMR cells occupy exactly one logical cell. Each AMR cell is unambiguously defined by its lower corner on the logical grid, and its refinement level L. By defining 0 to denote the finest level, the cell size can be computed as C_w = 2^L. This representation omits the AMR hierarchy itself. One common way to organize volumes is through space-filling curves (e.g., Morton <cit.> or Hilbert codes <cit.>). In 3D rendering, the main incentive for that, as in the works cited here, is to build acceleration structures. Generally, space-filling curves cluster cells in 1D that are also nearby in 3D. §.§ Dynamic Volume Lines We extend the dynamic volume lines (DVL) method by Weissenböck  <cit.> to support large-scale AMR data. We provide a summary of their method in this section. DVLs present volumes as 1D plots where the x-dimension maps to cell IDs and the y-dimension maps to intensity. The method is restricted to structured-regular volumes, i.e., all cells are cubes/voxels of the same size. One way to assign x-values is in a row- or column-major order. This approach loses spatial locality, as cells are only grouped if they are neighbors on the same row (or column); yet when they are adjacent in the vertical or depth direction, they are likely to be further apart in 1D due to the row (or column) sized stride. Space-filling curves can provide better proximity-preserving mappings. Weissenböck  use Hilbert curves. Cells are represented by their centroids, which are quantized, e.g., to 20-bit per dimension, so that a 64-bit bitmask can represent them. The Hilbert codes represent the quantization grid cell that the centroids map to. An obvious problem when plotting volumes in 1D is that there are several orders of magnitude more cells than pixels in the x-dimension. Using a linear mapping is wasteful because it can result in homogeneous or empty regions represented as straight lines that convey no useful information. Weissenböck 's objective is to compare ensembles of volumes. Interesting features are determined by comparing corresponding points of the ensemble. The authors propose scaling the cells along the x-axis using per-cell importance. However, their focus is on grayscale volumes and not on alpha transfer functions as we do. The difference between the maximum and minimum intensity of the whole ensemble gives the per-cell local variation of the ensemble: V_h = max_∀ m ∈ M(I(m,h)) - min_∀ m ∈ M(I(m,h)), where h is the cell's Hilbert code, M is the ensemble of volumes, and I(m,h) is the intensity of ensemble member m and the AMR cell that corresponds to h. From that, the authors obtain the local importance scale for the x-coordinate: f(h) = (V_h/max(V_h))^P. P is a user-defined parameter to control the steepness of the resulting curve; a minimum importance is enforced (set to 0.025 by the authors). Computing a prefix sum of floating point values over the per-cell importance and quantizing that on the 1D grid given by the width of the plot area provides x positions to plot the cells as line segments. This nonlinear mapping compresses regions with low data variation, devoting more screen space to regions with high data variation. 
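As an illustration of the centroid-quantization and cell-ordering step described above, the following sketch sorts cells along a space-filling curve; for brevity it uses a Morton (Z-order) code instead of the Hilbert codes used by the cited works, and the bit widths and helper names are our assumptions.

```python
import numpy as np

def spread_bits(v):
    # Spread the low 21 bits of v so that each lands on every third output bit.
    v &= (1 << 21) - 1
    v = (v | (v << 32)) & 0x1F00000000FFFF
    v = (v | (v << 16)) & 0x1F0000FF0000FF
    v = (v | (v << 8))  & 0x100F00F00F00F00F
    v = (v | (v << 4))  & 0x10C30C30C30C30C3
    v = (v | (v << 2))  & 0x1249249249249249
    return v

def morton3d(ix, iy, iz):
    # 3D Z-order code from three quantized integer coordinates.
    return spread_bits(ix) | (spread_bits(iy) << 1) | (spread_bits(iz) << 2)

def order_cells(centroids, bounds_lo, bounds_hi, bits=20):
    """Quantize world-space centroids onto a 2^bits grid per axis and return the
    permutation that orders cells along the space-filling curve, plus their codes."""
    q = ((centroids - bounds_lo) / (bounds_hi - bounds_lo) * ((1 << bits) - 1)).astype(np.int64)
    codes = np.array([morton3d(int(a), int(b), int(c)) for a, b, c in q])
    return np.argsort(codes), codes
```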
The concept of exploring spatial or volume data in 1D also inspired other works. Franke  <cit.>, e.g., use 1D plots to conserve neighborhood relations of geospatial regions. Zhou  <cit.> use the minimum spanning tree of a circuit graph over structured-regular volume ensembles to generate similar 1D plots as the volume lines our paper focuses on. In contrast to using Hilbert codes directly—and similar to DVLs with their importance-based nonlinear mapping—Zhou 's approach is also data-driven. § METHOD We extend Weissenböck 's dynamic volume lines to support multi-field AMR data. In contrast to Weissenböck , we assume that the intensities I(m,h) come from an RGBα transfer function. They do not focus on interactive parameter updates, such as transfer function, exponent P, and minimum importance; they mention that Hilbert code computation takes several seconds for 64^3 cell data sets. While Weissenböck  focused on volumes a couple of Megabytes in size, we target Gigabyte-sized data. §.§ Extension to AMR To compute DVLs for AMR data, we need to extend the Hilbert code and x-coordinate generation to support non-uniformly sized cells. First, we note that Weissenböck  assume that cells have uniform size; even if the cells were non-uniform, the Hilbert codes do not reflect this because they only provide the cells' order and not their spacing. To account for coarser cells to also span a wider region of space as in a 3D rendering, we need to consider each cell's size when computing the nonlinear x-axis scale. For that, we extend <ref> to include the AMR cell width as follows: f(h) = (V_h/max(V_h) 2^L_h)^P, where L_h∈ℕ_0 is the AMR level of the cell corresponding to Hilbert code h, and 0 is the finest level. Another sensible choice would be to not scale the cells by their (uniform) width but by their volume. We deliberately apply the cell-size dependent scale when computing the importance, not the x-positions themselves, because the order of operations is not commutative. We compute the floating point prefix sum over the importance values: F(h) = ∑_i=0^hf(i), to obtain x-coordinates xf_1 = F(h-1)/F(max(h))× W, xf_2 = F(h)/F(max(h))× W for plots of size W pixels. §.§ Projecting Cells to 1D We are now able to compute pairs (xf_1,xf_2) of x-coordinates per AMR cell; we note that these x-coordinates have sub-pixel accuracy, and although we apply the nonlinear mapping using cumulative importance, in general, a multitude of x-coordinates will project to single pixels. We now discuss how to map these coordinate pairs to obtain x-coordinates per horizontal pixel x ∈ W and how to obtain “y-values” for these. Given such pairs (x,y), we can draw line strips with control points per pixel in the x dimension. For that, we create a set of W bins—one for each pixel in the plot's x-dimension—whose values we initialize to 0. We then project the pairs (xf_1,xf_2) to integer coordinates in the range [0,W-1], iterate over these, and increase the overlapping bins by the value of the corresponding AMR cell. We also maintain a per-bin counter that we increment whenever we increase the bin value. After all the pairs (xf_1,xf_2) are processed, we iterate over the bins and divide each by its bin counter, obtaining the average intensity value of all the AMR cells that project to the bin. This procedure borrows from the basis function method by Wald  <cit.>, only that we use a box-shaped basis function instead of the tent-shaped basis used by Wald. 
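The projection step just described can be summarized by the following Python sketch. It is a serial CPU reference with names of our choosing, not the CUDA kernel itself, which performs the bin and counter updates with atomic adds. The coordinates xf_1, xf_2 are assumed to come from the prefix sum above, with the 2^L cell-size factor already folded into the importance.

```python
import numpy as np

def project_to_bins(xf1, xf2, cell_values, plot_width):
    """Project per-cell x-spans onto W pixel bins (box-shaped basis).

    xf1, xf2:    per-cell sub-pixel x-coordinates from the prefix sum
    cell_values: per-cell scalar value of the ensemble member being plotted
    Returns per-bin averages; bins that no cell projects to stay at zero.
    """
    bins = np.zeros(plot_width)
    counts = np.zeros(plot_width)

    lo = np.clip(xf1.astype(int), 0, plot_width - 1)
    hi = np.clip(xf2.astype(int), 0, plot_width - 1)

    for value, b0, b1 in zip(cell_values, lo, hi):
        # Every bin the cell's span overlaps receives the cell's value
        # (the GPU version does this with atomic adds).
        for b in range(b0, b1 + 1):
            bins[b] += value
            counts[b] += 1

    nonzero = counts > 0
    bins[nonzero] /= counts[nonzero]  # average of all cells hitting the bin
    return bins
```

The transfer function is then applied to these per-bin averages to obtain the y-values and colors of the polyline control points.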
Pairs (xf_1,xf_2) that span multiple bins (hence multiple pixels in the x-dimension) will noticeably turn the plot into a step function. As for our data, many line segments will map to single bins only; we do not consider this to be an issue, and we here choose simplicity over generality. §.§ Interactive Transfer Function Updates We extend Weissenböck 's method to support interactive RGBα transfer functions that apply to 1D and 3D rendering alike. The requirement that transfer function updates be interactive implies that (re)generating volume lines must also be interactive. Transfer functions are applied on two occasions: once when computing the importance (cf. <ref>), which requires intensities from the transfer function, and once per bin, after dividing the bin values by their basis weights. We normalize the input (field) intensity and compute RGBα values to apply the transfer function. The alpha value determines the height of the bins and y-values of the polyline at these positions. The RGB value (or, alternatively, a uniform color from a global map) is used to colorize the polylines. §.§ Computing the Maximum Local Ensemble Variation The term max(V_h) from <ref>, the maximum of the local ensemble variations for each Hilbert code h, needs to be recomputed whenever an ensemble member's transfer function changes. This can be implemented on GPUs using parallel over all cells or a kernel atomically updating a single value in GPU main memory. To avoid this costly operation, we compute this value using the transfer functions and field data ranges, resulting in a more conservative, yet in practice very close approximation to max(V_h). We compute ranges [i_m,j_m] where i_m,j_m ∈ [0,N-1] and i_m ≤ j_m for each ensemble member m ∈ M and transfer function size N. These allow us to iterate only over the transfer function values present in the data. That way, we compute the global range [i,j]: i = min_∀ m ∈ M(i_m), j = max_∀ m ∈ M(j_m), and from that V_a = max_∀ m ∈ M(A(m,a)) - min_∀ m ∈ M(A(m,a)) ∀ a ∈ [i,j] as an approximation to V_h. In <ref>, the term A denotes a lookup to the transfer function to retrieve the alpha value. §.§ Brushing and Linking We connect the DVL and 3D views using brushing and linking. When selecting regions of interest (ROIs) in the 1D plot, the corresponding cells in the 3D view are highlighted (cf. <ref>). We use the ROIs' first and last Hilbert codes as selection ranges for that. In the 3D shader, the ROIs also manifest as Hilbert codes and not as lists of cells in world space. This compact representation comes at the expense of transforming the center of the cell that we are sampling to Hilbert space to test it against the ROIs. In the case of structured-regular volumes, finding the cell bounds is simple; since the cells have the same size, the corresponding Hilbert code implicitly allows us to derive their (uniform) size. For AMR data, the cell centroid's Hilbert code is not sufficient; instead, when we check if a sample falls inside an ROI, we must explicitly locate cells to know the ROI's exact size. § GPU IMPLEMENTATION WITH CUDA We implement what we call interactive volume lines (IVLs)—the GPU-accelerated version of dynamic volume lines—using NVIDIA CUDA. The CUDA kernels involved are shown in <ref>, corresponding to the algorithm phases from <ref>. We now discuss how to assemble these building blocks and identify their bottlenecks. The Hilbert codes and the order of the 1D AMR cells never change once they are established. 
What does change in user interaction is the spacing between cells that we need to recompute interactively. We compute the term V_a from <ref> on the CPU since the transfer function color map and other parameters are passed to our application through the host in any case and because the amount of computation needed does not necessitate running this operation in parallel. With V_a computed, execution transitions to the GPU. A kernel with one thread per AMR cell ( in <ref>) computes the importance from <ref>. The kernel loops over each field, assigning each cell its transfer function and cell-size-dependent importance from <ref>. This requires exclusive read and write accesses (EREW) only. For <ref>, we use the algorithm from the CUB library ( in <ref>). GPU prefix sums can be realized with EREW accesses <cit.>. We run another kernel with one thread per cell ( in <ref>) to determine the subpixel x-coordinates xf_1,xf_2 from the prefix sum array and update the bins and weights (cf.<ref>). The number of bins is much smaller than the number of AMR cells for typical data sizes. We use CUDA atomic operations for the projection; hence, this kernel is not EREW. Due to the input size and the atomics, this is the most costly kernel of the algorithm. The and kernels (cf.<ref>) are EREW. They divide the bin values by their weight (cf.<ref>) and apply the transfer function to obtain y-coordinates. Both kernels use W (number of bins) threads. We finally run a kernel ( in <ref>) rasterizing the polylines using CUDA surfaces. We implemented modes to draw the IVLs as polylines (<ref>b) or as bar charts (<ref>c). § PROTOTYPICAL USER INTERFACE To implement a prototype, we started with the open-source code of Zellmann  <cit.> and added multi-field support and a 1D user interface with Dear ImGUI <cit.>. We show the user interface demonstrating brushing and linking in <ref>. We note that the application is at an early stage and does not implement all the features proposed by Weissenböck  <cit.>, such as mouse-over handling or support for the histogram heatmap and functional boxplots. Fundamentally, the computations required are of the same order as those required for the IVLs, so implementing them remains an engineering exercise. § EVALUATION We evaluate our method on an Intel Xeon system with 64 GB RAM and an NVIDIA A6000 GPU. We use the , , , and fields of the Molecular Cloud data set by Seifried  <cit.>, which was simulated with FLASH <cit.>. Technically, the data is multi-field and not an ensemble; the fields are correlated. The data set spans four AMR levels with a total of 35.8 M cells. If not noted otherwise, we set P=1 (cf.<ref>) and the minimum importance to 0.025. GPU memory for the whole data set—including auxiliary data (our application uses OptiX to accelerate 3D rendering)—is reported by to amount to 3.6 GB. We present kernel execution times in ms. in <ref>. The algorithm is bottlenecked by the kernel (projection from cells to bins using atomics), executed once per field. We test if the performance of the kernel depends on the input parameters. While keeping the other parameters fixed, we vary P in 0.0-5.0 (<ref>, left), and the minimum importance in 0.0-0.25 (<ref>, right). We observe that there is a measurable, yet very subtle (1-2%) trend that parameters that favor higher IVL compression lead to slightly faster execution times. We count the 's to determine their costs, and report the bin minima and maxima as well as quartiles for the same test from before (cf.<ref>). 
We observe that when P and the minimum importance increase, the number of 's per bin becomes more uniform. We never encountered single threads that run extraordinarily long. In fact, during development, we tested if parallelizing the kernel using one per field lowered total execution time, but found that it fully occupies the GPU at all times. We finally compute the difference between the exact max(V_h) used by Weissenböck 's <cit.> vs. our approximation from <ref>, using a sweep of 107 randomly chosen different transfer function configurations. We report the results in <ref>. § CONCLUSIONS AND FUTURE WORK We presented interactive volume lines, a high-performance variant of Weissenböck 's <cit.> dynamic volume lines. For non-trivial volume data (e.g., unstructured or AMR), visual analytics performed on the whole set of cells becomes a complex data handling task. We presented a carefully crafted GPU implementation of this algorithm. For large-scale volumes, as demonstrated in this paper, it does matter if an operation is performed on the whole set of cells, on bins of a 1D grid in screenspace, or on the RGBα transfer function array, and which type of write accesses are performed. The GPU control flow we eventually developed resulted in a balance between faithfully recreating the algorithm and avoiding severe implementation bottlenecks. Our paper points to future work. One conceivable optimization is to make the kernel that the algorithm is bottlenecked on become hierarchical over the input cells to perform the projection in more local memory regions. This is one of many interesting similarities between optimizations used for 3D volume rendering (e.g., acceleration structures like the one by Wald  <cit.>) and optimizations that apply to IVLs. In fact, we found that on the engineering side, similar abstractions apply in 1D and 3D alike (e.g., basis functions used for interpolation, 1D bins that resemble pixels, etc.). One less obvious future work lies in the algorithm's GPU memory consumption: 1D and 3D rendering share neither the data itself nor other auxiliary data structures that accelerate rendering. Instead, the whole input data is replicated in memory. It would be interesting to explore if the two rendering modes could share some or even all the data to overcome this limitation. Another open question (that this paper does not seek to answer) is how useful volume lines are for large volumes; our intuition is that they work best for smaller data, to reduce noise and outliers in the 1D plots. A formal evaluation is out of scope here. We note though that an obvious extension would be a zoom interaction that, when selecting a ROI through brushing and linking, also zooms in on the 1D plot and devotes more screen space to the selected AMR cells. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—grant no. 456842964. The TAC Molecular Cloud is courtesy of Daniel Seifried. We are grateful to NVIDIA, who kindly provided us with the hardware we used for the evaluation. abbrv-doi
http://arxiv.org/abs/2306.02177v1
20230603191134
Towards Coding Social Science Datasets with Language Models
[ "Christopher Michael Rytting", "Taylor Sorensen", "Lisa Argyle", "Ethan Busby", "Nancy Fulda", "Joshua Gubler", "David Wingate" ]
cs.AI
[ "cs.AI" ]
Researchers often rely on humans to code (label, annotate, etc.) large sets of texts. This kind of human coding forms an important part of social science research, yet the coding process is both resource intensive and highly variable from application to application. In some cases, efforts to automate this process have achieved human-level accuracies, but to achieve this, these attempts frequently rely on thousands of hand-labeled training examples, which makes them inapplicable to small-scale research studies and costly for large ones. Recent advances in a specific kind of artificial intelligence tool - language models (LMs) - provide a solution to this problem. Work in computer science makes it clear that LMs are able to classify text, without the cost (in financial terms and human effort) of alternative methods. To demonstrate the possibilities of LMs in this area of political science, we use GPT-3, one of the most advanced LMs, as a synthetic coder and compare it to human coders. We find that GPT-3 can match the performance of typical human coders and offers benefits over other machine learning methods of coding text. We find this across a variety of domains using very different coding procedures. This provides exciting evidence that language models can serve as a critical advance in the coding of open-ended texts in a variety of applications. § INTRODUCTION The analysis of textual data–from sources like open-ended survey responses, social media posts, and legislative transcripts–has become increasingly important across many disciplines. Traditionally, researchers quantitatively analyzing these text have trained research assistants (mostly undergraduate students) to code the material by assigning numbers and/or categories to text segments. However, such human coding is slow and expensive. Given variability in experience and perception among coders, researchers hire multiple people to evaluate the same texts when possible and then calculate intercoder agreement as a measure of confidence in the coding process. At times, even this repeated coding is not feasible, and researchers rely on a single human coder. While this approach works for small amounts of text, it becomes impractical as a means to to analyze the texts available in an increasingly information-rich world. As a result, many scholars seek automated alternatives. Dictionary-based methods <cit.> work in cases where clearly defined sets of words indicate the presence of particular content but struggle with nuance and generalization <cit.> One solution to this problem uses supervised machine learning (SML) models to code text in the place of humans, such as naive bayes, random forests, and SVMs <cit.>. Unfortunately, all of these require large datasets for training, which typically must be hand-generated by human coders, failing to eliminate the time and expense of using human coders <cit.>. SML methods also require large datasets with a sufficient sample size to train, test, and validate a SML procedure. Unsupervised methods exist - such as structural topic modeling <cit.> - but these still require significant amounts of data and extensive modeling and validation steps. Most importantly, they do not allow researchers to intentionally code specific themes and topics. 
We propose that state-of-the-art artificial intelligence tools, known as language models (LMs), provide a powerful alternative to current techniques for coding texts in the social sciences, as has been done in labeling in other domains and methodologies including stance detection, psychology, and synthetic dataset generation <cit.>. We describe these tools and the application of one - GPT-3 <cit.> - to various coding tasks in political science. We show that GPT-3 performs coding tasks at or exceeding the level of human coders, even when it is given three or fewer labeled examples. We also find that GPT-3 performs comparably to SML procedures, with a fraction of the time and cost of those approaches. § LANGUAGE MODELS In the most basic sense, LMs are a conditional probability distribution p(x_n|x_1,⋯,x_n-1) over tokens or words. LMs generate novel sequences of text by repeatedly sampling from this distribution. Crucially, LMs can be given initial inputs that reduce the probability of some output statements and increase the probability of others. Given the initial input of “Will you please”, a LM might assign high probability to “go” as the next term, and low probability to “fruit”. Changing the context to “Will you eat” switches those probabilities. The use of LMs in social science has recently seen much progress and promise <cit.>. LMs can serve as useful tools in coding texts for at least two reasons. First, LMs are created and trained on massive amounts of human created statements. This means the models come already set up with an extensive understanding of human texts. Second (and relatedly), LMs have few-shot capabilities or the capacity to learn complicated tasks with only a handful of examples. This can almost entirely eliminate the need for hand-coded training data, providing advantages even over SML methods. For our application here, we use GPT-3, one of the largest existing LMs. This language model was released by OpenAI in 2020, has 175 billion parameters, and was trained on more than 45 terabytes of text. In automated content analysis, others have considered different, custom-modified LMs such as BERT <cit.>, BART <cit.>, RoBERTa <cit.>, XLNet <cit.>, and ELMo <cit.>. However, these all require extensive fine-tuning and a similar number of labeled examples as SML methods. As such, we explore GPT-3 as a coding tool with only few-shot learning methods (and no fine tuning) to determine if it provides a more efficient automated coding tool that is more accessible to most social science researchers. § METHODOLOGY To use GPT-3 to code texts, we provide it with a specific prompt designed to teach GPT-3 the coding process. This prompt varies from application to application, as the coding method depends on the specific concepts being coded. Throughout these applications, our goal is to give GPT-3 as little guidance as possible to demonstrate its flexibility and efficiency in learning how to act as a coder. In providing GPT-3 with these prompts, we discovered that the LM responded quite similarly across various versions of our guidance, and that it required only two or three coded examples to perform well on these tasks. For additional information on the process of engineering these prompts, see the Online Appendix. After giving GPT-3 these prompts and observing how it codes a set of data, we compare that coding to a corresponding set of codes generated by humans. This allows us to directly compare the performance of GPT-3 to human coders. 
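As an illustration of this coding procedure, the sketch below assembles a few-shot prompt from a handful of labeled exemplars and selects the category whose first token receives the highest next-token probability. The helper query_top_logprobs stands in for whichever LM API is used to obtain next-token log-probabilities; it, the arrow-delimited exemplar format (mirroring the prompts shown in the Appendix), and the assumption that every category label starts with a unique first token are simplifications for illustration rather than a prescription.

```python
def build_prompt(instructions, exemplars, text):
    """Few-shot prompt: task instructions, labeled exemplars, then the
    text to classify, ending right before where the label should appear."""
    lines = [instructions]
    for example, label in exemplars:
        lines.append(f"{example} -> {label}")
    lines.append(f"{text} ->")
    return "\n".join(lines)


def classify(text, instructions, exemplars, categories, query_top_logprobs):
    """query_top_logprobs(prompt) is assumed to return a dict mapping
    candidate next tokens to log-probabilities (however the LM is accessed)."""
    prompt = build_prompt(instructions, exemplars, text)
    logprobs = query_top_logprobs(prompt)
    scores = {}
    for cat in categories:
        # Score each category by the log-probability of its first token,
        # which is assumed to be unique across categories.
        first_token = cat.split()[0]
        matches = [lp for tok, lp in logprobs.items() if tok.strip() == first_token]
        scores[cat] = max(matches) if matches else float("-inf")
    return max(scores, key=scores.get)
```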
In the case of our last application, we also compare our results to a SML procedure. We make these comparisons based on coding agreement as well as efficiency (in terms of time and cost to code with other techniques). We construct our prompts by providing instructions, categories (if necessary), exemplars (labeled examples of the task), and then the text to classify. We then compute GPT-3's probabilities for the next token over its vocabulary and select the token with the highest probability as the model's coding choice. For color-coded examples of prompts, see Figure <ref>. We evaluate GPT-3's coding performance using various intercoder agreement measures between GPT-3's codes and the codes generated by humans we hired to code the same texts. These are as follows: §.§ Intraclass correlation (ICC) Intraclass correlation measures inter-coder agreement among human coders using numerically ordered, (quasi-) continuous values in their coding (e.g., rating a text by some characteristic on a 1-5 scale). ICC scores are between -1 and 1 and are typically interpreted as follows: <0.5 = poor inter-coder agreement, 0.5-.75 = moderate agreement, 0.75-0.9 = good, and >0.9 = excellent <cit.>. §.§ Joint probability of agreement For tasks with un-ordered, categorical codes, we use two different measures. The first, joint-probability of agreement, measures the probability of any two coders agreeing. In the 2-coder case, where one of the coders is ground truth, this reduces to raw accuracy. Joint probability agreement ranges from 0 to 1. Between two coders, it is calculated as follows: 1/N∑_i=1^N1(y_1,i = y_2,i), where N is the number of instances being coded, and y_1,i, y_2,i are the first coder's and the second coder's respective codings of instance i. In the case of K coders, the joint probability agreement is the mean of the pairwise agreements. §.§ Fleiss' kappa Fleiss' kappa measures the degree to which the proportion of agreement among coders exceeds the agreement of fully random coders <cit.>. Used specifically to quantify intercoder agreement for categorical data, this measure ranges from -1 to 1. When κ = 0, it means that the two raters agree at a rate not better than chance. κ < 0 means increasing agreement worse than chance, and κ > 0 means increasing agreement greater than chance. § EXPERIMENTS We consider GPT's capacity to serve as a coder using data from four datasets: Pigeonholing Partisans (PP), New York Times Headlines (NYT), Congressional Hearings (Congress), and The Guardian Populism (TGP). We chose these datasets to maximize differences in coding tasks as a means of exploring GPT-3's limits. These four applications vary in the difficulty of the coding task, the domain (or topic) of the coding, the structure of the texts, and measurement of the coded variable (ordinal, categorical, binary, etc.). §.§ Pigeonholing Partisans (PP) We first consider the ability of GPT-3 to act as a coder with data on Americans' stereotypes of Republicans and Democrats <cit.>. These data, collected in 2016, asked individuals to list four words or phrases that described typical supporters of the Democratic and Republican Parties.[More methodological details can be found in published discussions of this work. See <cit.>.] 
This procedure is common in psychological studies of stereotypes <cit.>, and allows survey takers to describe partisans in their own words This dataset is too small for other kinds of automated coding and an ideal way to consider how well GPT-3 can classify texts without extensive training sets. To evaluate how well GPT-3 can serve as a coder on these kinds of short, open-ended texts, we recruited 2873 human coders through the survey platform Lucid <cit.> to code a total of 7675 descriptions of partisans. Each description was coded at least three times by a random set of coders, who were given minimal instructions for coding the texts.[These texts include those created by human respondents in the original data as well as texts created by GPT-3 and discussed in other, published work <cit.>. That work indicates that human respondents cannot distinguish between the two kinds of statements.] As such, the coders in this study should be considered "lightly trained" rather than rigorously instructed on the coding. Coders rated the texts along five dimensions: (1) positivity (general positive/negative valence), (2) extremity (extreme or moderate quality of the words), and whether the text mentioned (3) character or personality traits, (4) government or policy issues, or (5) social groups. Each of these domains is important to the theoretical ideas of the original work on partisan stereotypes <cit.>. After the human coding process was complete, we asked GPT-3 to complete a series of coding tasks on all 7675 texts directly analogous those completed by humans. Next, we examined how closely GPT-3 follows individual human coders and human coding in the aggregate, along with how closely humans followed each other. To that end, we calculated ICC scores with these data (Fig. <ref>). As coders are randomly assigned to texts and not all texts are scored by the same coders, we use ICC1k, which accounts for this structure <cit.>. Our focus here is on the increase or decrease in ICC when GPT-3's codes are added to the three human codes. If GPT-3 improves the reliability of the coding, ICC should improve. If it does not offer this benefit, the ICC score should stay the same or decrease. We also compare adding GPT-3's scores to adding simulated scores to ensure that the addition of another coder by itself does not drive what we observe: (1) a coder who codes all texts as 0 (lacking the attribute), (2) a coder who codes all texts as 1 (containing the attribute), (3) a coder who codes randomly, and (4) a coder who codes all texts randomly, but with the same overall distribution as GPT-3's predictions. We also consider the ICC values when comparing GPT-3's codes to the average of the human coders (rather than individual coders separately). The statistics in Figure <ref> suggest that adding GPT-3 as a coder adds a great deal to reliability for two measures (positivity, groups), slightly increases reliability of the coding for two others, (extremity, issues), and reduces reliability in one (traits). Notably, this last area is where human coders correlated the least with each other (correlations between human coders on this domain ranging from 0.07 to 0.08) and may represent a fundamentally challenging task. There is also a stark difference between adding GPT-3 and adding each of the simulated coders. We conclude that the boost in ICC from GPT-3 is not due to simply adding another coder. 
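To make this comparison explicit, the following sketch computes ICC(1,k) from its one-way ANOVA definition and evaluates it for the human codes alone and with each additional coder appended. The data below are random stand-ins purely to show the mechanics, not our results; the sketch also assumes a complete item-by-coder ratings matrix, whereas our actual design assigns coders to texts at random.

```python
import numpy as np

def icc1k(ratings):
    """ICC(1,k): one-way random effects, average of k raters.
    ratings: (n_items, k_raters) array."""
    n, k = ratings.shape
    item_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    msb = k * np.sum((item_means - grand_mean) ** 2) / (n - 1)          # between items
    msw = np.sum((ratings - item_means[:, None]) ** 2) / (n * (k - 1))  # within items
    return (msb - msw) / msb

rng = np.random.default_rng(0)
human = rng.integers(0, 2, size=(500, 3)).astype(float)  # stand-in for 3 human coders
gpt3 = human[:, 0].copy()                                 # stand-in for GPT-3's codes

baselines = {
    "humans only": human,
    "+ GPT-3": np.column_stack([human, gpt3]),
    "+ all-zeros coder": np.column_stack([human, np.zeros(len(human))]),
    "+ all-ones coder": np.column_stack([human, np.ones(len(human))]),
    "+ random coder": np.column_stack([human, rng.integers(0, 2, len(human))]),
    "+ shuffled (GPT-3 marginal) coder": np.column_stack([human, rng.permutation(gpt3)]),
}
for name, mat in baselines.items():
    print(f"{name:>35s}: ICC(1,k) = {icc1k(mat):.3f}")
```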
Furthermore, since adding GPT-3's outputs to the human outputs generally either increases or maintains ICC across each attribute, we conclude that GPT-3 achieves human or better performance at this task. Importantly, achieving this level of performance required neither coding a large-scale dataset (on the order of tens of thousands or more) nor a large, labeled set of training data for the language model. §.§ Comparative Agendas Project (CAP) For a different application of GPT-3 as a coder, we during to the Comparative Agendas Project (CAP) system of coding. CAP provides a coherent framework for documenting media and government attention to various policy issues in a comprehensive set of policy domains <cit.>. CAP datasets aim to be comprehensive, transparent, and replicable <cit.>, with many housed at the CAP website (www.comparativeagendas.net). More than 200 scholars have used CAP to test a vast range of empirical political science theories across more than a dozen countries <cit.>. The CAP master codebook moves beyond the simple coding of the PP data, spanning at least 21 major categories (with others added for some specific applications). In order to succeed here, GPT-3 must produce a high probability for one of a large, unordered, pre-specified set of tokens that corresponds to the specific content of the input data. Prior efforts to automate coding in the CAP framework have met limited success <cit.>. Sebok and Kacsuk <cit.> are able to achieve an 80%+ F1 score on average across categories, but this is reported after culling over 40% of their dataset due to difficulty of classification. We, on the other hand, provide scores given full coverage of the dataset. Reported performance in various approaches is substantially lower than this (accuracies near or below 50%) for dictionary methods, less efficient SMLs, corpora with less training data, or in specific hard-to code categories, which upper limit our average accuracy exceeds. Again, the highest performing outcomes are achieved by setting rejection thresholds (for ambiguous texts or cases where humans or models disagree) and either sacrificing coverage or targeting human coders to uncertain cases <cit.>. We achieve our without dropping cases, using multiple models, human disambiguation of difficult cases, and extensive labeled training data. To account for class imbalances and differences in baseline probabilities of different tokens, we normalize the probability distributions in a manner similar to <cit.>. We estimate GPT-3's bias towards a category as the total weight given to each category over a balanced validation set, divide each category probability by GPT-3's bias towards it, and normalize to sum to 1. We found that this produced modest accuracy boosts of 4-5%. If a small validation set is available, we recommend this calibration technique; however, results were qualitatively the same without this calibration. We consider two data sources that have previously been coded using the CAP framework - coding of U.S. Congressional hearing summaries and the New York Times front page. We conducted our coding with GPT-3 separately for each of these applications. §.§.§ CAP: Congressional Hearing Summaries (Congress) The Congressional Hearing corpus contains the Congressional Information Service summary of each U.S. Congressional hearing from 1946 to 2010. These summaries were read by human coders and assigned to CAP classifications. 
We hired and trained three human coders for this application, providing them with the same instructions outlined in the CAP codebook. This allows us to compare how different human coders and GPT-3 compare to one another (which is not possible with the original data, given that it lacks scores from multiple coders). We gave GPT-3 the full summary text, making the coding task is highly comparable between the humans and GPT-3. All results are reported for n=326 texts, which constitutes 16 texts for each category minus 10 for incompleteness in the human codes. We used a random subset of the dataset of over 10,000 texts for this application. Figure <ref> presents our comparison of GPT-3's and the humans' codes. Both our intercoder agreement metrics tell the same story, and imply a finding that holds across metrics: GPT-3 correlates with each human just as well as or better than the humans correlate with each other. Note that the highest joint agreement (.63) and highest Fleiss' kappa (.61) both occur between GPT-3 and Human 2. Despite there being no real ground truth for this task, we visualize “accuracy” statistics based on the original dataset's single coder as provided by CAP (Figure <ref>). The lack of ground truth is validated by a great deal of human disagreement, as the figure makes clear. We see the accuracy for each coder, with categories sorted in order of GPT-3's accuracy. Interestingly enough, GPT-3 seems to do better at categories that humans do better at, and worse at the categories that humans fail at. Overall, the accuracies were 60% for GPT-3, compared to 63%, 66%, and 55% for the three human coders respectively. The high joint agreement and Fleiss' kappa between GPT-3 and the human coders, as well as the similar accuracies across categories, demonstrate GPT-3 performance on-par with humans on this dataset. Given the efficiency gains from using GPT-3, such as lower costs in training coders and scalability to a large number of texts, we suggest that this gives additional evidence in favor of the usefulness of LMs as coders. §.§.§ CAP: New York Times Front Page Dataset (NYT) The second CAP dataset we use is the New York Times Front Page Dataset, generated and contributed by Amber Boydstun <cit.>. The dataset includes 31034 front page New York Times headlines from 1996 - 2006, along with the policy category label assigned by trained human coders. The categories are adapted for media use, and so include 28 primary classification categories. For this application, we randomly sampled 20 texts from each of the 28 categories to be coded by four human coders and GPT-3. All results are reported for the correspondent set of n=560 texts. The original human coders were instructed to read the headline and the first three paragraphs of the article. In our work, GPT-3 is only provided the headline, because the full article text is not available in the public data. To control for this difference in available information, we also hired four human coders complete an identical classification task to GPT-3, considering only the article headlines. Since the structure of the NYT data is the same as the Congress data, we use the same kind of analyses. For both joint agreement and Fleiss' kappa (Figure <ref>), GPT-3 agrees with the humans about as much as they agree with each other. GPT-3's total accuracy was 55%, compared to 57%, 59%, 51%, and 45% for the four humans respectively. We also notice a strong trend between GPT-3's accuracy and the humans accuracy per category (Figure <ref>). 
Unlike Congress, however, there are 3 categories for which the humans all perform better than GPT-3: “International Affairs and Foreign Aid,” “Government Operations,” and “Death Notices.” On the other hand, GPT-3 performs better than humans at some other categories: “Environment,” “Health,” and “Labor.” Overall, these results again demonstrate that GPT-3 generally achieves on-par performance with humans. §.§ The Guardian Populism (TGP) For our final application, we consider how GPT-3 codes a multifaceted concept - populism. While disagreement exists about the meaning of this term, many scholars have gravitated towards a definition that populism is a discourse that describes politics as a struggle between the virtuous will of the common people and some evil, conspiring elite <cit.>.[This approach is sometimes called the "ideational" approach to populism] Coding for populism requires a process of marking the presence of a reference to the common people and an evil elite. As such, existing studies have primarily relied on extensively trained human coders that are instructed on how to holistically code an entire text, examining it for references of both of these components (for an example of such a coding process, see <cit.>). Here we draw on a large dataset of short statements coded for populism. In the Fall of 2018, The Guardian created a series of articles on populism. At the end of one article, readers were invited to participate in a related survey on populism - over 20,000 individuals from more than 100 countries completed this survey. One question on this study asked respondents to discuss who or what was responsible for a pressing political problem in their country; two intensively trained human coders evaluated 4,000 of these texts and indicated if they did or did not contain populism. The process of training these coders involved initial instruction on a set of unrelated texts, repeated sessions to correct mistakes and clarify the coding process, and a review of the human codes <cit.>. Unlike the preceding studies, then, this application involves comparisons to highly trained human coders. These data also allow for a comparison to SML methods, as about 16,000 texts were not coded by the human coders. As discussed below, we employ a SVC method to code the full set of texts and compare the performance of this technique to coding by GPT-3. We therefore compare the coding produced by GPT-3 on the set of human coded texts and in comparison to the SML approach. In each case, the coders (human or otherwise) generated a code of 1 when the text contained a populist statement and 0 when it did not. To be regarded as populist, the text needed to contain both a reference to the virtuous or good people and some kind of malicious elite group. [For more details on the human coding process, see other work explaining the codebook in more detail, such as <cit.> and <cit.>] We begin by comparing GPT-3's coding to the two human coders. As before, we calculated ICC scores to measure agreement between the coders. In contrast to the Pigeonholing Partisans data, the same two coders and GPT-3 coded all of the texts. We therefore use ICC3k which is designed for these kinds of comparisons <cit.>. For these comparisons, We had GPT-3 code a random sample of 1,300 of the 4,000 texts coded by humans. [ADD FIGURE HERE] Figure [FILL IN] shows the ICC statistics with GPT-3, the human coders, and the same types of simulated coders show in Section <ref>. 
With these calculations, we find that GPT-3 performs well, although not quite as well as a thoroughly trained coder. The ICC statistic for the two human coders was 0.81, indicating high levels of agreement. Adding GPT-3 as a coder reduces this somewhat to 0.77, but this still indicates good agreement between the human coders and GPT-3. In contrast, adding one of the simulated coders dramatically reduces the ICC statistics. We take this as evidence that GPT-3 creates codes that are generally comparable to highly-trained human coders, with far less expense and training. To compare GPT-3's performance to a supervised baseline, we fit a bag-of-words SML model on the populism data, using 3000 instances for training and 1000 instances for validation at a time. With this approach, the SML coding matched the human populism codes with an accuracy of 86 percent. Meanwhile, with only 4 coded examples, GPT-3 matched the human populism codes 79 percent of the time. While the SML baseline outperforms GPT-3 by about 7 percentage points, it does so at the cost of 3000 labeled examples. Given the drastically lower costs of coding with GPT-3 - in the case, the requirement of hiring, training, and supervising coders to classify 4,000 texts - we again see this as evidence of the value of GPT-3 as a coding tool for the social sciences. § ETHICS AND BIAS Our results suggest that GPT-3 can automate specific coding tasks comparably to human coders and SML coding methods. However, much work remains to bring this possibility to full fruition. For example, LMs reflect and even amplify pathological human biases contained in their training data <cit.>, raising concerns about their use for coding. Much work has aimed to quantify and reduce this bias <cit.>. However, while LMs exhibit bias, it is a known, invariant, and quantifiable property, whereas individual humans' biases are typically unknowable and far more difficult to quantify. We submit that the ability to recognize and actively compensate for the coder's probable biases is more important than the magnitude of the biases themselves. Conversely, if a LM can be conditioned or fine-tuned into holding specific biases rather than others, then it could emulate specific heterogeneous groups of coders for a richer, more diverse, and representative coding than what we present in this paper. In that sense, we suggest that bias in coders is an omnipresent problem in coding for the social sciences. Here again, LMs provide a way to evaluate and account for those problems. We encourage other researchers interested in and employing LMs in their coding to use this tool to improve the accuracy and inclusivity of their coding and not simply their efficiency. § CONCLUSION With four dramatically different sources of data, we have demonstrated that LMs can be used to code social science datasets more efficiently and as accurately as existing human or SML techniques. Fine-grained analysis shows that GPT-3 can match the performance of human coders on average across small and large datasets; with both ordinal and categorical codes; and on tasks of varying complexity. In some cases, it even outperforms humans in increasing intercoder agreement scores, often with no more than 3 exemplars. We suggest that these results indicate the promise of LMs (and other tools like them) for research in the social sciences. Our analyses are a first step in this direction, but tools like GPT-3 offer low-cost ways to process and evaluate large text corpora from various sources. 
They also allow researchers to perform these tasks while still using their theoretical and substantive knowledge of the topic at hand to guide the machine learning tools. As such, we view this as a productive synergy of human and computer components to generate an outcome more accurate and efficient than either element on its own. Given the turn of the social sciences towards promising new terrains of text and other complex data, LMs and other related tools offer great promise for nearly every domain of the social sciences. § APPENDIX §.§ Prompt Engineering One important part of this project is the way that we provide information or the context to GPT-3 to help it learn the coding process. As noted in the text, our overarching goal was to make this instruction as minimal and flexible as possible to evaluate GPT-3's potential without dramatic changes to the LM. In doing so, we learning a number of important lessons about giving GPT-3 information about the coding scheme and process. In this section, we seek to explain our prompt engineering protocol so that our results can be replicable and generalizable to other datasets and domains. We include both decisions made without conducting any experiments and those made by conducting experiments. Some elements of prompt engineering seem to matter a great deal, and some seem to matter not much. Of all the sections of this paper, we spent the most time on this one, and ran the most experiments to fill it. Despite this, and the fact that slight changes to individual prompts on formatting cause significant changes to the probability distribution over tokens, we found that in the aggregate, prompt engineering tends to not make much of a difference in GPT-3's performance as a coder. In this process, one has to be mindful of where the prompt ends and what next token is being modeled. Since generative language models sample one token at a time, we needed to be able to sample a unique first token (usually, a unique first word) for each category we attempt to model. For example, “very positive” and “very negative” both start with the token “very,” so it would be impossible for us to compare the two categories with a single token sample. Fortunately, all of our categories started with unique first tokens, but will not be true for all future applications of LMs as coders. Another choice impacting results was the presentation of categories in the question format of the PP data. Specifically, GPT-3 performed significantly worse when asked to respond to a question with the tokens “yes” or “no” than when the choice was between substantive alternatives, such as “extreme” vs “moderate” or “positive” vs. “negative”. For the other three attributes, we found that restating the objective after the “yes” or “no” (e.g., “Yes, mentions personality or character traits”) substantially helped. These were the only prompt variations attempted for the PP dataset. Other elements seemed to have minimal impact, like the number and type of exemplars. While we know that more labeled training data significantly improves SML performance <cit.>, it was unclear ahead of time whether more labeled exemplars to GPT-3 will achieve the same. In theory, more exemplars could more firmly teach the model the format of the task, and every marginal quality exemplar could help the model refine its understanding of the distributions of categories that the examples belong to. 
As shown in Figure A.<ref>, we find that one exemplar performs much better than none, but there is little gain in accuracy achieved by providing more than 2 or 3 exemplars. We also conducted extensive experiments testing different classes of exemplars (more or less difficult to classify, in the spirit of active learning); this also seemed not to matter (See Appendix <ref> for more details). We also tried many variations on the prompt format, including: surrounding categories in quotes; using slashes, pipes, and other delimiters to separate exemplar headlines from their respective categories; providing lists of example headlines for each category in parentheses right next to the category; new lines in specific places making boundaries between exemplars clearer; and other general rephrasing. None of these changes resulted in a marginal accuracy less than 50% or greater than 57%. This demonstrates a relative stability of the information retrieval process, allaying some concerns that minor changes in wording or punctuation will radically alter coding accuracy. For all of our final prompts used, please refer the following section. §.§ Prompts For Each Task §.§.§ Pigeonholing Partisans * Positivity: Are the following descriptions of PARTY positive or negative? -agreeable, reasonable, understanding, cooperative: Positive -angry, bigoted, racist, homophobic: Negative * Groups: Do the following descriptions of PARTY mention social groups? -Christian, privileged, young, white: Yes, mentions social groups. -apathetic, agreeable, pro-environment, political: No, doesn't mention social groups. * Traits: Do the following descriptions of PARTY mention personality or character traits? -accepting, tolerant, intellectual, charitable: Yes, mentions personality or character traits. -black, young, female, poor: No, doesn't mention personality or character traits. * Extremity: Are the following descriptions of PARTY extreme or moderate? -angry, racist, close-minded, homophobic: Extreme -people, hopeful, educated, agreeable: Moderate * Issues: Do the following descriptions of PARTY include government or policy issues? -aging, religious, accepting, patriotic: No, doesn't include government or policy issues. -abortion, medical marijuana, gun control, anti-sexism: Yes, includes government or policy issues. §.§.§ CAP * Congressional Hearings: Using only the following categories """ Macroeconomics Civil Rights Health Agriculture Labor Education Environment Energy Immigration Transportation Law and Crime Social Welfare Housing Domestic Commerce Defense Technology Foreign Trade International Affairs Government Operations Public Lands Culture """ Assign the following congressional hearing summaries to one of the categories: Extend defense production act provisions through1970. -> Defense FY90-91 authorization of rural housing programs. -> Housing Railroad deregulation. -> Transportation To consider Federal Reserve Board regulations and monetary policies after February 2016 report on monetary policy. 
->' * New York Times Headlines Using only the following categories """ Macroeconomics Civil Rights, Minority Issues, and Civil Liberties Health Agriculture Labor Education Environment Energy Immigration Transportation Law, Crime, and Family Issues Social Welfare Community Development and Housing Issues Banking, Finance, and Domestic Commerce Defense Space, Science, Technology and Communications Foreign Trade International Affairs and Foreign Aid Government Operations Public Lands and Water Management State and Local Government Administration Weather and Natural Disasters Fires Arts and Entertainment Sports and Recreation Death Notices Churches and Religion Other, Miscellaneous, and Human Interest """ Assign the following headlines to one of the categories: IRAN TURNS DOWN AMERICAN OFFER OF RELIEF MISSION -> International Affairs and Foreign Aid In Final Twist, Ill Pavarotti Falls Silent for Met Finale -> Arts and Entertainment In Times Sq., a Dry Run for New Yearś 2000 -> Arts and Entertainment House Panel Votes Tax Cuts, But Fight Has Barely Begun ->' §.§ Exemplar Types Experiments We also explored whether some exemplars were better or worse at “teaching” the categories to the model. We considered that for a given category, an instance could be a better or worse exemplar. We might define this by a quantity we'll call its margin: the difference between (1) the probability the model assigns to the correct category and (2) the highest probability of the probabilities for all the wrong categories. Thus, “prototypical" exemplars would have high positive margin (model guesses right), “ambiguous" exemplars would have margins with very low absolute values (model torn between multiple categories), and “tricky" exemplars would have margins with very high negative values (model guesses wrong). In theory, prototypical exemplars could teach the model about the proper distribution of texts belonging to a category, ambiguous exemplars could teach the model about the boundaries between the distributions of each category, and tricky exemplars could correct the model's prior on categories by flagging common mistakes made in coding texts from that category's distribution. To answer this question empirically, we first randomly sample 90 candidate exemplars from each category. We then code each with the model given a set of 4 exemplars sampled randomly once and then held constant specifically for this task. Then we sort them by their margin and construct one set of each: prototypical, ambiguous, and tricky exemplars. Finally, we perform 5 trials where we classify 4 instances from each category using an increasing number of these sets of exemplars and measure performance. The results, in Figure A.<ref>, demonstrate no discernible signal as to which kind of exemplar is best to present to the model in the context window. This is one bit of evidence that this dimension, of the prototypicality vs. ambiguity vs. trickiness of exemplars, is not at all determinative of a model's performance on a coding task, a dimension which is very important for active learning.
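The margin described above can be computed directly from the per-candidate category probabilities. The short sketch below (names are ours) also shows one way, consistent with the definitions above, of splitting candidates into the prototypical, ambiguous, and tricky sets.

```python
import numpy as np

def exemplar_margins(category_probs, true_labels):
    """category_probs: (n_candidates, n_categories) model probabilities,
    true_labels: (n_candidates,) integer index of the correct category.
    Margin = p(correct category) - max p(any wrong category)."""
    idx = np.arange(len(true_labels))
    p_correct = category_probs[idx, true_labels]
    wrong = category_probs.copy()
    wrong[idx, true_labels] = -np.inf
    return p_correct - wrong.max(axis=1)

def split_by_margin(margins, n_per_set):
    """Prototypical: largest margins; tricky: most negative margins;
    ambiguous: margins closest to zero. Sets may overlap if n_per_set is large."""
    order = np.argsort(margins)
    tricky = order[:n_per_set]
    prototypical = order[-n_per_set:]
    ambiguous = np.argsort(np.abs(margins))[:n_per_set]
    return prototypical, ambiguous, tricky
```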
http://arxiv.org/abs/2306.03125v1
20230605180001
Symmetries and Selection Rules: Optimising Axion Haloscopes for Gravitational Wave Searches
[ "Valerie Domcke", "Camilo Garcia-Cely", "Sung Mook Lee", "Nicholas L. Rodd" ]
hep-ph
[ "hep-ph", "astro-ph.CO", "astro-ph.IM", "gr-qc", "hep-ex" ]
§ INTRODUCTION Gravitational wave (GW) experiments have begun to probe the GW spectrum over a vast range, from the Gigaparsec wavelengths probed by the CMB <cit.> to thousands of kilometers, covered by current ground-based interferometers which operate in the 100 Hz range <cit.>, yielding fundamental insights into cosmology, astrophysics, and particle physics. Reaching even higher frequencies poses a significant experimental challenge, but would represent a unique opportunity to probe possible extensions of the Standard Models of particle physics and cosmology <cit.>. A cosmological source of GWs produced at a temperature T_* could generate a stochastic GW background at frequencies of f ≳ 1 kHz (T_*/10^10 GeV), and thereby leave a signature of modifications to the Standard Model at the highest temperatures. Unfortunately, probing relics of a possible high-temperature phase of the early Universe is fundamentally challenging. Experimental sensitivity to GWs can be expressed in terms of the strain h. As the energy density in GWs scales as ρ∼ h^2 f^2 M_Pl^2, at higher frequencies an even greater reach in terms of h is required to reach energy densities below the current bounds on the total energy in radiation in the early Universe derived from BBN and CMB observations. Instead, exotic astrophysical events sourcing transient signals appear to be a more promising medium-term target <cit.>. For example, the merger of two equal-mass objects yields GWs at f ∼ 1 kHz (M_⊙/m), so that sources such as primordial black holes with m ≪ M_⊙ could populate the high-frequency landscape. In recent years, significant progress has been made in understanding the sensitivity of electromagnetic GW detectors in this frequency regime. In a flat spacetime perturbed by a gravitational wave, g_μν = η_μν + h_μν, the usual expressions for electrodynamics receive corrections of the schematic form ∼ h F^2, yielding a graviton-two-photon vertex. As long appreciated, this interaction can lead to photon-GW mixing <cit.>. More generally, however, a GW in the presence of an electromagnetic background will induce an electromagnetic response, in close analogy to the signal from axion (scalar) dark matter arising from the coupling a F F̃ (φ F^2). Exploiting the considerable experimental efforts to search for an electromagnetic response from wave-like dark matter, it has been shown that these same instruments can be used as GW telescopes <cit.>. Largely motivated by the QCD axion, dark matter searches focus on signals at MHz frequencies or above, and are therefore naturally suited to look for high-frequency GWs. In this paper, we will continue the study of the sensitivity of axion haloscopes to GWs, with a particular focus on instruments operating in the “low-mass” magnetoquasistatic regime, and sensitivity in the MHz-GHz window; experiments already operating in this range include ABRACADABRA <cit.>, ADMX SLIC <cit.>, BASE <cit.>, SHAFT <cit.>, and WISPLC <cit.>.
These devices feature a strong static magnetic field which in the presence of an axion – or a GW – sources a small, oscillating induced magnetic field which is captured by a suitably placed pickup loop. Many of the existing instruments are effectively prototypes, with a sensitivity that can be improved significantly by increasing the volume of the magnetic field, and by reading out the magnetic flux induced in the pickup loop resonantly. By combining both of these improvements, the goal of the DMRadio collaboration is to reach the QCD axion prediction for neV≲ m_a ≲μ eV <cit.>. In view of the expected progress, it is timely to consider how synergies in axion and GW searches can be optimally exploited, in particular in view of different detector geometries currently proposed for axion searches. Reference <cit.> first proposed the use of low-mass axion haloscopes as GW detectors, and demonstrated that a toroidal magnetic field – as employed by ABRACADABRA, SHAFT, and the upcoming DMRadio-50L – could detect a passing GW. Here, we generalise that analysis to additional detector geometries, with a particular focus on the solenoidal magnetic field used by ADMX SLIC, BASE, WISPLC and which as been moreover proposed for DMRadio-m^3. We provide analytical expressions for the effective current which the GW sources, the resulting induced magnetic field, as well as for resulting magnetic flux for various pickup loop geometries. Armed with these results, we will bootstrap the expected sensitivities to GW signals from axion searches. A further improvement over Ref. <cit.> is a careful treatment of the different timescales involved, in particular the potentially short duration of the GW signal. Whilst the GW sensitivity for a solenoidal magnetic field is a practical result, as for the toroidal magnetic field, the calculation remains involved, and ultimately it becomes inefficient to compute the GW interaction with all possible magnetic field geometries. Motivated by this, we derive a series of symmetry based selection rules, which determine the parametric sensitivity to a GW signal depending upon the symmetries of the experimental magnetic field and the pickup loop used to read out the signal. From these results, we will demonstrate that configurations with a high degree of symmetry can kill the leading order sensitivity to a GW, even though they may be desirable to maximise the axion sensitivity. An analogue of this was already observed in Ref. <cit.>, where it was shown that if the flux from a toroidal magnetic field is read out through a circular pickup loop, then the leading order GW sensitivity, expected at O[(ω L)^2], vanishes, while sensitivity at O[(ω L)^3] remains. Here ω is the angular frequency of the GW, L is a characteristic length scale for the experiment, and in the magnetoquasistatic regime of interest for low-mass axion haloscopes, ω L ≪ 1. We show that if both the external magnetic field and the pickup loop have cylindrical symmetry, i.e. if they are invariant under azimuthal rotations and reflections in the z coordinate, any orientation of the pickup loop which is sensitive to the axion suffers from a cancellation of the leading order term (∝ (ω L)^2) for the GW signal. This symmetry is commonly exhibited by axion haloscopes, which make use of solenoidal or toroidal magnetic fields. To recover the dominant scaling, the cylindrical symmetry must be broken, for instance through the placement or geometry of the pickup loop. 
The latter can be most easily achieved by modifying the pickup loop to span only a fraction of the azimuthal angle, with the optimal GW sensitivity obtained when the cylindrical symmetry for the pickup loop is maximally broken (for instance, with a figure-8 configuration as in Ref. <cit.>). For existing experiments, as the largest axion signal is obtained for detectors with full cylindrical symmetry, this explains the observation in Ref. <cit.> that the optimal axion and GW sensitivities cannot be simultaneously obtained for a haloscope based on a toroidal magnetic field, and furthermore demonstrates that this conclusion is generic. Nevertheless, we find that modifying the pickup loop geometry (or including several different pickup loops) allows one to obtain sensitivity to both the axion and GW signal, in a manner that at worst reduces the axion sensitivity by an O(1) amount.[This same approach would also allow for discrimination between a GW and axion signal. Of course, we note that there are many ways to distinguish these signals, the most important being that in the accessible parameter space the GW signal will be transient, whereas that from dark matter is persistent.] We illustrate the power of symmetry arguments by determining the leading power in (ω L) sensitivity for a range of different detector geometries without explicit computation, in view of determining the optimal geometries for GW searches. For the most relevant cases, we provide the computation to confirm our results. At the outset, we can already provide an intuitive argument as to why cancellations in highly symmetric detectors might be expected. To do so, rather than contrasting GW and axion electrodynamics, as we will in the remainder of the paper, let us consider a simpler comparison: a scalar versus a pseudoscalar. In particular, consider first the induced magnetic field arising from the interaction of a toroidal magnet, B = B_0 ê_ϕ, with a pseudoscalar via the interaction, g a F F̃. If we consider the induced magnetic field in the z direction at the center of the toroid, as measured by the ABRACADABRA collaboration, we find B^a_z ∼ g (∂ a) B_0 L. The consistent transformation of this result under parity, which can be confirmed directly, is critically reliant on the pseudoscalar nature of the axion. Indeed, if we ask what the induced field would be for a scalar interaction, g φ F^2, there is no expression we can write consistent with parity and the cylindrical symmetry of the instrument. An explicit computation confirms that B^φ_z = 0. This argument can be formalised into symmetry based selection rules which determine the geometries that are sensitive to scalar versus pseudoscalar coupling – indeed, there are configurations where B_a=0 whilst B_φ≠ 0 – and we undertake that exercise in App. <ref>. The general lesson, however, is that highly symmetric detectors impose symmetry constraints on the induced fields that can be so restrictive that the measurable signal vanishes. This is true also for GWs, and we will determine an appropriate set of selection rules to determine the interplay between signals and geometry. We can actually determine an additional general lesson by comparing the scalar and pseudoscalar interaction. As is well known, the axion interaction generates an effective current proportional to ∂_ν (a F̃^νμ) = (∂_νa) F̃^νμ, so that the interactions depends only on a derivative of the axion, as expected for a pseudo-goldstone boson. 
The equivalent expression for a scalar is ∂_ν (φ F^νμ) = (∂_νφ) F^νμ - φ j^μ, where j^μ = ∂_ν F^μν is the current that generates the leading order fields in the laboratory. Accordingly, for the scalar there is an additional contribution to the effective current localised at the boundary of the magnetic volume, which turns out to be generic: it will be present also for the GW, although it has so far been overlooked in the literature. This contribution can be interpreted as an effective current at the boundary of the magnetic volume, determined by the component of the effective magnetisation vector (introduced in Ref. <cit.> for GWs) parallel to the boundary surface. In the remainder of this paper we will flesh out these ideas for the GW signal, and we organise our discussion as follows. Section <ref> lays out the theoretical framework for our work, reviewing the relevant aspects of electrodynamics in a spacetime perturbed by a GW. Several points, such as a discussion of the symmetry properties of the induced magnetic field and response matrix formalism are presented here for the first time in this context. This sets the stage for deriving the GW sensitivity of axion haloscopes with solenoidal magnetic field configurations in Sec. <ref>. The results are then generalised in Sec. <ref> where we derive symmetry principles which allow us to determine the parametric scaling of the GW sensitivity for various detector geometries without explicit computation. The symmetry arguments will then enable us to draw general conclusions about the optimal strategy for axion and GW searches in axion haloscopes. Many details of our analyses are deferred to appendices. Appendix <ref> reviews Maxwell's equation in curved space time. Within it, we provide a careful derivation of the main equations governing the interaction of a GW with a background electromagnetism (EM) field, the derivation of the effective surface current, and an explanation of why the GW effects we consider scale at lowest order as (ω L)^2. In App. <ref> we study scalar and axion electrodynamics, with a focus on sharpening an analogy to the GW case. We will explain how our GW selection rules extend to these spin-0 waves, and the consequences for various detector geometries. In App. <ref> we summarise the symmetry properties of the cylindrical magnetic field configurations employed by axion haloscopes, and demonstrate that they can be decomposed into a solenoidal and toroidal component. Appendix <ref> expands our discussion of the response matrix formalism used to describe the detector response to a passing GW. In App. <ref> we summarise the explicit analytical expressions for all components of the effective current induced by a GW, up to order (ω L)^3 and for both toroidal and solenoidal external magnetic field configurations. These expressions may be used as input for full detector simulations, or for detailed numerical calculations of the relevant GW effects. In App. <ref> we discuss in detail the bootstrapping of axion search results to establish GW sensitivity, carefully taking into account the different time scales involved in the possible signals and detectors. The appendix further discusses details of several possible sources for high-frequency GWs. Finally, App. <ref> is dedicated to the possibility of using an external electric instead of magnetic field for GW detection, and demonstrates how our symmetry arguments extend to this case. 
§ GRAVITATIONAL WAVE ELECTRODYNAMICS To begin with, we review the general formalism used to compute the magnetic flux induced by a GW passing through a lumped-element circuit axion haloscope. We will review the discussion of Ref. <cit.> (see also Ref. <cit.>), pointing out an additional contribution to the induced magnetic flux due to effective surface currents at the boundary of the magnetic volume, which was previously overlooked. We then extend this approach to account for the transformation properties and symmetries of the various quantities, in particular the induced magnetic field, under rotations and reflections. The axion haloscopes targeting the magneto-quasistatic regime (m_a ≲μ eV) generally have a high degree of cylindrical symmetry, and we will study the impact of this on the GW signal systematically. Doing so will allow us to develop a systematic approach to the geometries of the external background magnetic field and pickup loop, and resolve fundamental questions such as determining the optimal geometry for GW and axion searches. §.§ Proper detector frame Throughout this paper we will work in the proper detector frame,[This is in contrast to the transverse traceless (TT) frame, in which coordinate distances are set by the geodesics of free-falling test masses, and a rigid instrument and experimental magnetic field no longer have a simple description.] in which coordinate distances to the origin match the proper distance, and thus coincide with those measured by ideal rigid rulers. As a consequence of this, up to non-inertial forces such as those associated with the rotation of the Earth (which can be neglected at high frequencies <cit.>), the effect of GWs is simply given by a small Newtonian force proportional to their amplitude. Throughout this paper, we will assume that these GW forces do not mechanically deform the experimental setup, in particular the static electromagnetic fields applied in the experiment remain static in the presence of a GW. Critically, this implies that in the proper detector frame the experimentally generated magnetic field coincides with that of flat spacetime, see App. <ref> for details. This assumption is in particular valid for GW frequencies below the mechanical resonance frequencies of the setup. At frequencies around and above the lowest mechanical resonance ω_0 ∼ v_s/L, with v_s denoting the speed of sound in the material, the Newtonian GW force is no longer negligible <cit.>. We expect this to impact part of the parameter space relevant for the experimental setups discussed here, and we leave a quantitative analysis to future work. Interestingly, in the case of microwave cavities, it was demonstrated that this effect can enhance the GW sensitivity <cit.>. Expanding the metric as g_μν = η_μν + h_μν with η_μν denoting the flat metric with sign convention (- + + +), in the proper detector frame the GW at the position r can be expressed as <cit.>,[ This expression is equivalent to Eq. (S5) in Ref. <cit.>, as can be shown using the completeness relation k̂_ik̂_j + Û_iÛ_j +V̂_iV̂_j = δ_ij in the third line of Eq. (<ref>).] 
h_00 = ω^2 e^-ıω t F( k· r) r_m r_n ∑_A = +, × h^A e^A_mn (k̂), h_0i = 1/2ω^2 e^-ıω t [ F( k· r) - ı F^'( k· r) ] [k̂· r  r_m δ_ni - r_m r_n k̂_i] ∑_A = +, × h^A e^A_mn (k̂), h_ij = - ıω^2 e^-ıω t F^'( k· r) [ | r|^2 δ_imδ_jn + r_m r_n δ_ij - r_n r_j δ_im - r_m r_i δ_jn ] ∑_A = +, × h^A e^A_mn (k̂), where F(ξ) = [e^ıξ - 1 - ıξ]/ξ^2, h^+, × denotes the amplitude of the two GW polarisations, and the polarisation tensor e_ij^+, × (k̂) for a given direction of GW propagation k̂ = sinθ_h ê^ϕ_h_ρ + cosθ_h ê_z can be defined as e_ij^+ = 1/√(2)[ Û_i Û_j - V̂_i V̂_j ], e_ij^× = 1/√(2)[ Û_i V̂_j + V̂_i Û_j ], V̂ = ê_ϕ^ϕ_h, Û = V̂×k̂. Here θ_h and ϕ_h denote the azimuthal and polar angle of the GW, and ê^ϕ_h_ρ, ê_ϕ^ϕ_h and ê_z denote unit vectors in the radial, angular and vertical direction for a polar angle ϕ_h, with both coordinate systems defined with origin at the center of the experiment. In particular, note that h_μ ir_i =0, and consequently ds^2=g_μν dx^μ dx^ν=η_μν dx^μ dx^ν for dx^μ = (0, dr r̂ ). From this we see that coordinate distances to the origin coincide with the corresponding proper distance, a defining characteristic of the proper detector frame <cit.>. In this work, we will limit ourselves to the regime of ω L ≪ 1, as appropriate over most of the range covered by lumped-element circuit instruments.[For the result when taking ω L ∼ 1 for the axion induced signal in these instruments, see Ref. <cit.>.] We can therefore treat ω L as a perturbative parameter, and will do so often; for instance, it will be implicit in our use of the Biot-Savart law and used throughout our discussion of the implications of the symmetry transformations. Further, as F(ξ) = - 1/2 + O(ξ), it follows from Eq. (<ref>) that h_μν in the proper detector frame has a leading order contribution at O[(ω L)^2]. The absence of any contribution at O[ω L] is a consequence of working in a freely falling reference frame assumed to be rigid, as we demonstrate in App. <ref>. An immediate implication of this scaling is that for a GW incident on an electromagnetic field that is static in the proper detector frame, the leading order electromagnetic response induced will scale as O[(ω L)^2].[By gauge invariance, the same must also be true in the TT frame, where h ∝ e^-ıω t and therefore has contributions at all orders in ω L. This implies there must be a detailed cancellation of the linear frequency contribution, and the need to keep track of this highlights the advantage of working in the proper detector frame. For additional discussion, see Ref. <cit.>.] This demonstrates that the optimal observables for the GW one can construct will also be at O[(ω L)^2]. As outlined in the introduction, one of the primary goals of the present work is to understand the role symmetry plays in the GW interactions. In particular, we will be studying the interaction between a GW and detectors with a high degree of symmetry. Existing axion instruments tend to have full cylindrical symmetry, that is, invariance under rotations about ê_z with an angle φ, R_z(φ), and arbitrary reflections, P_α, with α = x,y,z. Therefore, it is worthwhile already to characterise the transformation of the GW polarisations and proper detector frame components when these transformations are applied to the position and incident direction at which we evaluate these quantities. To begin with, e_ij^A (P_αk̂) = σ [P_α]_ik e_kl^A (k̂) [P_α]_lj, e_ij^A (R_z k̂) = [R_z]_ik e_kl^A (k̂) [R_z]_lj.
Here, [P_α]_ij is a 3 × 3 matrix corresponding to the reflection of the α-component, and [R_z]_ij is similarly the matrix describing the rotation about ê_z.[We emphasise that e_ij^A transforms under general rotations as a tensor only up to gauge transformations <cit.>. From the definitions in Eq. (<ref>), however, the polarisation tensors are true tensors under rotations about ê_z.] We have further introduced σ= +1 (σ = -1) for the A=+ (A = ×) polarisation, which keeps track of their different transformations under the reflections. From these results, we conclude h_00( P_α r, P_α k) = σ h_00( r,k), h_0i(P_α r, P_α k ) = σ [P_α]_ij h_0j ( r,k), h_ij(P_α r, P_α k) = σ [P_α]_ik h_kl( r,k) [P_α^T]_lj, and h_μν transforms as a regular tensor under rotations about the z-axis. §.§ Effective current induced by GWs The interaction of GWs with electromagnetic fields can be effectively described as an additional current augmenting Maxwell's equations in a flat spacetime. Specifically, ∂_ν F^μν = j^μ +j_ eff^μ, ∂_ν F_αβ+∂_α F_βν+∂_β F_να =0, where j^μ is the electromagnetic current in the absence of the GW (i.e. the ordinary currents in flat spacetime) whereas the effective current can be written as j_ eff^μ≡∂_ν( - 1/2 h F^μν + F^μαh^ν_α - F^ναh^μ_α), with h ≡h^μ_μ. This is derived in the App. <ref> (see also Refs. <cit.>), where we also discuss why the second equation in Eq. (<ref>) – the homogeneous Maxwell's equations – are unaffected by the presence of the GW. Throughout this paper we will be working to linear order in h, so that F^μν as it appears on the right-hand side of this equation contains only the background fields. In further analogy to EM, one can define <cit.> P_i ≡ - h_ij E_j + 1/2 h E_i + h_00 E_i - ϵ_ijk h_0 j B_k, M_i ≡ - h_ij B_j - 1/2 h B_i + h_jj B_i + ϵ_ijk h_0j E_k, so that j_ eff^μ = ( -∇· P, ∇× M + ∂_t P). This final formulation is reminiscent of polarisation and magnetisation vectors for EM in a medium. Hence, the task of calculating the electromagnetic fields induced by a GW is equivalent to performing standard EM calculations in such media. If we consider the leading order effect in O[(ω L)^2], we can already see a simplification when the external field is purely magnetic: the spatial part of the current will be generated only by M, and be sourced by h_00 and h_ij but not h_0i, as the time derivative acting on P ensures it will be higher order. Up to this point, we have not specified the geometry of the background fields or the pickup loop that will be used to measure the induced fields. The majority of the axion haloscopes in consideration exploit solely an external magnetic field with full cylindrical symmetry. For such magnetic field configurations, as shown in App. <ref>, it is possible to decompose the background magnetic field into a solenoidal and toroidal piece, B( r) = B^( r) + B^( r). We will further assume the field depends only on ρ, namely | B( r)| = B(ρ) for an unspecified function B. The benefit of this decomposition is that B^( r) ∝ê_z and B^( r) ∝ê_ϕ, and each component has a well-defined set of transformations under reflections, which we keep track of through a parameter η_α, defined through B (P_α r) = η_α P_α B ( r). In addition to the partial reflections, we will track the transformation under the full parity transformation P = P_x P_y P_z, B (P r) = η P B ( r), where consistency requires η = η_x η_y η_z. For each pair of magnetic field configuration and spatial reflection, explicit values of η_α are summarised in Tab. <ref>. 
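To make the preceding definitions concrete, the polarisation tensors and the effective sources P and M can be assembled numerically. The following minimal Python sketch (the function names are ours and purely illustrative, assuming only NumPy) constructs e^+_ij and e^×_ij for a given propagation direction, checks that they are transverse and traceless, and evaluates P and M from given values of the metric perturbation and the static laboratory fields:

import numpy as np

def polarisation_tensors(theta_h, phi_h):
    # khat = sin(theta_h) e_rho(phi_h) + cos(theta_h) e_z, with V = e_phi(phi_h) and U = V x khat
    khat = np.array([np.sin(theta_h) * np.cos(phi_h),
                     np.sin(theta_h) * np.sin(phi_h),
                     np.cos(theta_h)])
    V = np.array([-np.sin(phi_h), np.cos(phi_h), 0.0])
    U = np.cross(V, khat)
    e_plus = (np.outer(U, U) - np.outer(V, V)) / np.sqrt(2)
    e_cross = (np.outer(U, V) + np.outer(V, U)) / np.sqrt(2)
    return khat, e_plus, e_cross

def effective_sources(h00, h0i, hij, E, B):
    # P_i = -h_ij E_j + (1/2) h E_i + h_00 E_i - (h_0 x B)_i
    # M_i = -h_ij B_j - (1/2) h B_i + h_jj B_i + (h_0 x E)_i,  with h = h^mu_mu = -h_00 + h_jj
    h = -h00 + np.trace(hij)
    P = -hij @ E + 0.5 * h * E + h00 * E - np.cross(h0i, B)
    M = -hij @ B - 0.5 * h * B + np.trace(hij) * B + np.cross(h0i, E)
    return P, M

khat, ep, ec = polarisation_tensors(0.7, 1.2)
assert np.allclose(ep @ khat, 0) and np.allclose(ec @ khat, 0)  # transverse
assert abs(np.trace(ep)) < 1e-12 and abs(np.trace(ec)) < 1e-12  # traceless

From P and M the effective current of Eq. (<ref>) follows as j_eff = ( -∇· P, ∇× M + ∂_t P), and, as discussed below, an additional effective surface contribution must be included wherever the external field drops discontinuously to zero.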
Each detector configuration is usually associated uniquely with either B^ or B^. This is certainly true for the existing and planned axion haloscopes we consider. Accordingly, we will suppress the superscripts (s) or (t) moving forward. Combining the transformation properties of the magnetic field with Eq. (<ref>), the spatial part of the induced current – which fully determines the induced magnetic field – then obeys j_ eff( P_α r, P_α k) = - ση_α P_α j_ eff ( r, k), and j_ eff( R_z r, R_z k) = R_z j_ eff. We emphasise this transformation holds for both the M and P contributions to j_ eff separately. Before we move on to consider the fields generated by j_ eff, we note that from the above discussion we can see the presence of a boundary contribution that has previously been overlooked (for instance, in Ref. <cit.>). Recall that at the interface of two bodies with different values of the magnetisation vector M, Maxwell's equations predict a surface current proportional to n̂×Δ M, where n̂ is the unit vector normal to the surface. For the external magnetic fields considered in this work, the GW effective magnetisation M in Eq. (<ref>) sharply drops to zero at cylindrical surfaces, where the external magnetic fields vanish. If this occurs at a radius ρ=R, the GW generates an effective surface current given by j_ eff^(t,s) = ±δ(ρ -R) ê_ρ× M, which must be accounted for. As already noted, such a contribution does not occur for an axion, but does for a scalar coupled to EM. (For the axion, such a contribution will not occur as Δ M∝Δ E∝n̂ at any surface. Explicit calculations are provided in App. <ref>.) §.§ The induced magnetic field The effective current induced by the GW will source an induced magnetic field, B_h, which is determined by the Biot-Savart law,[This approach is valid only to O[(ω L)^3], see Ref. <cit.> for a discussion. ] B_h( r^', k) = 1/4π∫_V_Bd^3 r  j_ eff( r, k) × ( r^' - r )/| r^' - r|^3, where V_B denotes the detector volume filled by the external magnetic field. Here and throughout, we use r^' to indicate the position where we evaluate the induced field, and the pickup loop inserted to measure that field will integrate over this variable, whereas r is where we evaluate the effective current. Under the assumption that the integration region V_B is invariant under P_α – which it is for the cylindrically symmetric detectors we consider – the Biot-Savart law Eq. (<ref>) together with Eq. (<ref>) implies the transformation of the induced magnetic field as, B_h (P_α r^',P_α k) = ση_α P_α B_h( r^', k), and correspondingly B_h (R_z r^', R_z k) = R_z B_h( r^', k). The result in Eq. (<ref>) will be a key tool in studying the implications of detector symmetry for the associated GW signal, which we consider in detail in Sec. <ref>. We now have all the ingredients to compute the effect of GWs on the observable used in low-mass axion haloscopes, namely the induced magnetic flux through a suitably placed pickup loop, Φ_h = ∫_A_ℓ d^2 r^' B_h( r^') ·n̂^'( r'), with A_ℓ the surface enclosed by the pickup loop, and n̂^'( r') the unit normal vector to that surface. Our symmetry arguments will also depend on the transformation of the pickup loop under reflections, which in direct analogy to the transformation of the magnetic field we will trace using n̂'(P_α r') = κ_α P_αn̂'( r'), with the different possible values collected in Tab. <ref>. In Sec. 
<ref>, we will provide explicit expressions for Φ_h for solenoidal geometries and review how to set constraints on GW signals by recasting the results of axion experiments. Rather than employing Eqs. (<ref>) and (<ref>), one can also work directly with the vector potential, A_h( r^' ) = 1/4π∫ d^3 r j_ eff( r)/| r^' - r|, Φ_h = ∫_ℓ d r^'· A_h ( r^'), where ℓ is the closed curve describing the pickup loop. This approach reduces the dimension of the flux integration by one, thereby simplifying several analytical calculations. Nonetheless, the symmetry based arguments are often more intuitive when expressed in terms of the magnetic field. We will use both formalisms as needed. Moreover, we introduce the response matrix, D^mn( k), to study the dependence of the flux on the polarisation. The response matrix exploits the observation that the flux in Eq. (<ref>) is a linear functional of the GW, which in turn is linear in the polarisation tensors, e^A_mn. Hence, there must exist a matrix D^mn( k) such that Φ_h =e^- ıω t D^m n( k) ∑_A h^A e^A_m n(k̂). A more detailed discussion of the response matrix and explicit expressions for D^mn( k) are provided in App. <ref>. Using the response matrix, for a given GW wave vector k and polarisation A, we can construct pattern functions D^mn( k) e^A_mn which encode the angular response of a detector, i.e. describe the antenna pattern relevant to determine the magnitude of the induced magnetic flux. This is analogous to the formalism introduced for the response of interferometers to GWs, see Ref. <cit.>. We emphasise that Eq. (<ref>) results from the fact that the magnetic flux is linear in the metric perturbation associated with GWs. Although more details can be found in App. <ref>, let us here provide two remarks on the general properties of D^ij. First, as we see from the form of Eq. (<ref>), the lowest order frequency contribution to D^ij( k) occurs at O[(ω L)^2].[For interferometers, the observable is the GW strain and the calculation can be performed in the TT frame, where the antenna pattern function at leading order in frequency depends only on the direction of the GW, k̂.] This is a generic consequence of the use of the proper detector frame. Secondly, the matrices D^ij are not unique, but D^ij( k) → D^ij( k) + c^i k^j + c^j k^i with constants c^i,j gives rise to the same magnetic flux since the polarisation tensors are transverse with respect to k. § THE GW SENSITIVITY OF SOLENOIDAL DETECTOR GEOMETRIES Having established the general framework of how a GW interacts with an experimental magnetic field, we now put it to use for the explicit case of a solenoidal instrument. Previous work, see Ref. <cit.>, focused on a toroidal geometry for the external magnetic field, which is used for the axion searches performed by ABRACADABRA <cit.> and SHAFT <cit.>. This is modelled as B = B_maxR/ρ[ Θ(R+a - ρ) - Θ(R - ρ) ] ê_ϕ, where Θ(x) is the Heaviside step function, which ensures the external magnetic field only exists on R < ρ < R+a. However, another magnetic field geometry that is being pursued by axion haloscopes is a solenoidal field, B = B_0 Θ(R - ρ) ê_z. Among the detectors making use of a solenoidal field, we will first focus on instruments where the induced flux is read out through a vertical pickup loop as depicted in Fig. <ref> and implemented in the ADMX SLIC <cit.> and BASE <cit.> experiments.
Moreover, the planned WISPLC <cit.> and DMRadio-m^3 <cit.> experiments are also planning to implement a related configuration.[Let us briefly comment on several of the differences between these experiments. ADMX SLIC experiment has a single rectangular pickup loop at a fixed polar angle ϕ_ℓ. The BASE experiment relies instead on many such pickup loops placed symmetrically in the horizontal plane, whereas DMRadio-m^3 will use a full toroidal sheath. For the WISPLC experiment, the current design features a pickup loop at fixed ϕ_ℓ, but which is located outside the region of the external magnetic fields.] (Although other future instruments will use a toroidal magnetic field, for instance DMRadio-50L.) Therefore, as a straightforward generalisation of Ref. <cit.>, we first calculate the expected magnetic flux from the incoming GW for a solenoidal magnetic field and different locations of the pickup loop. Armed with an understanding of how the GW interacts with a detector for two explicit cases, in the next section we will then generalise our discussion for general geometries. Before proceeding, we note that in the main text we will only consider the interaction between GW and laboratory magnetic fields. The rationale for this is that axion haloscopes exclusively make use of magnetic fields, as larger energy densities can be built up in magnetic than electric fields. Further, the axion interaction with a magnetic field is controlled by ∂_t a, which for dark matter is much larger than ∇ a, which the electric field couples to (for a review, see App. <ref>). As the GW is both relativistic and couples differently than the axion, this final consideration does not apply, and therefore for completeness we briefly discuss the interaction with an electric field in App. <ref>. §.§ The GW signal for a solenoidal magnetic field Consider first the flux Φ_h(r) caught by a rectangular pickup loop at fixed polar angle ϕ_ℓ, radially ranging from [0,r] with height l, and positioned symmetrically about z=0. This scenario is equivalent to that depicted in Fig. <ref> with r_1=0 and r_2=r. We can then derive the equivalent result for an arbitrary width with r_1 < r_2 ≤ R from Φ_h(r_2) - Φ_h(r_1). Further, we will consider the case where r ≤ R, with R the radius of the detector, and r ≥ R separately. For the solenoidal magnetic field in Eq. (<ref>), we can calculate the effective current, induced magnetic fields, and flux using the formalism of Sec. <ref>. As already discussed, we will study the problem perturbatively in ω L ≪ 1. Complete expressions for all components of the current to order O[(ω L)^3] are provided in App. <ref>. Here, we state the results for the flux, which we write as a series in ω as Φ_h = Φ_h^(2) + Φ_h^(3) + ⋯, with Φ_h^(n) denoting the flux at O[(ω L)^n]. Explicitly, when r ≤ R we have Φ_h^(2) = e^-ıω t/144 √(2)ω^2 B_0 l r ( 30R^2 - 13 r^2 ) s_θ_h( h^+ c_θ_h s_ϕ_h - ϕ_ℓ + h^× c_ϕ_h - ϕ_ℓ), and Φ_h^(3) = - ı e^-ıω t/2304 √(2)ω^3 B_0 l r^2 [ h^+ c_θ_h s_2(ϕ_h - ϕ_ℓ){ 3 l^2 - 2 r^2 + 57 R^2 + ( l^2 - 22 r^2 + 27 R^2) c_2θ_h} + 2 h^×{ (l^2 + 2 r^2 + 18R^2 + [l^2 - 14 r^2 + 24 R^2] c_2θ_h )c_2(ϕ_h - ϕ_ℓ)+ 6(5r^2 - 12R^2) s_θ_h^2 }], where we employ the shorthands c_x ≡cos x and s_x ≡sin x. The factor of ı in Φ_h^(3) indicates that this contribution enters with a π/2 phase shift in time as compared to Φ_h^(2). 
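The leading order expression above is straightforward to evaluate, and doing so already exhibits the azimuthal cancellation, and its figure-8 remedy, discussed in what follows. A minimal Python sketch (illustrative parameter values rather than those of any actual experiment, natural units with c = 1 so that ω is an inverse length, and unit-normalised strain amplitudes):

import numpy as np

def flux2_vertical_loop(omega, B0, l, r, R, theta_h, phi_h, phi_l, hp, hx, t=0.0):
    # Leading-order flux for a vertical rectangular loop of height l and radial extent [0, r]
    # at polar angle phi_l inside a solenoidal field of radius R (valid for r <= R, H -> infinity).
    pref = np.exp(-1j * omega * t) / (144 * np.sqrt(2)) * omega**2 * B0 * l * r * (30 * R**2 - 13 * r**2)
    return pref * np.sin(theta_h) * (hp * np.cos(theta_h) * np.sin(phi_h - phi_l)
                                     + hx * np.cos(phi_h - phi_l))

pars = dict(omega=1e-2, B0=1.0, l=0.4, r=0.2, R=0.3, theta_h=0.8, phi_h=0.5, hp=1.0, hx=1.0)
phis = np.linspace(0, 2 * np.pi, 400, endpoint=False)
uniform = sum(flux2_vertical_loop(phi_l=p, **pars) for p in phis)                             # cancels
figure8 = sum((1 if p < np.pi else -1) * flux2_vertical_loop(phi_l=p, **pars) for p in phis)  # survives
print(abs(uniform), abs(figure8))

Summing individual loop fluxes in this way ignores the mutual inductances between the loops (see the footnote below); the sketch is only meant to exhibit the angular structure of the signal.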
If the loop instead is placed outside the solenoidal magnetic field, extending over ρ^'∈ [r_1,r_2] with r_2> r_1 > R, then the leading order flux is Φ_h^(2) = 5 e^-ıω t/48 √(2)ω^2 B_0 l R^4 r_2 - r_1/r_1 r_2 s_θ_h( h^+ c_θ_h s_ϕ_h - ϕ_ℓ + h^× c_ϕ_h - ϕ_ℓ), and Φ_h^(3) = ı e^- ıω t/2304√(2)ω^3 B_0 l R^4/(r_1 r_2)^2 [ 2 h^×{ - 72 (r_1 r_2)^2 lnr_1/r_2 s_θ_h^2 + c_2(ϕ_h - ϕ_ℓ)( (r_2^2 - r_1^2) (l^2 + 2R^2 + (l^2 - 8R^2)c_2θ_h ) + 24 (r_1 r_2)^2lnr_1/r_2) } + h^+ c_θ_h s_2(ϕ_h - ϕ_ℓ){ (3 l ^2 + R^2) (r_2^2 - r_1^2) + 60 (r_1 r_2)^2 lnr_1/r_2 - ((13R^2 - l^2 )(r_2^2 - r_1^2) + 12 (r_1 r_2)^2 lnr_1/r_2) c_2θ_h}]. These are the appropriate results one would use for WISPLC. Consider the symmetry of the above expressions. Firstly, observe that Φ_h^(2)≠ 0, Φ_h^(3)≠ 0, and in fact both polarisations contribute at each order. However, with a single pickup loop at ϕ=ϕ_ℓ, the considered configuration manifestly breaks azimuthal symmetry. If we were to restore this symmetry – through an array of pickup loops arranged symmetrically in ϕ as for BASE, or with a coaxial arrangement as for DMRadio-m^3 – then all trigonometric functions with the argument ϕ_h-ϕ_ℓ will vanish, and we would conclude Φ_h^(2) = 0 and Φ_h^(3)∝ h^×, the h^+ contribution also vanishing. Cylindrical symmetry hence induces a cancellation of the leading order sensitivity. This same conclusion was reached for a toroidal magnetic field in Ref. <cit.>. There, it was shown that breaking the azimuthal symmetry by using a non-circular pickup loop restored sensitivity at O[(ω L)^2], exactly as we find in Eqs. (<ref>) and (<ref>). In particular, Ref. <cit.> showed that by using a magnetometer or figure-8 configuration of the pickup loop, the GW flux was given by Φ_8^ = e^-ıω t/3√(2)ω^2 B_max r^3 R ln(1+a/R) s_θ_h( h^× s_ϕ_h - h^+ c_θ_h c_ϕ_h). We can imagine implementing a similar pickup loop configuration in the solenoidal case. For instance, we can take Eq. (<ref>) and combine the contribution from ϕ_ℓ∈ [0,π) with a π phase shift to that from ϕ_ℓ∈ [π,2π). (For BASE, we can conceive of implementing this by changing the orientation of the winding of the loops on, for instance, 0≤ϕ_ℓ<π.) Doing so, we obtain,[This is an oversimplification. One cannot simply add the contribution from a set of differentially spaced loops, as this neglects the mutual inductances between the loops. Instead, one would need to model the full sheath and compute the induced current density across it. We thank Joshua Foster for discussions on this point. ] Φ_8^ = e^-ıω t/36 √(2)ω^2 B_0 l r ( 30R^2 - 13 r^2 ) s_θ_h( h^× s_ϕ_h - h^+ c_θ_h c_ϕ_h). The results in Eqs. (<ref>) and (<ref>) are very similar – if we take B_0=B_max and set all spatial scales to L, the two only differ by a factor of ≃ 2 – suggesting that the change in geometry has only a minor impact. In Sec. <ref>, we will explain the origin of these cancellations in the instruments with full cylindrical symmetry in terms of the symmetry transformations of the various quantities introduced in Sec. <ref>, and in particular explain how the appearance or absence of different terms can be understood without an explicit calculation. Such arguments will allow us to determine optimised detector geometries that maximise the GW sensitivity. We emphasise that the analytic results above all assume the height of the solenoidal magnet, H, is parametrically larger than all other scales.
That we had to assume this is not unique to the GW signal: the equivalent axion flux (discussed in the next subsection) also only has an analytic form for parametrically large H. In Fig. <ref> we compare the leading order flux from the analytic result in Eq. (<ref>) to the exact result determined numerically, where we use the currents in App. <ref>, and with the bands denoting the uncertainty in the integration.[As a point of caution, we note that the analytic results must be treated carefully as there can be branch cuts in the integration.] In particular, we compute the non-trivial part of the flux, captured by I^(2)_+,× in Φ_h^(2) = e^-ıω tω^2 B_0 (h^+ I^(2)_+ + h^× I^(2)_×). We take all dimensions to match those of ADMX SLIC, and show results for the physical H and H →∞. Good agreement is observed in the large H limit, whereas for finite H the analytic results overestimate the amplitude of the flux. This can be understood from considering the integral over z that appears in Eq. (<ref>): schematically, we have ∫_-H/2^H/2 dz [A + (z-z')^2]^-3/2 = 2/A - 4/H^2 + O(H^-4), so that the first term neglected will suppress the flux amplitude. §.§ Strain sensitivity from recasting axion limits and projections Combining the above results, we now determine the GW sensitivity of axion haloscopes making use of a solenoidal external magnetic field. In detail, we recast the constraints on the axion photon coupling from BASE <cit.> and ADMX SLIC <cit.>, as well as the future instruments WISPLC <cit.> and DMRadio-m^3 <cit.>, as expected sensitivities to the amplitude of GWs. All of these instruments are designed with the goal of detecting a coherently oscillating axion dark matter background field, which takes the schematic form a(t) = √(2 ρ_ DM)/m_a sin (m_a t), with ρ_ DM the local dark-matter density, and m_a the unknown dark-matter mass. In the presence of a magnetic field, the axion background generates an effective current j_ eff = g_aγγ (∂_t a) B. For the configuration in Fig. <ref> the current induces the following magnetic flux, Φ_a = 1/4 g_aγγ (∂_t a) B_0 l (r_2^2 - r_1^2) + O(H^-2), where the result for finite H can again be determined numerically. Our goal is to derive sensitivity to the GW strain, h, by reinterpreting results on the axion-photon coupling g_aγγ, established assuming a signal flux as in Eq. (<ref>) (modified for the specific experimental configuration). To do so, we compare Φ_a and Φ_h, but further we account for the fact that in all expressions derived so far the axion and GW are treated as persistent monochromatic waves, when this is not the case for either one. The dark-matter axion is indeed persistent, but has a coherence time of τ_a = 2 π Q_a/m_a, with a quality factor of Q_a ∼ 10^6, indicating a highly coherent signal. The GW signal on the other hand is model dependent (specific examples of superradiance and primordial black hole (PBH) mergers are discussed in App. <ref>), but can be described as lasting for a duration T_h with coherence time scale τ_h, centered at a frequency f, so that the signal has a quality factor Q_h = τ_h f. For resonant instruments, one must also account for the quality factor and coherence time of the instrument, given by Q_r and τ_r = Q_r/f. The ultimate strain sensitivity is determined from considering the interplay of each of these scales, together with the experimental run time, an analysis of which we provide in App. <ref>.
The end result is that rather than simply matching the GW and axion fluxes, we instead arrive at Φ_h( h^+, h^×; ϕ_h, θ_h) = R_c Φ_a(g_aγγ), where the coherence ratio R_c accounts for the difference in coherence between the signals. As defined, R_c > 1 implies the GW signal is harder to detect than a naive matching of the flux would suggest, as for a fixed Φ_a we want to probe the smallest h ∝Φ_h values possible. Here, we will restrict our attention to a single case, where we take T_h = τ_h, and imagine a resonant instrument that spends a time T_m ≫τ_a, τ_h, τ_r scanning each axion mass. In this case, the coherence ratio can be computed to be R_c = ( T_m/τ_h)^1/4( Q_a/Q_h)^1/4{[ 1 Q_r < Q_a, Q_h,; (Q_a/Q_r)^1/4 Q_a < Q_r < Q_h,; Q_r/Q_h Q_h < Q_r < Q_a,; (Q_a/Q_r)^1/4Q_r/Q_h otherwise. ]. This expression is derived in App. <ref>; however, let us briefly describe the physical origin of each term. The first factor encodes the suppression that arises as the GW signal does not persist for the full time the instrument scans this frequency. The quarter scaling follows our assumption that T_m is the largest scale, implying that the sensitivities have entered the asymptotic scaling regime consistent with the Dicke radiometer equation <cit.> (see also Ref. <cit.>). The (Q_a/Q_h)^1/4 factor arises as signals that are more coherent are easier to discover, and was used in Refs. <cit.>. More coherent signals are narrower in the frequency domain, and therefore can generally be teased out over a smaller amount of background. In addition, for Q_a < Q_r there is a slight penalty to the axion signal of (Q_a/Q_r)^1/4 as the full axion signal is not resolved by the instrument. Finally, there is a strong penalty of Q_r/Q_h that applies to the GW signal whenever Q_r > Q_h. When this occurs, the GW fails to fully ring up the resonance response of the instrument, which strongly decreases the power it deposits, explaining the linear scaling of this factor, as opposed to the quarter scaling of all others. Using Eq. (<ref>) – and its generalisations in App. <ref> – for any given instrument, we can translate GW signals defined by their amplitude h, duration T_h, and coherence time τ_h, to an effective signal strength h/ R_c. We can then compare h/ R_c to the equivalent plane wave sensitivity shown in Fig. <ref>, which is derived simply from matching the flux; explicitly, Eq. (<ref>) assuming R_c=1. For comparison, Fig. 1 in Ref. <cit.> employed R_c=(Q_a/Q_h)^1/4 = 10^3/4 for their adopted Q_a=10^6 and Q_h=10^3. As explained in App. <ref>, this is only appropriate when comparing persistent signals (which PBHs are not) and in the case where Q_a is not smaller than both Q_r and Q_h. For the results in Fig. <ref>, the frequency range is fixed by the corresponding axion mass range, and therefore falls into the MHz band. The left figure demonstrates the estimated sensitivity of three existing instruments: ABRA, BASE, and ADMX SLIC (the reach for SHAFT is comparable to ABRA, see Ref. <cit.>). Note that BASE and ADMX SLIC perform a resonant search strategy for the axion, and therefore at present have a deeper sensitivity, but narrower frequency coverage than ABRA which completed a broadband search. For ABRA and BASE, the cylindrical symmetry of these instruments suppresses the leading order flux, for reasons we demonstrate in the next section. For this reason we also show the sensitivity the instruments could obtain if they implemented a figure-8 style geometry, as given in Eqs. (<ref>) and (<ref>).
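Before turning to the projections for future instruments, we note that the piecewise coherence ratio above is simple to package for reuse; a minimal Python sketch (the function name is ours) is:

def coherence_ratio(T_m, tau_h, Q_a, Q_h, Q_r):
    # R_c for a transient GW of duration T_h = tau_h and quality factor Q_h, compared
    # against an axion of quality factor Q_a, read out resonantly with quality factor Q_r
    # by an instrument that dwells for a time T_m at each frequency.
    prefactor = (T_m / tau_h) ** 0.25 * (Q_a / Q_h) ** 0.25
    if Q_r < min(Q_a, Q_h):
        bracket = 1.0
    elif Q_a < Q_r < Q_h:
        bracket = (Q_a / Q_r) ** 0.25
    elif Q_h < Q_r < Q_a:
        bracket = Q_r / Q_h
    else:  # Q_r exceeds both Q_a and Q_h
        bracket = (Q_a / Q_r) ** 0.25 * Q_r / Q_h
    return prefactor * bracket

The branch structure mirrors the discussion above: the only linear, rather than quarter-power, penalty arises when Q_r > Q_h, i.e. when the GW is too short-lived to fully ring up the resonator.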
On the right we show the sensitivity for future instruments, in particular DMRadio and WISPLC. For DMRadio, we show the projected reach of the 50L, m^3, and GUT variants of these instruments, in each case assuming they have adopted a figure-8 style readout. We assumed a solenoidal magnet for m^3, but toroidal for 50L and GUT. Beyond this, the only difference to these DMRadio projections and those derived in Ref. <cit.> is the use of the H →∞ results and the treatment of the coherence ratio, which here we take as R_c=1, leaving it for the signal predictions.[In detail, for 50L we took r=a=R = 0.11 m, for GUT r=a=R = 0.64 m, and for m^3, R=0.8 m, r_1 = 0.34 m, r_2 = 0.64 m, l=1.4 m. For WISPLC, we used R = r_1 = 0.063 m, r_2 = 0.19 m, l = 0.57 m. ] For WISPLC, we have repurposed their anticipated axion sensitivity with a resonant detection strategy, and assumed a single pickup loop employing Eq. (<ref>) (more precisely, WISPLC has two individual pickup loops <cit.>). For these cases, we chose the angular direction of GW and relative pickup loop direction which maximise the GW sensitivity. In computing all fluxes, we have adopted the H →∞ results; cf. Fig. <ref>. Further, as the GW signal depends on the incident direction, in each case we have simply taken one of the two polarisations, and chosen the incident direction to maximise the signal, although performing an angular average instead would only minimally impact the results. For comparison, on the right we also draw the expected effective signals h/ R_c coming from superradiance or PBH binaries as benchmarks for various parameter choices. Details are given in App. <ref> and effective signals with different parameters for superradiance and PBHs are given in Figs. <ref> and <ref> respectively. Example values of R_c for different instruments and two different benchmark signals are provided in Tab. <ref>, with f_* denoting a suitable detector reference frequency. The first benchmark (B1) is a GW signal of duration 1 s, with a coherence limited only by its finite duration. The second (B2) is an example of a persistent highly coherent signal. In both cases, we assume the frequency spectrum of the signal to be centered around the detectors resonant frequency. These two benchmarks are loosely inspired by the properties of GWs from primordial black hole mergers and from superradiance, respectively. Further discussion of each is provided in App. <ref>. The advantage of these simplified signals is to facilitate comparisons across different detector concepts for astrophysical signals. This is an alternative to the comparison of sensitivity curves in terms of noise spectral densities (see e.g. Ref. <cit.>) which requires detailed knowledge of the relevant noise sources. A further alternative is the use of the characteristic strain, h_c∼ Q_h^1/2 h, as advocated e.g. in Ref. <cit.>. However, in realistic cases the timescales mentioned above enter into the estimation of the characteristic strain, and hence again detailed knowledge of signal and detector properties becomes necessary for comparisons. Of course, the method used here of defining simplified benchmarks for comparisons also has its drawbacks. In particular, we do not cover searches for a stochastic GW background. The coherence ratio R_c can also be computed for more realistic signals. In the right panel of Fig. <ref>, we present two benchmark GW signals arising from either superradiance or PBH inspirals for illustration with the details of each signal being discussed in App. <ref>. 
For those examples we adopt the following simplified experimental setups: we have chosen Q_r = 10^4 and T_m = 1 ks for superradiance while Q_r = 10^4 and T_m = 1 ms for PBH. In the PBH case, the expected signal depends on the PBH masses and the curve presented in the figure corresponds to the maximal signals for a given frequency. Figure <ref> demonstrates that axion haloscopes can place competitive bounds on GWs in this frequency range. Due to the challenges mentioned above in comparing different sensitivity estimates, we refrain here from including other detector concepts in this figure. This is, however, a very active field and other concepts such as bulk acoustic wave devices <cit.>, levitated sensors <cit.>, interferometers <cit.>, other electromagnetic GW detectors <cit.> and indirect detection methods <cit.> have reached, or are expected to reach, similar sensitivities. As evident from Fig. <ref>, a further increase in sensitivity is needed to reach possible astrophysical (or cosmological) signals. Our work should be seen as part of the quest of paving a possible path towards this. (Technically, the ADMX SLIC scan strategy involved averaging 10,000 individual 32 ms scans at each frequency, which we here treat as a single 320 s scan.) § SELECTION RULES FOR GENERAL DETECTOR GEOMETRIES In Sec. <ref> we studied in detail the interaction of a GW with a solenoidal magnetic field, adding to the existing results where the wave interacts with a toroidal field <cit.>. In this section we seek to generalise these results with a symmetry based study of a broader class of magnetic fields and pickup loops. We will consider detectors with both solenoidal and toroidal magnetic fields (using Eqs. (<ref>) and (<ref>)), but in each case we will consider all possible directions for the pickup loops: designed to measure the induced magnetic field in the ê_z, ê_ϕ, and ê_ρ directions. A central goal of our analysis is to identify, by symmetry alone, what is the leading power in (ω L) of the GW flux, Φ_h, for a given detector. To do so, we will derive three selection rules that hold for the interaction of a GW with a cylindrical instrument, which allow us to study the more general case and identify promising detector geometries without explicit calculation. The results are catalogued in Tab. <ref>, supplemented by the outcome of explicit computations, but let us briefly summarise the key findings. The leading contribution we expect to Φ_h is at O[(ω L)^2]; however, we have already seen for the BASE experiment and for ABRA that the flux at this order vanishes. These are two examples of a general result: instruments with full cylindrical symmetry (of both the magnetic field and pickup loop) designed to search for axions have a leading power sensitivity of at most O[(ω L)^3]. This is a consequence of two observations. Firstly, as we will demonstrate, detectors with azimuthal symmetry are only sensitive to one of the two GW polarisations h^+ or h^×. They are also only sensitive to either a scalar or axion, as under parity the scalar transforms as h^+, whereas the axion transforms as h^× (see App. <ref>). Secondly, azimuthal symmetry enforces that only the h^+ contribution can enter at O[(ω L)^2]. Consequently, instruments with full cylindrical symmetry which can detect scalars coupled to EM may also detect h^+ GW flux at O[(ω L)^2], but no leading order contribution can appear for axion experiments which employ full cylindrical symmetry to enhance the axion signal.
If the cylindrical symmetry of the pickup loop is broken, the leading power sensitivity to the GW can be restored, at the cost of an O(1) factor to the axion signal. §.§ Three selection rules for the interaction of a GW with a cylindrical detector Let us now derive three general results regarding the form of Φ_h when we have a magnetic field and pickup loop with full cylindrical symmetry. [linewidth=1.5pt, roundcorner=6pt] Selection Rule 1: For an instrument with azimuthal symmetry, Φ_h ∝ h^+ at O[(ω L)^2]. We emphasise that this statement is coordinate dependent, and holds for the definition of h^+ adopted in Eq. (<ref>). In general, one can convert h^+ into h^× by a π/4 rotation around the propagation direction of the GW. The coordinate independent statement is that for geometries with azimuthal symmetry, only a single polarisation appears at leading order. Proof: At O[(ω L)^2], the effective current j_ eff receives a contribution only from M (as ∂_t P∝ O[(ω L)^3]), which itself depends on h_00 and h_ij, defined in Eq. (<ref>). Moreover, as both F( k· r) and F^'( k· r) are constant at leading order, h_00 and h_ij depend only on the GW direction through the polarisation tensors in Eq. (<ref>), to which the induced magnetic field and the flux are proportional. Equivalently, the leading order response matrix in Eq. (<ref>) is independent of the GW direction, explicitly D^mn( k) = D^mn_(2) + O[(ω L)^3], where D^mn_(2) depends on ω^2 but not k̂. These results hold in general. We now invoke azimuthal symmetry, which implies that the result cannot depend on the azimuthal angle associated with the direction of the GW, so that Φ_h(k̂) = Φ_h(R_z(φ)k̂) for an arbitrary angle φ. Combined with the leading order response matrix, we find Φ_h (k̂) = 1/2π∫_0^2π dφ Φ_h (R_z(φ)k̂) = e^- ıω t D_(2)^mn∫_0^2π dφ ∑_A h^A e_mn^A(R_z(φ) k̂) + O[(ω L)^3] = e^- ıω t/2√(2) h^+ sin^2 θ_h D^mn_(2)[ -1 0 0; 0 -1 0; 0 0 2 ]_mn + O[(ω L)^3]. By explicit computation, the h^× contribution has vanished, completing the proof. As a corollary, we note that at leading order an azimuthally symmetric detector can only depend on the incident GW direction through sin^2 θ_h. [linewidth=1.5pt, roundcorner=5pt] Selection Rule 2: For an instrument with azimuthal symmetry, the flux is proportional to either h^+ or h^×, but not both. This holds to all orders in (ω L). We reiterate that we derive all fluxes from the Biot-Savart law, which is valid only to O[(ω L)^3] (cf. footnote <ref>). Proof: Consider a GW of wave vector k incident on the detector. By the azimuthal symmetry, we can rotate k into the xz-plane without loss of generality. The configuration is now invariant under a P_y reflection, in particular P_y k = k. If the pickup loop is azimuthally symmetric, the flux will receive a contribution at r^' and P_y r^'. We now compare the contribution at these points, by evaluating the magnetic field transverse to the pickup loop, which has unit normal vector n̂'( r^'). n̂'(P_y r^') · B_h (P_y r^', k) = [κ_y P_y n̂'( r^')] · [ση_y P_y B_h ( r^', k)] = σκ_y η_y n̂'( r^') · B_h ( r^', k). Here we used the transformation properties of B_h and n̂' given in Eqs. (<ref>) and (<ref>). The various values of η_y and κ_y are summarised in Tab. <ref>, and recall h^+ and h^× transform with σ=± 1. If σκ_y η_y = -1, then the contributions of the flux at positions r^' and P_y r^' cancel, and hence the total flux vanishes when integrated over an azimuthally symmetric pickup loop. 
This uniquely selects the GW polarisation σ which can be measured in a detector with azimuthal symmetry, completing the proof. In Tab. <ref>, we summarise which polarisation survives for which geometry from this argument. Together with the first selection rule, this enables us to identify geometries which are potentially suitable to pick up the O[(ω L)^2] component of the induced flux. Let us work through several explicit examples, each assumed to be azimuthally symmetric, referring to the table for visualisations. ∘ Toroidal magnet with a horizontal pickup loop. Consider first a configuration with B_0 ∝ê_ϕ and n̂^'∝ê_z, as for instance used by ABRA. For these choices, the flux in Eq. (<ref>) transforms with σκ_y η_y = - σ. Accordingly, by selection rule 2 the h^+ polarisation (σ=+1) cannot contribute, only h^× will (σ=-1). Concretely, for ABRA this implies that any azimuthally symmetric pickup loop will receive no contribution proportional to h^+ for all possible positions z of the pickup loop. Selection rule 1 further implies that the leading order contribution can only occur at O[(ω L)^3]. Both of these results were observed by explicit calculation in Ref. <cit.>. ∘ Solenoidal magnet with an array of vertical pickup loops. Inspired by BASE we next consider a setup with B_0 ∝ê_z and n̂^'∝ê_ϕ, so that in Eq. (<ref>) σκ_y η_y = - σ, and only h^× can contribute. Explicitly, the flux generated by h^+ from a pickup loop at ϕ_ℓ will exactly cancel the flux the plus polarisation generates in a pickup loop at -ϕ_ℓ. Thus, even though the geometry has changed significantly, our polarisation selection rule applies identically to the ABRA-type configuration. This explains the cancellation for azimuthally symmetric solenoidal detectors observed in Sec. <ref>. Although the leading order GW flux vanished in the two instances above, using our selection rules we can straightforwardly conceive of azimuthally symmetric geometries where the cancellation does not occur, we simply require κ_y η_y = +1. One instance is the following example. ∘ Solenoidal magnet with a horizontal pickup loop. Taking B_0 ∝n̂^'∝ê_z, we have σκ_y η_y = + σ in Eq. (<ref>). As a result, the h^× contributions to the flux vanishes to all orders in (ω L) whereas the h^+ contribution survives, and with it a contribution to the flux of order (ω L)^2. In this case, one might worry about the feasibility of separating the tiny induced field from the large background magnetic field. Any detection strategy would exploit the AC nature of the GW flux, as opposed to the (ideally) DC static field, and potentially also the angular dependence of the GW. (For further discussion, see also Ref. <cit.>.) The two selection rules derived so far allow us to understand an important consequence for the detection of GWs with axion haloscopes. In particular, by selection rule 2, the flux in an azimuthally symmetric detector sensitive to the h^× or σ = -1 component has no dependence on h^+. But then by selection rule 1, such an instrument will not have the optimal sensitivity to the GW, since the O[(ω L)^2] contribution will vanish. This is an important observation for axion haloscopes, because as shown in App. <ref>, the pseudoscalar axion field transforms with σ=-1, and therefore to be sensitive to the axion one is forced into a configuration where the leading GW flux vanishes. The only way of evading this conclusion is to break the azimuthal symmetry. One could break this maximally by introducing a figure-8 configuration, as done in Ref. <cit.>. 
This would revive a contribution from h^+ at O[(ω L)^2]; however, as the axion induced magnetic field B_a has no angular dependence, its contribution will vanish. Hence, to detect both axion and GW, one could use a pickup loop with an opening angle smaller than 2 π, which avoids a complete cancellation. For this purpose, we present the results for Φ_h^(2) with a pickup loop with an arbitrary angle in the next subsection. Further discussion of these points is provided in App. <ref>, where we also demonstrate that a scalar, φ, which couples as φ F^2 transforms with σ=+1, and so can also be understood through our selection rules. [linewidth=1.5pt, roundcorner=6pt] Selection Rule 3: For an instrument with full cylindrical symmetry, Φ_h will contain only even or odd powers of ω. Proof: If the instrument has full cylindrical symmetry, then the flux will receive a contribution from r' and P r'=- r', where again P = P_x P_y P_z is the complete parity transformation. Let us thus consider the property of the induced magnetic flux under r' → P r'. We start by writing the GW in the proper detector frame as a power series in ω, h_μν = ∑_n = 2^∞ h^(n)_μν with h_μν^(n)∝ω^n. From Eq. (<ref>), we can see the components transform as follows, h_00^(n)(P r, k) = (-1)^n h_00^(n)( r,k), h^(n)_0i(P r, k ) = (-1)^n P_ij h^(n)_0j ( r,k), h^(n)_ij(P r, k) = (-1)^n P_ik h_kl^(n)( r,k) P_lj^T. This is identical to the transformations studied in Eq. (<ref>), but with σ→ (-1)^n. Either by proceeding through the same steps as used to derive Eq. (<ref>) or by exploiting the above analogy, we find, B_h^(n)(P r',k̂) = η (-1)^n P B_h^(n)( r',k̂), where we have decomposed the induced field as B_h = ∑_n=2^∞ B_h^(n), with B_h^(n)∝ω^n. Consequently, the flux contribution from the point P r' will be determined by n'(P r^') · B_h (P r^', k) = κη∑_n=2^∞ (-1)^n [ n'( r^') · B_h^(n) ( r^', k̂)]. The values of η and κ were given in Tab. <ref>. Clearly κη = ± 1. If κη = + 1, then for odd n, the flux from P r' and r' will cancel, whereas for κη = - 1 a similar conclusion is reached for even n, completing the proof. (Note this result is independent of σ.) The combination of our three selection rules shows that instruments with full cylindrical symmetry have a highly restricted form of the induced GW flux. In Tab. <ref>, we apply these selection rules for all possible pickup loop orientations, and for both solenoidal and toroidal external fields, always assuming full cylindrical symmetry. (Similar results can be derived for a scalar and axion, which we study in the appendix and summarise in Tab. <ref>.) In the first line of each cell, we denote the surviving polarisation (h^+ or h^×), whether even or odd powers of ω L contribute, and the leading order contribution. The second line provides the explicit leading order flux Φ_h. The flux is computed assuming a parametrically large H, and for all cases we took ρ' ∈ [0,r], with r ≤ R. For an array of pickup loops or a continuous pickup surface (which are required for cylindrical symmetry in the case of a vertical pickup loop) we integrate the induced flux over the azimuthal angle ϕ_ℓ. In the absence of detailed information on the detector setup and resulting inductance, this serves as a proxy to estimate the total flux, and most importantly, will drop out when recasting axion search results (see Sec. <ref>) since the same factor appears for the axion case. (See also the discussion in footnote <ref>.)
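The azimuthal average at the heart of Selection Rule 1 is also easy to check numerically: averaging the polarisation tensors over rotations of the GW direction about ê_z leaves only the h^+ structure proportional to sin^2θ_h diag(-1,-1,2). A minimal Python sketch (assuming only NumPy; the function name is ours):

import numpy as np

def pol_tensors(theta, phi):
    k = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])
    V = np.array([-np.sin(phi), np.cos(phi), 0.0])  # e_phi at azimuth phi
    U = np.cross(V, k)
    return ((np.outer(U, U) - np.outer(V, V)) / np.sqrt(2),
            (np.outer(U, V) + np.outer(V, U)) / np.sqrt(2))

theta_h = 0.9
phis = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
avg_plus = np.mean([pol_tensors(theta_h, p)[0] for p in phis], axis=0)
avg_cross = np.mean([pol_tensors(theta_h, p)[1] for p in phis], axis=0)
target = np.sin(theta_h) ** 2 / (2 * np.sqrt(2)) * np.diag([-1.0, -1.0, 2.0])
assert np.allclose(avg_plus, target)  # only the h^+ structure survives the average
assert np.allclose(avg_cross, 0)      # the h^x contribution averages to zero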
The result for a horizontal pickup loop in a toroidal magnetic field was previously presented in Ref. <cit.>; our new result corrects this expression by a factor of 1/3, which is due to taking into account the contribution from the previously overlooked effective surface current. For the O[(ω L)^2] results in Ref. <cit.>, obtained from the use of a figure-8 pickup loop to break the azimuthal symmetry, the surface current contribution vanishes, leaving them unchanged. Such cases are considered explicitly in the next subsection. For each case in the table, the selection rules determine the leading order contribution to Φ_h without any explicit calculation being required, which achieves one of the central goals of this work. For example, take a solenoidal magnetic field with a radial pickup loop (n̂' ∝ê_ρ). As σκ_y η_y = σ in Eq. (<ref>), only h^+ can contribute from selection rule 2. However, as κη = -1 in Eq. (<ref>), only odd powers of ω contribute by selection rule 3, and therefore the leading order contribution is at O[(ω L)^3]. If, however, the loop were moved up or down in the vertical direction, breaking the cylindrical symmetry, selection rule 3 would no longer hold, and we would have a contribution at O[(ω L)^2] as allowed by selection rule 1. Explicitly, if we place the loop at z' ∈ [-H/2,-H/2+l], and then expand the flux assuming for simplicity H ≫ l ≫ r,a,R, we find Φ_h = e^-ıω t/3√(2) h^+ ω^2 B_0 π r^2 l^2 sin^2 θ_h, so that a leading order contribution has been resurrected. Consider also the case of a toroidal magnet with a radial pickup loop. Now σκ_y η_y = -σ and κη = +1, so that only h^× and even orders of ω L will contribute. But by selection rule 1, for such a configuration the leading order contribution cannot occur until O[(ω L)^4], where we already expect corrections from our use of the Biot-Savart law. Again, placing the loop at the vertical bottom (or top) of the instrument would parametrically enhance the flux to Φ_h = - ı e^-ıω t/96 √(2) h^×ω^3 B_maxπ r^2 a R (a+2R) sin^2 θ_h. §.§ Increased GW sensitivity for pickup loops that break azimuthal symmetry All three selection rules above followed from the full azimuthal symmetry of the detector. When broken, the restrictions the rules impose are lifted, and in many geometries this allows for a parametric improvement in the GW flux. While the breaking can occur at the level of the magnetic field or pickup loop, the latter is far more practical. For instance, as discussed in Ref. <cit.>, an instrument could use multiple pickup loops, one for the axion, and another for the GW. With such a possibility in mind, here we compute the leading O[(ω L)^2] flux for various geometries in the case where the detector has a pickup loop that spans an opening angle ϕ_ℓ∈ [ϕ_1, ϕ_2] for a horizontal or radial readout, or a set of loops that span a fraction [ϕ_1, ϕ_2] of a toroid in the case of the vertical loop. As it can be readily restored, we fix ϕ_h = 0 throughout for ease of notation, and assume ρ^'∈ [0,r] for the radial expanse of the loop. We again employ the shorthand c_x = cos x and s_x = sin x.
For a solenoidal magnetic field, the three results determined by the orientation of the pickup loop n̂' are as follows, ê_z: Φ_h = e^-ıω t/768√(2)ω^2 B_0 r^2 [ 12 h^× r^2 c_θ_h (c_2 ϕ_2 - c_2ϕ_1) - 3 h^+ r^2 (3 + c_2θ_h) (s_2ϕ_2 - s_2ϕ_1) + 8 h^+ (ϕ_2 - ϕ_1) ( 11r^2 + 14R^2 + 16 R^2 lnR/H) s_θ_h^2 ], ê_ϕ: Φ_h = e^-ıω t/144√(2)ω^2 B_0 r l ( 30R^2 - 13 r^2 ) s_θ_h[ h^+ c_θ_h ( c_ϕ_2 - c_ϕ_1 ) + h^× ( s_ϕ_2 - s_ϕ_1 ) ], ê_ρ: Φ_h = 5 e^-ıω t/48√(2)ω^2 B_0 r l ( 2 R^2 - 3 r^2 ) s_θ_h[ h^+ c_θ_h ( s_ϕ_2 - s_ϕ_1) - h^× ( c_ϕ_2 - c_ϕ_1 ) ]. In each case we only state the result to O[(ω L)^2] and leading order in 1/H. Observe that when ϕ_1=0 and ϕ_2=2π, only the result for n̂' ∝ê_z survives, consistent with Tab. <ref>. For a toroid, the analogous results are as follows (again working only to O[(ω L)^2]), ê_z: Φ_h = e^-ıω t/12√(2)ω^2 B_max r^3 R ln( 1 + a/R) s_θ_h[ h^+ c_θ_h (c_ϕ_2 - c_ϕ_1 ) + h^× (s_ϕ_2 - s_ϕ_1) ], ê_ϕ: Φ_h = - e^-ıω t/32√(2)ω^2 B_max r^2 R l ln( 1 + a/R) [ 4 h^× c_θ_h (c_2ϕ_2 - c_2ϕ_1) - h^+ (3 + c_2θ_h) (s_2ϕ_2 - s_2ϕ_1) ] + 3e^-ıω t/8√(2) h^+ ω^2 B_max (ϕ_2 - ϕ_1) r^2 a R l(a+2R)/H^2 s_θ_h^2, ê_ρ: Φ_h = - e^-ıω t/16√(2)ω^2 B_max r^2 R l ln( 1 + a/R) [ h^+ (3 + c_2 θ_h) (c_2 ϕ_2 - c_2 ϕ_1) + 4 h^× c_θ_h ( s_2 ϕ_2 - s_2ϕ_1 ) ]. Again taking ϕ_1 = 0 and ϕ_2 = 2π, there will only be a contribution n̂' ∝ê_ϕ, reproducing Tab. <ref>. That contribution only appears at O(1/H^2), and this is the only non-leading contribution in H →∞ we included. § DISCUSSION Both axions and GWs induce effective polarisation and magnetisation terms in Maxwell's equations. While the formalism for exploiting this effect to search for the axion has been in place for four decades <cit.>, the GW analogue and its synergies with axion searches remains nascent. Our work expands our understanding of this latter case, and by focussing on lumped-element circuits for axion detection (such as ABRADACABRA, SHAFT, BASE, ADMX SLIC, WISPLC and the DMRadio program), we estimate their sensitivity to current and future high-frequency GW searches, as shown in Fig. <ref>. We also expand the theoretical foundations of the interaction of GWs with instruments operating in the magnetoquasistatic regime, extending the earlier results of Ref. <cit.> in a number of ways. Most importantly, we have developed a symmetry based formalism that largely fixes the form of the leading GW signal in various instruments. We considered external magnetic fields with a cylindrical symmetry – toroidal or solenoidal fields – as used in all ongoing and planned axion haloscopes. We derived selection rules for the signal strength, which, based on symmetry alone, fix the leading power sensitivity in ω L, and hence parametrically determine the GW strain sensitivity without calculation. This allows one to immediately determine the impact of different geometries for the external magnetic (or electric) field and the pickup loop on the achievable GW strain sensitivity. As summarised in Tab. <ref>, highly symmetric detectors place strong restrictions on the form of the induced flux as a direct consequence of the tensor nature of the GW. These arguments can be extended to a scalar or pseudoscalar (axion) coupled to EM, as we show in App. <ref>. Taken together, we observe that in optimising the sensitivity to axions, existing instruments can often parametrically suppress the GW signal. 
Fortunately, however, the observed cancellation can quite easily be remedied by minimally breaking the instruments cylindrical symmetry, for instance by changing the position or shape of the pickup loop. We demonstrated this for different detector geometries, obtaining a parametric increase for the GW sensitivity. Our work provides several technical improvements on the formalism and initial studies of Ref. <cit.>. First, we include the contribution from effective surface currents induced by the GW, arising due to the change in effective magnetisation at the boundary of the static magnetic fields. These effects are generically of the same order as the effects obtained from the interaction of the GW with the magnetic field itself, and for instance modify some of the toroidal results of Ref. <cit.>, relevant for ABRACADABRA, SHAFT, and DMRadio-50L. Second, we give a thorough discussion and prescription of how to compute sensitivities for transient signals, focussing on resonant detectors. This is particularly important for high-frequency GWs, since the duration of the expected signals can be much shorter than the observation time. We include the effect of finite coherence time and finite duration of the signal, the scanning strategy and the quality factor of the instrument. Our prescription is based on bootstrapping the axion search results, allowing an immediate recasting of existing and upcoming axion searches in terms of GW searches. Third, we introduce linear response matrices describing the detector response to the GW signal. With this new formalism, we recover the results of Ref. <cit.>, but the alternative approach played a key role in revealing the symmetry properties of the detectors, facilitating the derivation of the selection rules mentioned above. With all this at hand, we provide analytical results for the effective current induced by a GW up to O[(ω L)^3] for a solenoidal magnetic field configuration. Much work remains. The symmetry based arguments introduced here can be deployed for the full set of signals axion haloscopes could detect, including, for instance, dark photons. Such arguments can help determine the full physics reach of the future axion dark-matter program, see Ref. <cit.>. A dedicated GW search will require a targeted data analysis strategy as well as a detailed detector simulation. While this is beyond the scope of the current paper, any such analysis can draw on the tools we have provided here. A further open question retains to the impact of the mechanical response of the detector to GWs, which may become relevant once the GW frequency lies above the lowest mechanical resonance mode, as discussed in Refs. <cit.>. We leave this to future work. The achievable strain sensitivities (see Fig. <ref>) we obtain by bootstrapping the axion searches still lie above any expected signals from astrophysical or cosmological sources. Nevertheless, the sensitivities are competitive with other experiments and proposals in this frequency regime. We aim with this work to join the worldwide effort of paving the way towards high-frequency GW detection. We thank Tael Coren, Jack Devlin, Sebastian Ellis, Joshua Foster, Jai-chan Hwang, Joachim Kopp, Stefan Ulmer, Jonathan Ouellet, Yotam Soreq, and Zhongyue Zhang for very helpful discussions, and Francesco Muia for comments on a draft version of this work. CGC is supported by a Ramón y Cajal contract with Ref. RYC2020-029248-I. 
The work of SML is supported in part by the Hyundai Motor Chung Mong-Koo Foundation Scholarship, and funded by the Korea-CERN Theoretical Physics Collaboration and Developing Young High-Energy Theorists fellowship program (NRF-2012K1A3A2A0105178151). § ADDITIONAL DETAILS OF GW ELECTRODYNAMICS In this first appendix, we provide a derivation of the key elements of GW electrodynamics required for the discussion in the main text. In the first two subsections we will provide a detailed derivation of Eqs. (<ref>) and (<ref>), which formed the foundation of the physical effect explored in the main text. After this, we describe the origin of the effective surface current contributions that had been missed in previous analyses, and lastly we explain why GW effects start at O[(ω L)^2] in the proper detector frame. §.§ External currents The external static fields, f_μν, upon which GWs interact to generate electromagnetic effects satisfy Maxwell's equations in flat spacetime ∂_ν f^μν =[j^μ]_FLAT, ∂_ν f_ αβ+∂_α f_βν+∂_β f_να = 0, where [j^μ]_FLAT is the external electromagnetic current sourcing the external fields. For a system of electrons following trajectories described by x_n^μ(u), with n indexing the various particles, the current is given by <cit.> [j^μ]_FLAT = ∫ d u ∑_n e δ^(4)(x_n(u)-x) d x_n^μ(u)/d u= ∑_n e δ^(3)( x_n(t)- x) d x^μ_n(t)/d t. For example, solenoidal and toroidal static configurations in which the electric charges are confined to cylindrical surfaces of constant radius lead to [j^μ]_FLAT∝δ(ρ-R). For the case of a solenoid infinitely extended in the z-direction j_FLAT^ = B_0 δ(ρ-R) ê_ϕ. Similarly, for the toroidal configurations in the limit of infinite height j^_FLAT = B_maxR/ρ[ δ(ρ-R)- δ(ρ-(R+a)) ] ê_z, because there are two cylindrical surfaces. Observe also that charge conservation directly follows from Eq. (<ref>) ∇· j_FLAT = ∑_n e ∇·[ δ^(3)( x_n(t)- x) ] d x^μ_n(t)/d t = - ∂/∂ t∑_n e δ^(3)( x_n(t)- x) = - ∂_t j_FLAT^0. Let us now discuss how the external currents are modified by the presence of GWs in the proper detector (PD) frame. A generalisation of Eq. (<ref>) to an arbitrary spacetime can be found by noticing that δ^(4)(x_n(u)-x) /√(-g) transforms as a scalar.[Recall, it is the spacetime volume d^4 x √(-g) rather than d^4 x that transforms as a scalar.] The current associated with a set of charges following spacetime trajectories x_n^μ(u) is thus given by <cit.> j^μ = ∫ d u ∑_n e 1/√(-g)δ^(4)(x_n(u)-x) d x_n^μ(u)/d u= 1/√(-g)∑_n e δ^(3)( x_n(t)- x) d x_n^μ(t)/d t . A calculation similar to that in Eq. (<ref>) shows that ∂_μ(√(-g) j^μ) =0. This is equivalent to ∇_μ j^μ =0, which can be proven employing the properties of Christoffel symbols. In the proper detector frame, the effect of a GW on the charged particles is described by a Newtonian force. As stated above, in this work we assume that the experimental apparatus is rigid, or more precisely, that such a Newtonian force does not alter the trajectories of the particles. In particular, this implies that in the proper detector frame we can use the same x_n^μ(t) that led to Eqs. (<ref>) and (<ref>). We conclude that [√(-g) j^μ]_PD = [j^μ]_FLAT. §.§ Maxwell's equations in the spacetime of a gravitational wave Maxwell's Equations in an arbitrary spacetime read ∇_ν( g^αμ F_αβ g^βν) = j^μ, ∇_ν F_αβ+∇_α F_βν+∇_β F_να=0. where the external current is defined by Eq. (<ref>). 
Due to the properties of the Christoffel symbols and the fact the electromagnetic tensor is antisymmetric, these equations can be cast as <cit.> ∂_ν( √(-g) g^αμ F_αβ g^βν) = √(-g) j^μ, ∂_ν F_αβ+∂_α F_βν+∂_β F_να =0. When considering a passing GW in the proper detector frame, the equations are equivalent to Maxwell's equations in a flat spacetime with the GW effects described by an effective current. To see this, we note that the expression in parenthesis to first order in h_μν = g_μν -η_μν is given by √(-g) g^αμ F_αβ g^βν = (1+h/2) F^μν -F^μαh^ν_α +F^ναh^μ_α+ O(h^2), where we employ √(-g) = 1+h/2 + O (h^2). This motivates us to define the following current effective current, j_ eff^μ≡∂_ν( - 1/2 h F^μν + F^μαh^ν_α - F^ναh^μ_α), which, together with Eqs. (<ref>) and (<ref>), leads to the following form for Maxwell's equations ∂_ν F^μν = [j^μ]_FLAT +j_ eff^μ, ∂_ν F_αβ+∂_α F_βν+∂_β F_να =0. This completes our justification of Eqs. (<ref>) and (<ref>). We combine Maxwell's equations in the presence of a GW with their flat space analogues in Eq. (<ref>) to isolate the induced fields, defined by F^h_ μν≡ - f_μν + F_μν. In particular, these fields satisfy the following equations ∂_ν F^h μν = j^μ_ eff, ∂_ν F^h_αβ+∂_α F^h_βν+∂_β F^h_να = 0. Alternatively, one may also write these equations in terms of a contravariant electromagnetic field strength tensor, g^αμ F_αβ g^βν- f^μν, in which case the second set of equations acquire a source term, as shown in Refs. <cit.>. In the absence of external currents, this leads to an ambiguity in the definition of the electromagnetic field. This is the well-known duality of Maxwell's equations in the absence of charges and currents. However, in contrast to the claims of those references, such an ambiguity does not arise here because of the proper detector external current. Furthermore, the derivation presented here shows that it is not necessary to define the electric and magnetic field vectors in curved spacetime in order to describe the effect of GWs propagating in external electromagnetic fields, as one can always work at the level of F^μν. For simplicity, in the main text and in the rest of the manuscript, we drop the FLAT subscript from the current in Eq. (<ref>). §.§ Effective surface currents Comparing the magnetic fields in Eqs. (<ref>) and (<ref>) with the currents in Eqs. (<ref>) and (<ref>), one can note that the currents close to the surface ρ=R are given by j = δ(ρ -R) n̂× B, with n̂ = - ê_ρ for the solenoid and n̂ = ê_ρ for the toroid. This can also be written as j = δ(ρ -R) K, with K = n̂× B, where K is often called the surface current density. The result in Eq. (<ref>) is a particular example of a phenomenon that takes place whenever there is discontinuity in the magnetic field across a surface. It is possible to prove (see, for instance, Ref. <cit.>) that at the interface of two bodies with different values of H = B - M_ eff, Maxwell's equations predict the existence of a surface current density, K = n̂×( H_2- H_1), where n̂ is the unit vector normal to the surface from medium 1 to 2. Note that Eq. (<ref>) is a particular case of Eq. (<ref>) when the magnetic field vanishes on one side of the surface ρ=R, and M_ eff=0 everywhere, i.e. there is no GW or (pseudo-)scalar field. In the presence of GWs or a (pseudo-)scalar field, the magnetisation does not vanish. For the toroid and solenoid cases of Eqs. (<ref>) and (<ref>), in addition to Eq. (<ref>), there is an effective current on the surface given by j_S, eff = δ(ρ -R) n̂× M_ eff|_ρ=R. 
For instance, as we show in App. <ref>, a scalar coupled to electromagnetism will generate M_ eff =-φ B_0, and therefore j_S, eff = -φ j. For axions, however, M_ eff = a E and j_S, eff = 0, as the tangential component of the electric field must be continuous across any boundary <cit.>. This will be found by an explicit calculation in App. <ref>. To obtain the effective surface for GWs, Eq. (<ref>) can be cast as û· j_S, eff = ±δ(ρ -R) (û×ê_ρ) · M_ eff|_ρ=R, with û an arbitrary unit vector, or in components j_S, eff,ϕ = ∓ δ(ρ -R) (ê_z )_i (- h_ij B_j - 1/2 h B_i + h_jj B_i ), j_S, eff,z = ± δ(ρ -R) (ê_ϕ )_i (- h_ij B_j - 1/2 h B_i + h_jj B_i ), where we assume the system is interacting purely with a magnetic field. §.§ (ω L) power counting in the proper detector frame The proper detector frame is the closest analogue to the inertial reference frame of the laboratory, and therefore allows for a simple description of the experimentally generated electromagnetic fields. As Eq. (<ref>) demonstrates, in the proper detector frame the leading order frequency contribution to h occurs at O[(ω L)^2]. Accordingly, when the GW interacts with static electromagnetic fields, the leading order gauge invariant contribution to the induced magnetic field and measurable magnetic flux will also be (ω L)^2, as seen explicitly in, for example, Refs. <cit.>. In this appendix we explain the physical origin of this scaling, and in particular justify the absence of any contribution at O(ω L). The starting point is that the proper detector frame corresponds to Fermi normal coordinates <cit.>, which are freely falling locally inertial coordinates defined along a geodesic, x_0. In effect, Fermi normal coordinates are the extension of Riemmann normal coordinates to an entire worldline. In terms of the specified geodesic, we can evaluate the metric at an arbitrary spacetime point as follows, g_μν(x) = g_μν(x_0) + (x-x_0)^α∂_α g_μν(x_0) + (x-x_0)^α (x-x_0)^β∂_α∂_β g_μν(x_0) + …. To wit, we can take x_0 to represent the worldline of the center of our detector, and then x could represent an arbitrary point in the detector where we wish to evaluate the impact of the GW. This implies that parametrically (x-x_0) ∼ L, where as throughout the main text L is a characteristic length scale of the instrument. Now, Fermi normal coordinates are locally flat, and therefore we have g_μν(x_0) = η_μν. Performing the usual linear decomposition of the metric for a GW propagating in flat space, g_μν = η_μν + h_μν, we can therefore identify the contribution from the GW with the derivative terms in Eq. (<ref>). Assuming there is no backreaction from the instrument on the GW, and in addition that the detector can be treated as rigid, if we have a monochromatic incident source of frequency ω, then ∂^n g_μν(x_0) ∼ω^n g_μν(x_0). In this language, the absence of an O(ω) term in the description of the GW is reduced to explaining why the single derivative contribution to Eq. (<ref>) must vanish. This is straightforward: Fermi normal coordinates are a locally inertial reference frame, so that all Christoffel symbols vanish along x_0, and therefore ∂_α g_μν(x_0) = 0. Accordingly, the GW in Fermi normal coordinates has a leading contribution at O(ω^2). The final ingredient is to transform from a freely falling frame to the non-inertial frame of a laboratory on the surface of the Earth, which define the proper detector frame. 
However, it can be shown that this transformation introduces contributions only at significantly lower frequencies than we consider here <cit.>, and therefore does not impact the above argument. § SCALAR AND AXION ELECTRODYNAMICS In this appendix, we expand on our discussion of scalar and axion electrodynamics. In particular, we will compare the well studied pseudoscalar axion interaction -14 a F^μνF̃_μν,[We define the dual field strength tensor as F̃^μν≡1/2ϵ^μναβ F_αβ, where ϵ^μνρσ is the ordinary totally antisymmetric symbol (with ϵ^0123=1).] to the scalar equivalent[We remain agnostic as to the UV details of this scenario, for a discussion, see e.g. Refs. <cit.>. Our focus is simply to study how such a coupling would differ from the conventional axion interaction.] L⊃ - 1/4φ F^μν F_μν. In the main text, we used this as a motivating example, as it involved several of the features we explored for GW signals, explicitly the importance of the symmetry of the detector and further the additional contributions we receive from the boundaries of the detector. Here we will provide a more complete discussion of the specific differences between axion and scalar electrodynamics, and then determine the flux scalar dark matter could induce in various lumped-element circuit instruments. In direct analogy to the axion, a scalar field that couples as in Eq. (<ref>) will modify Maxwell's equations. If we work perturbatively in the coupling g, writing F^μν = F^μν_0 + F^μν_a/φ + O(g^2), we have the equations of motion for the induced fields, ∂_ν F^μν_φ = ∂_ν (φ F_0^νμ) = (∂_νφ) F_0^νμ - φ j^μ, ∂_ν F^μν_a = ∂_ν ( a F̃_0^νμ) = (∂_ν a) F̃_0^νμ, The final results follow from ∂_νF̃^μν = 0 for the axion, and ∂_ν F_0^μν = j^μ for the scalar. The presence of a coupling to the current, which also must be included for the GW, is a novelty that does not arise for the axion. For instance, this interaction will give rise to an oscillating contribution to the fields generated by j^μ, as explored in Ref. <cit.>. (There have also been discussions of using axion haloscope inspired instruments to detect , see Refs. <cit.>.) For both cases, Eq. (<ref>) demonstrates that we can define an effective magnetisation and polarisation tensor as in Ref. <cit.>. In particular, we have M_φ^μν = φ F^μν and M_a^μν = a F̃^μν, which yields explicit polarisation and magnetisation vectors, P_φ = φ E, M_φ = -φ B, P_a = a B, M_a = a E, and, therefore, the following inhomogeneous equations for the induced fields, ∇· E_φ = - E·∇φ - φρ, ∇· E_a = - B·∇ a, ∇× B_φ = ∂_t E_φ - (∇φ) × B + (∂_t φ) E - φ j, ∇× B_a = ∂_t E_a + (∇ a) × E+ (∂_t a) B. From these, we can determine the induced fields a scalar or axion would generate for various laboratory field configurations. We now specialise to the situation where the background electric field vanishes, as is commonly employed in experiments. The effective currents that will then source magnetic fields are determined from Eq. (<ref>) as j^a_ eff = (∂_t a) B and j^φ_ eff = -∇× (φ B). From here, in analogy to Eq. (<ref>), we find the following transformation properties for these currents j_ eff^a(P_α r,P_α k) = η_α P_α j_ eff^a( r, k), j_ eff^φ(P_α r,P_α k) = - η_α P_α j_ eff^φ( r, k). In the language we introduced for gravitational waves, we see that the axion and scalar transform with σ=-1 and +1 respectively, being the spin-0 counterparts of h^× and h^+. According to selection rule 2 derived in Sec. 
<ref>, for a detector with cylindrical symmetry, there is only sensitivity to either h^× and or h^+.[We emphasise once more that this statement assumes the polarisations being defined as in Eq. (<ref>).] The proof of the selection rule only used Eq. (<ref>) which followed from Eq. (<ref>), that is directly analogous to the transformations in Eq. (<ref>). Accordingly axions and scalars must also obey selection rule 2, demonstrating that when full azimuthal symmetry is in place an instrument can only be sensitive to one of the two scalar waves. This is shown explicitly in Tab. <ref>, which summarises our symmetry based results for those geometries that are sensitive to an axion and scalar. This is the spin-0 analogue to Tab. <ref>. Next we will expand upon these claims by presenting the explicit results for several geometries (these same three cases were considered for the GW in Sec. <ref>). Toroidal magnet with a horizontal pickup loop. We first study an instrument with B_0 ∝ê_ϕ and where n̂^'∝ê_z. An explicit example of such a geometry is ABRACADABRA. The magnetic field in this case was already provided in Eq. (<ref>), and again the toroid has inner and outer radii given by R and R+a, and a height H which we take to be parametrically larger than both. The induced field in the z direction – the one the pickup loop will measure – for each case can be computed as, B^φ_z( r') = 0, B^a_z( r') = (∂_t a) B_0 R [ ln( 1+ a/R) - a(a+2R)/H^2]. As in the main text, r' = (ρ', ϕ', z') is the cylindrical coordinate system where the field is measured and integrated over by the pickup loop. The axion contribution here is only stated to O(H^-2), whereas the scalar result is exact: there is no scalar induced magnetic field in the z direction. This results from the azimuthal symmetry of the toroid, a toroid with a wedge in ϕ removed will have a non-zero B^φ_z( r'). Regardless, the existing and planned toroidal axion instruments operating in this range, such as ABRACADABRA, SHAFT, or DMRadio-50L would have an exactly vanishing sensitivity to a scalar dark-matter signal, as they would for the h^+ polarisation of a GW, both consistent with selection rule 2. Solenoidal magnet with an array of vertical pickup loops. Next we consider the primary configuration studied in the main text, where B_0 ∝ê_z and n̂^'∝ê_ϕ, as pursued by, for example, the BASE collaboration. We will compute the azimuthal component of the magnetic field, as measured by a vertical pickup loop, and again assume that the height of the solenoid is parametrically larger than the other scales. Adopting the magnetic field in Eq. (<ref>), to O(H^-2) we have[To facilitate comparison with the axion, in the scalar case we used the fact that the phase of the dark-matter wave will be ∼ m(t- v· x) to write ∇φ = - v (∂_t φ).] B^φ_ϕ( r') = 2 (∂_t φ) v B_0 z' R^2/H^2sinθ_φsin (ϕ'-ϕ_φ), B^a_ϕ( r') = 1/2 (∂_t a) B_0 ρ' [ 1 - 2R^2/H^2], with (θ_φ, ϕ_φ) the coordinates of the scalar field's velocity on the celestial sphere. If we compare the two results, we see that if the experiment measures the magnetic flux within a pickup loop symmetric in z' – as done in ADMX SLIC or BASE – then the axion flux will grow proportional to the height of the loop, whereas the scalar flux will exactly vanish as B^φ_ϕ( r') ∝ z'. Further, for any loop in a plane of constant ϕ', if we wrap it in a full circle in ϕ' as DMRadio-m^3 plans, then again while the axion flux increases the scalar flux will vanish, consistent with selection rule 2. 
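To make the cancellation argument concrete, the short sympy sketch below integrates the two induced fields quoted above over a vertical rectangular pickup loop that is symmetric about z' = 0, spanning ρ' ∈ [0, r] and z' ∈ [-l/2, l/2]; the rectangular loop shape and all symbol names are illustrative assumptions rather than the geometry of any specific instrument.

```python
# Minimal sympy sketch: flux of the scalar- vs axion-induced B_phi through a
# vertical pickup loop symmetric about z' = 0. Field expressions are those quoted
# in the text (couplings absorbed as there); loop dimensions are illustrative.
import sympy as sp

rho, r, l, R, H, B0, dta, dtphi, v, th, dphi = sp.symbols(
    "rho r l R H B0 dta dtphi v theta_phi dphi", positive=True)
z = sp.symbols("z", real=True)

# Axion-induced azimuthal field: (1/2) (d_t a) B_0 rho' [1 - 2 R^2/H^2]
B_a = sp.Rational(1, 2) * dta * B0 * rho * (1 - 2 * R**2 / H**2)
# Scalar-induced azimuthal field: 2 (d_t phi) v B_0 z' R^2/H^2 sin(theta_phi) sin(phi'-phi_phi)
B_s = 2 * dtphi * v * B0 * z * R**2 / H**2 * sp.sin(th) * sp.sin(dphi)

flux_a = sp.integrate(B_a, (rho, 0, r), (z, -l / 2, l / 2))
flux_s = sp.integrate(B_s, (rho, 0, r), (z, -l / 2, l / 2))

print(sp.simplify(flux_a))  # B0*dta*l*r**2*(H**2 - 2*R**2)/(4*H**2): grows with loop height l
print(sp.simplify(flux_s))  # 0: the scalar field is odd in z', so a z'-symmetric loop sees no flux
```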
It is straightforward to confirm that these results persist to all orders in H. In other words, solenoidal axion instruments will generically have exactly vanishing sensitivity to a scalar dark-matter signal, even though B^φ_ϕ( r') ≠ 0. As discussed in the main body, the key difference between these scalar and pseudoscalar axion results can be understood on basic symmetry grounds. In particular, consistent with Tab. <ref>, the azimuthal component of the magnetic field will flip sign under parity. For the axion interaction, this sign flip is produced by the pseudoscalar field itself, whereas for the scalar, the flip is generated by the z' in Eq. (<ref>), as P z' = - z'. (In the axion case, the equivalent dimensions were made up by ρ', which is invariant under parity.) Nevertheless, in spite of these differences, there are pickup loops that can be designed which would have sensitivity to both the scalar and axion. This is not the case for a background toroidal field. Similar arguments can be used to understand the remaining results in Tab. <ref>. Solenoidal magnet with a horizontal pickup loop. Finally, we consider a configuration which is optimally sensitive to a scalar and minimally sensitive to the axion. From selection rule 2, we simply need a configuration sensitive to the σ=+1 contribution, or the h^+ component for a GW. From Tab. <ref>, one possibility is a solenoidal magnetic field B_0 ∝ê_z, with a pickup loop that reads out the z component of the induced field, n̂^'∝ê_z. For such a configuration, we have B^φ_z( r') = - φ B_0 + 1/2 (∂_t φ) v B_0 ρ' sinθ_φcos (ϕ'-ϕ_φ), B^a_z( r') = 0. The scalar result is stated to leading order in 1/H, whereas the axion result exactly vanishes. Observe that in Eq. (<ref>) the first term for the scalar is proportional to φ rather than a derivative of the field, and arises from the unique current term for the scalar already visible in Eq. (<ref>). As discussed, similar terms arise for the GW. In the scalar case, these contributions are particularly simple. For instance, as -φ j∝ j, this contribution must be directly proportional to the background magnetic field, which is also produced by j, or B^φ∝ B_0. This implies that in Tab. <ref> such a contribution can only be measured when n̂' ∝ B_0, which only occurs for two of the six cases in the table. This leaves one case where a scalar contribution is expected by selection rule 2, but where there is no contribution from the surface current: a solenoidal magnet with n̂' ∝ê_ρ for the pickup loop. In this case, the magnetic field is generated purely from (∇φ) × B_0, and therefore must be proportional to sin(ϕ'-ϕ_φ) or cos(ϕ'-ϕ_φ). When the pickup loop has azimuthal symmetry, however, such terms vanish, explaining why we write Φ_φ=0 in the table, although it could be recovered by using a pickup loop that violates the rotational symmetry. § PARITY PROPERTIES OF EXTERNAL MAGNETIC FIELDS In this appendix, we will demonstrate that it is possible to decompose static cylindrically symmetric magnetic fields – those invariant under azimuthal and z-reflection symmetries – as a sum of a toroidal and a solenoidal piece, which as we will show take the form B^∝ê_ϕ and B^∝ê_z, respectively. In the idealised treatment of static laboratory magnetic fields, we usually imagine them dropping sharply to zero beyond the boundary of a well-defined region. According to the discussion in App. <ref>, a current flows on the surface of such a boundary, which sources the magnetic field inside. 
Azimuthal symmetry dictates that the current takes the form j∝δ(ρ-ρ_S(z)). Together with ∇· j = 0, this can be used to show that the current can always be cast as j = j^tor + j^sol, with j^tor = (C^tor/ρ) δ(ρ-ρ_S(z)) ( [∂_zρ_S(z)]ê_ρ + ê_z ), j^sol = C^sol(z) δ(ρ-ρ_S(z)) ê_ϕ, where C^sol(z) is a function of z while C^tor is a constant. In addition, z-reflection symmetry implies that C^sol(-z) = C^sol(z), ρ_S(-z) = ρ_S(z). Observe that the two terms in Eq. (<ref>) are conserved separately. As a result, we can similarly decompose the magnetic field sourced by j as B = B^tor + B^sol, with ∇× B^tor = j^tor and ∇× B^sol = j^sol. We will further discuss each case below, and justify that B^tor ∝ê_ϕ and B^sol ∝ê_z. From this we conclude that toroidal and solenoidal magnetic fields exhaust all the realistic configurations invariant under cylindrical and z-reflection symmetries, and further this justifies the symmetry transformations encoded in B (P_α r) = η_α P_α B ( r) and Tab. <ref>. Toroidal fields. As we now show, the magnetic field arising from the toroidal current takes a simple form, regardless of the function ρ_S(z). Due to the cylindrical symmetry, the magnetic field sourced by j^tor points in the azimuthal direction, B^tor = B(ρ,z) ê_ϕ. For a fixed z, the circulation of the magnetic field along a circular path is 2πρ B(ρ,z). According to Ampère's law, this must equal ∫ dρ∫ dϕ ρ j^tor_z = 2π C^tor. Hence B^tor(ρ,z) = (C^tor/ρ) Θ(ρ-ρ_S(z)) ê_ϕ. The 1/ρ dependence of the magnetic field can alternatively be derived by considering an arbitrary field of the form B = B(ρ,z) ê_ϕ, and then demanding that ∇· B = ∇× B = 0, which holds sufficiently far from the surface. For the realistic toroidal fields produced in the lab, the current flows up and then down again, so the total field is the sum of two pieces. Furthermore, we can usually neglect the z-dependence of ρ_S(z). In that case, the general result in Eq. (<ref>) reduces to Eq. (<ref>). Solenoidal fields. The solenoidal configuration does not lead to an expression as simple as Eq. (<ref>). Nonetheless, Eq. (<ref>) shows that C^sol(z) and ρ_S(z) are even, and hence ∂_z C^sol(0) = ∂_zρ_S(0)=0. Since axion experiments often measure the induced magnetic flux away from the z-boundaries of the external magnetic field, to a reasonable approximation we can take ρ_S and C^sol as constants. In that case j^sol reduces to Eq. (<ref>) and the magnetic field takes the form given in Eq. (<ref>). In general, however, this need not hold, and for instance B^sol can develop a contribution ∝ê_ρ. § RESPONSE MATRIX In this appendix, we derive the response matrix introduced in Eq. (<ref>), and provide an explicit example. Let us first note that the effective current is a linear functional of the GW, which implies that there exists a tensor J^i_mn( r, k) such that j_eff^i = e^-ıω t J^i_mn( r, k) ∑_A h^A e^A_mn(k̂). Following Eq. (<ref>), this gives Eq. (<ref>), Φ_h = e^-ıω t D^mn( k) ∑_A h^A e^A_mn(k̂), with D^mn( k) = ∫_ℓ d r'_i ∫_V_B d^3 r/(4π) J^i_mn/| r - r'|. As an explicit example, for the solenoidal magnetic field in Eq. 
(<ref>), we find J^ρ = 1/12 e^- ıω tω^2 B_0 Θ(R - ρ) [ ρ s_2ϕ - ρ c_2 ϕ 4z s_ϕ; - ρ c_2ϕ - ρ s_2ϕ - 4z c_ϕ; 4z s_ϕ - 4z c_ϕ 0 ], J^ϕ = 1/12 e^- ıω tω^2 B_0 Θ(R - ρ) [ ρ (3 + c_2ϕ) ρ s_2 ϕ 4z c_ϕ; ρ s_2 ϕ ρ( 3 - c_2 ϕ ) 4z s_ϕ; 4z c_ϕ 4z s_ϕ - 4 ρ ] -1/12 e^- ıω tω^2 B_0 δ(R - ρ) [ z^2 + 2 ρ^2 + ρ^2 c_2ϕ ρ^2 s_2ϕ 4z ρ c_ϕ; ρ^2 s_2ϕ z^2 + 2 ρ^2 - ρ^2 c_2ϕ 4z ρ s_ϕ; 4z ρ c_ϕ 4z ρ s_ϕ 5z^2 - ρ^2 ], J^z = 1/4 e^- ıω tω^2 B_0 Θ(R- ρ) [ 0 0 - ρ s_ϕ; 0 0 ρ c_ϕ; - ρ s_ϕ ρ c_ϕ 0 ] - 1/12 e^- ıω tω^2 B_0 δ(R- ρ) [ z ρ s_2ϕ - z ρ c_2 ϕ - ρ^2 s_ϕ; - z ρ c_2 ϕ - 2 z ρ c_ϕ s_ϕ ρ^2 c_ϕ; - ρ^2 s_ϕ ρ^2 c_ϕ 0 ], at O[(ω L)^2]. From here, considering the specific loop geometry described in Fig. <ref> with r_2=r and r_1=0, the response matrix is D^mn( k) =e^- ıω t/288ω^2 B_0 r l ( 13 r^2 - 30 R^2 ) [ 0 0 - s_ϕ_ℓ; 0 0 c_ϕ_ℓ; - s_ϕ_ℓ c_ϕ_ℓ 0 ]. Note that as shown in Eq. (<ref>), for a cylindrical symmetric setup, only the diagonal part of the response matrix matters, and therefore the zeros in the response function are consistent with the fact that we do not have any contribution at O[(ω L)^2] for the vertical loop. § EXPLICIT EXPRESSIONS FOR THE EFFECTIVE CURRENT In this appendix, we provide the analytic expressions for the current induced by a GW to O[(ω L)^3] for both a solenoidal and toroidal external magnetic field as in Eq. (<ref>) and Eq. (<ref>). Together with Eq. (<ref>) or Eq. (<ref>), these are used to calculate the induced magnetic field and flux which can then be measured by a pickup loop. To slightly simplify the expressions that follow, throughout this appendix we have taken ϕ_h=0, but this can be restored immediately by sending ϕ→ϕ-ϕ_h. Solenoidal magnet. The three components of the current j=(j_ρ, j_ϕ, j_z) are given as follows, where in each case organise the results by polarisation and power in ω, j_ρ e^ıω t = B_0 Θ(R - ρ) × [ 1/24√(2) h^+ ω^2 {ρ (3 + c_2 θ_h ) s_2ϕ - 8 z s_2 θ_h s_ϕ} -1/6√(2) h^×ω^2 {ρ c_θ_h c_2ϕ - 4 z c_ϕ s_θ_h} +ı/192√(2) h^+ ω^3 {[ (45 ρ^2 + 5 ρ^2 c_2ϕ -58z^2) s_θ_h + (ρ^2 + ρ^2 c_2ϕ-18z^2) s_3 θ_h] s_ϕ + 2 zρ (19 c_θ_h + 5 c_3 θ_h ) c_2ϕ} - ı/48√(2) h^×ω^3 { 4 zρ (1 + 2 c_2 θ_h c_2ϕ - 6 s_θ_h^2 ) + (5 ρ^2 + ρ^2 c_2ϕ -14z^2) c_ϕ s_2 θ_h}], j_ϕ e^ıω t = B_0 Θ(R-ρ) × [ 1/24√(2) h^+ ω^2 {ρ ( 3 + c_2 θ_h ) c_2ϕ + 14 ρ s_θ_h^2 - 8 z c_ϕ s_2 θ_h} + 1/3√(2) h^×ω^2 (ρ c_θ_h c_ϕ - 2 z s_θ_h ) s_ϕ - ı/192√(2) h^+ ω^3 { -10 z ρ c_3 θ_h c_2ϕ - 2 z ρ c_θ_h ( -26 + 26 c_2 θ_h + 19 c_2ϕ ) . . + c_ϕ[ ( 58z^2 + 51 ρ^2 - 5 ρ^2 c_2ϕ ) s_θ_h + ( 18z^2 - 17 ρ^2 - ρ^2 c_2ϕ ) s_3 θ_h] } + ı/48√(2) h^×ω^3 { 4 zρ (1 + 2 c_2 θ_h ) s_2ϕ + (-14^2 + ρ^2 + ρ^2 c_2ϕ) s_2 θ_h s_ϕ}] + B_0 δ(R-ρ) × [ 1/24√(2) h^+ ω^2 { - 8z^2 s_θ_h^2 + ρ( - ρ c_2θ_h (3 + c_2ϕ) + 8 z c_ϕ s_2 θ_h + 6 ρ s_ϕ^2 ) } - 1/3√(2) h^×ω^2 ρ ( ρ c_θ_h c_ϕ - 2 z s_θ_h ) s_ϕ - ı/192√(2) h^+ ω^3 { z c_θ_h( ρ^2 (-22 + 22 c_2 θ_h + c_2ϕ ) + 24z^2 s_θ_h^2 ) . + ρ( c_ϕ ( ( 6z^2 - 15ρ^2 + 5ρ^2 c_2ϕ ) s_θ_h + (- 18z^2 + 5ρ^2 + ρ^2 c_2ϕ ) s_3θ_h ) . . . + 7 z ρ c_3 θ_h c_2ϕ) } + ı/48√(2) h^×ω^3 { 2 zρ (-1 + 2 c_2 θ_h ) s_2ϕ + (-6 z^2 + ρ^2 + ρ^2 c_2ϕ) s_2 θ_h s_ϕ}], and j_z e^ıω t = B_0 Θ(R - ρ) × [ 1/4√(2) h^+ ω^2 ρ s_2 θ_h s_ϕ - 1/2√(2) h^×ω^2 ρ c_ϕ s_θ_h . 
+ ı/3√(2) h^+ ω^3 ρ c_θ_h s_θ_h (z c_θ_h + ρ c_ϕ s_θ_h ) s_ϕ - ı/3√(2) h^×ω^3 ρ c_ϕ s_θ_h (z c_θ_h + ρ c_ϕ s_θ_h ) ] + ρ B_0 δ(R- ρ) × [ -1/24√(2) h^+ ω^2 { 2ρ s_2θ_h s_ϕ + z(3+c_2θ_h) s_2ϕ} + 1/6√(2) h^×ω^2 ( z c_θ_h c_2ϕ + ρ c_ϕ s_θ_h ) - ı/48√(2) h^+ ω^3 { z cosθ_h + ρ c_ϕ s_θ_h (2 ρ s_2θ_h s_ϕ + z (3+c_2θ_h) s_2ϕ} + ı/12√(2) h^×ω^3 (z c_θ_h + ρ c_ϕ s_θ_h ) (z c_θ_h c_2ϕ + ρ c_ϕ s_θ_h ) ]. In these expressions we employ the coordinate system introduced in Sec. <ref>. Observe that for the ϕ and z components, the current can be divided into two parts: one proportional to Θ(R-ρ) and the other proportional to δ(R-ρ). The latter can be understood as a contribution coming from the change in the magnetisation at the edge of magnetic field, see Eq. (<ref>). Toroidal magnet. Equivalent results can be derived for a toroidal magnet. Again, the results are divided into two parts: one proportional to Θ(R + a - ρ) - Θ(R-ρ) and the other to δ(R + a - ρ) - δ(R-ρ). j_ρ e^ıω t = B_max R/ρ[ Θ(R+a - ρ) - Θ(R-ρ) ] × [ 1/12√(2) h^+ ω^2 { 2z (3 + c_2θ_h) c_2ϕ + 6 z s_θ_h^2 - ρ c_ϕ s_2θ_h} + 1/6√(2) h^×ω^2 ( 8 z c_θ_hc_ϕ - ρ s_θ_h) s_ϕ + ı/192√(2) h^+ ω^3 { z ρ( - 9 c_ϕ (-11 s_θ_h + s_3θ_h ) + 5 c_3ϕ (5 s_θ_h + s_3 θ_h) ) + 2 c_θ_h( (30z^2 - 21 ρ^2 + (10z^2 + ρ^2) c_2θ_h ) c_2ϕ + (24 z^2 - 2ρ^2) s_θ_h^2 ) } + ı/48√(2)h^×ω^3 { 2 ( 5z^2 - 4 ρ^2 + (5z^2 - ρ^2) c_2θ_h) s_2ϕ + z ρ s_2θ_h (9 s_ϕ + 5 s_3ϕ ) }], j_ϕ e^ıω t = B_max R/ρ[ Θ(R+a - ρ) - Θ(R-ρ) ] × [ -1/6√(2) h^+ ω^2 { z (3 + c_2θ_h) c_ϕ + ρ s_2θ_h} s_ϕ + 1/3√(2) h^×ω^2 ( z c_θ_h c_2ϕ + ρ c_ϕ s_θ_h) - ı/64√(2) h^+ ω^3 { 4c_θ_h( 3z^2 + ρ^2 + (z^2 - ρ^2 ) c_2θ_h) s_2ϕ + z ρ s_3θ_h ( 5s_ϕ + s_3ϕ ) + zρ s_θ_h ( 9 s_ϕ + 5 s_3ϕ ) } + ı/16√(2)h^×ω^3 { ( 2 z^2 + ρ^2 ) c_2ϕ - c_2θ_h( ρ^2 + (- 2z^2 + ρ^2) c_2ϕ) + ρ (ρ + 4z c_ϕ^3 s_2θ_h) }] + B_max R [ δ(R+a - ρ) - δ(R-ρ) ] × [ -1/12√(2) h^+ ω^2 { z (3 + c_2θ_h) c_ϕ + ρ s_2θ_h} s_ϕ - 1/6√(2) h^×ω^2 ( z c_θ_h c_2ϕ + ρ c_ϕ s_θ_h) - ı/48√(2) h^+ ω^3 ( z c_θ_h + ρ c_ϕ s_θ_h) ( 2 ρ s_2θ_h s_ϕ + z (3 + c_2θ_h) s_2ϕ) - ı/12√(2)h^×ω^3 ( z c_θ_h + ρ c_ϕ s_θ_h) ( z c_θ_h c_2ϕ + ρ c_ϕ s_θ_h) ], and j_z e^ıω t = B_max R/ρ^2[ Θ(R+a - ρ) - Θ(R-ρ) ] × [ 1/12√(2) h^+ ω^2 { (z^2 - 2 ρ^2) (3 + c_2θ_h) c_2ϕ + 3z ρ c_ϕ s_2θ_h} + 1/6√(2) h^×ω^2 { 4(z^2 - 2 ρ^2) c_θ_h c_ϕ + 3z ρ s_θ_h} s_ϕ - ı/192√(2) h^+ ω^3 { 2z(2z^2 - 7ρ^2) (7 c_θ_h + c_3θ_h) c_2ϕ + 8 z ρ^2 c_θ_h s_θ_h^2 + ρ c_ϕ[ (50 z^2 - 9 ρ^2 + 5 (4z^2 - 9 ρ^2) c_2ϕ ) s_θ_h + ( 10 z^2 + 3ρ^2 + ( 4 z^2 - 9 ρ^2 ) c_2ϕ ) s_3θ_h] } + ı/48√(2)h^×ω^3 {ρ(14z^2 - 9 ρ^2 + (4z^2 - 9 ρ^2)c_2ϕ) s_2θ_h s_ϕ + 4z(2z^2 - 7ρ^2) c_θ_h^2 s_2ϕ}] + B_max R/ρ[ δ(R+a - ρ) - δ(R-ρ) ] × [ 1/24√(2) h^+ ω^2 { (z^2 + 2ρ^2) (3+c_2θ_h) c_2ϕ + 2z s_θ_h (- 4ρ c_θ_h c_ϕ + 3z s_θ_h) } + 1/3√(2) h^×ω^2 [ (z^2 + 2ρ^2) c_θ_h c_ϕ - z ρ s_θ_h] s_ϕ - ı/192√(2) h^+ ω^3 {ρ c_ϕ[ (8z^2 + 3 ρ^2 + 5(2z^2 + 3ρ^2)c_2ϕ ) s_θ_h. . + ( - 8 z^2 - ρ^2 + (2z^2 + 3 ρ^2) c_2ϕ ) s_3θ_h] + 2 z c_θ_h( (6z^2 + 7 ρ^2 + (2z^2 + 5ρ^2)c_2θ_h ) c_2ϕ + (8z^2 - 2 ρ^2) s_θ_h^2 ) } - ı/48√(2) h^×ω^3 {ρ(3 ρ^2 + (2z^2 + 3ρ^2) c_2ϕ ) s_2θ_h s_ϕ + 2z (z^2 + ρ^2 + (z^2 + 2ρ^2)c_2θ_h)s_2ϕ}]. § RECASTING DARK MATTER SENSITIVITY TO GW STRAIN SENSITIVITY Using the techniques described so far, the magnetic flux induced by a GW passing through various lumped-element detectors can be computed. To determine the sensitivity to these signals, that information then needs to be combined with various other properties of the signal, such as its duration and frequency profile, as well as the backgrounds characteristic of the individual detector. 
In this appendix, we present an alternative way to estimate the parametric sensitivity of instruments to a GW signal, which involves bootstrapping the known sensitivity to axion dark matter. The starting point is a calculation of the magnetic flux for axion dark matter and the GW. However, we cannot simply equate these fluxes, as there are two other properties of the signal that will determine their detectability: the signal duration, and the signal coherence. Roughly, the more coherent and longer a signal, the more straightforward it is to detect. Our approach will be to correct the detectability of the two fluxes with a coherence ratio, R_c, determined such that Φ_h = R_c Φ_a. As Φ_h ∝ h (and Φ_a ∝ g_aγγ), this then determines the GW strain sensitivity. In most cases, the highly coherent nature of dark matter will lead to R_c > 1, and therefore suppress our sensitivity, but as we outline, there are cases where the coherence ratio can be less than unity. We emphasise that our approach should be viewed as a heuristic, a shortcut to exploit known axion sensitivities to determine the parametric sensitivity to a GW. For an individual experiment, the correct approach is always to determine the full sensitivity of the instrument. In what follows, we will first detail the relevant time scales that need to be combined to compute R_c, and following this we will explain how that computation is performed. The results can depend on the scan strategy adopted by the instrument, and in order to account for this we will then introduce a simple model for the DMRadio scan strategy. We will then outline how two specific examples, black hole superradiance and PBH mergers, can be described using the formalism we have introduced. §.§ The relevant time scales for GW and dark matter signals With the motivations and caveats detailed, let us expand on the physics of the problem. We will consider a GW signal that lasts a time T_h and has a finite bandwidth, and therefore coherence time τ_h, although we will have τ_h ≤ T_h. The exact values of these parameters will depend on specific models – we will consider the cases of black hole superradiance and PBH mergers later in this appendix – but for now we will keep the discussion general. For instance, while we keep T_h arbitrary, by making it longer than the experimental run time T_exp we can account for a persistent signal. The GW signal will then be compared against axion DM, which is a persistent signal that is highly coherent, with a coherence time τ_a = 2 π Q_a/m_a ≃ 4 s (1 neV/m_a), specified in terms of an effective quality factor for the signal, which for DM is given by Q_a = 10^6. If the mean frequency of the GW is ω_h, we can analogously define a quality factor as 2 π Q_h = τ_h ω_h ≤ T_h ω_h. Thus far, we have introduced the scales associated with the two signals we wish to compare, {τ_h, T_h, τ_a}; however, we must also account for the relevant time scales associated with the experiment in which we search for the signal. In this work, we focus solely on instruments that pursue a resonant scan strategy: the instrument is tuned to resonantly enhance an angular frequency ω_m, interrogating this frequency for a time T_m, before moving on to the next frequency. The total run time for the instrument is then T_exp = ∑_m T_m, where the choice of {ω_m, T_m} defines an experimental scan strategy. The final relevant quantity for a resonant instrument is the quality of the resonant response at the frequency ω_m, which defines a timescale τ_r = 2π Q_r/ω_m. 
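As a quick numerical illustration of these definitions, the sketch below evaluates τ_a and τ_r for a 1 neV axion mass; the particular resonator quality factor used is an assumption chosen purely for illustration.

```python
# Minimal sketch: the coherence and ring-up time scales defined above, evaluated
# for an illustrative 1 neV axion mass and an assumed resonator quality factor.
import numpy as np

hbar_eV_s = 6.582e-16            # eV s
m_a_eV = 1e-9                    # 1 neV (illustrative)
omega_a = m_a_eV / hbar_eV_s     # rad/s

Q_a = 1e6                        # dark-matter quality factor used in the text
Q_r = 1e7                        # resonator quality factor (illustrative assumption)

tau_a = 2 * np.pi * Q_a / omega_a   # dark-matter coherence time
tau_r = 2 * np.pi * Q_r / omega_a   # resonator time scale at omega_m = omega_a

print(f"omega_a = {omega_a:.3e} rad/s")
print(f"tau_a   = {tau_a:.2f} s   # text quotes ~4 s at 1 neV")
print(f"tau_r   = {tau_r:.2f} s")
```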
Note that τ_r and Q_r can vary as we change the resonant mass considered; cf. the resonant response of a microwave cavity to that in nuclear magnetic resonance (NMR). §.§ Exponential statistics and the coherence ratio We now need to combine the previously discussed timescales into a single coherence ratio. As mentioned at the outset, we cannot simply match the fluxes because certain signals are easier to detect than others. The formal way of capturing this is with statistical significance. As demonstrated in Ref. <cit.> (see also Ref. <cit.>), for axion dark matter searches, the signal and background are expected to be exponentially distributed, and based on similar arguments, we assume the GW signals discussed here are also. An estimate for the signal sensitivity for an exponentially distributed quantity is given by the signal to noise ratio, SNR∼ P_s/P_b, where P_s and P_b are the power associated with the signal (with s = a,h) and background. If we were instead performing a counting experiment, the analogous expression would be S/√(B) – a consequence of the Poisson likelihood – where S and B are the number of signal and background counts. For N independent measurements, the significance in both cases grows as √(N). By matching the significance for the GW and dark matter signals, we will quantify the notion of detectability, and then as P_s ∝⟨Φ_s^2 ⟩, we will determine R_c. The goal then is to compute P_s and P_b for each signal, and where relevant, the number of independent bins. As we do so, we need only keep track of factors specific to each signal, as any common factors will cancel when we compute the coherence ratio. Consider first the power associated with a general signal, with flux Φ_s, coherence time τ_s and duration T_s > τ_s. For a given resonant bandwidth, the longest time we can effectively interrogate the signal is given by T_m,s = min[T_m, T_s]. In principle, this need not be longer than τ_r, in which case we would not fully ring up the resonant cavity. Given this and that such resonant systems can be effectively reduced to the study of the simple harmonic oscillator, the recent analysis for NMR based instruments in Ref. <cit.> can be deployed. Using this, the differential signal power at the resonant frequency, ω_0 depends on the hierarchy of scales as follows,[Note that in the event T_m,s = T_s > τ_s, the first and third hierarchies here are unphysical, and only the remaining two need be considered, with the relevant comparison being between T_s and τ_r. This holds for all of the expressions that follow.] dP_s/dω(ω_0) ∝⟨Φ_s^2 ⟩{[ T_m,s^3 T_m,s≪τ_s, τ_r,; T_m,s^2 τ_s τ_s ≪ T_m,s≪τ_r,; T_m,sτ_r^2 τ_r ≪ T_m,s≪τ_s,; τ_s τ_r^2 τ_s,τ_r ≪ T_m,s. ]. What enters the significance is the integrated power. For the first three cases in Eq. (<ref>), where T_m,s, the signal will not be resolved in a single bin in the analysis; the frequency resolution in the discrete Fourier transform is set by Δω = 2 π/T_m,s, which must be narrower than the product of the signal and the instrument transfer function before the signal is resolved (for further details, see Ref. <cit.>). Accordingly, for the first three cases the width is simply 2 π/T_m,s. In the final case the signal becomes resolved, and therefore the integration range is set by the minimum of the signal and resonator widths, or min[ω_m/Q_s, ω_m/Q_r] = 2π min[1/τ_s, 1/τ_r]. 
Dropping the common factor of 2π, the total signal power is then P_s ∝ ⟨Φ_s^2⟩ × { T_m,s^2 (T_m,s ≪ τ_s, τ_r); T_m,s τ_s (τ_s ≪ T_m,s ≪ τ_r); τ_r^2 (τ_r ≪ T_m,s ≪ τ_s); τ_r min[τ_s, τ_r] (τ_s, τ_r ≪ T_m,s) }. Note that min[τ_s, τ_r] ∝ min[Q_s, Q_r], so in the resolved scenario the signal power can only be rung up to the minimum of the instrumental and signal Q-factors. The consideration of the background is more straightforward. If we assume it is flat in frequency, then the total background power is just controlled by the width over which the signal is distributed, which we have already discussed. Lastly, for the case where the signal is resolved into multiple frequency bins, we will receive a √(N) enhancement to the SNR as discussed above. In particular, the number of bins is given by T_m,s/max[τ_s, τ_r], so we arrive at SNR/⟨Φ_s^2⟩ ∝ 𝒯_s ≡ { T_m,s^3 (T_m,s ≪ τ_s, τ_r); T_m,s^2 τ_s (τ_s ≪ T_m,s ≪ τ_r); T_m,s τ_r^2 (τ_r ≪ T_m,s ≪ τ_s); τ_s τ_r^2 √(T_m,s/max[τ_s, τ_r]) (τ_s, τ_r ≪ T_m,s) }. For axion DM, Φ_s ∝ g_aγγ and T_m,s = T_m, so that from this result for the four regimes in Eq. (<ref>), our sensitivity would scale as g_aγγ ∝ {T_m^-3/2, T_m^-1, T_m^-1/2, T_m^-1/4} for the four cases considered, as claimed in Ref. <cit.>. For a given resonant bandwidth, Eq. (<ref>) allows us to determine the relative SNR for an axion dark matter and GW signal, and therefore the appropriate coherence ratio, as R_c = √(𝒯_a/𝒯_h). There is, however, one final factor that must be included. As the instrument executes its scan strategy in search of dark matter – scanning each ω_m for a time T_m – if the GW signal is sufficiently long and incoherent, then the signal will persist across multiple resonant bandwidths, and if there are M_h of these, the GW SNR receives a √(M_h) enhancement. Determining M_h requires knowledge of the exact scan strategy executed by the instrument, and a number of considerations enter into the determination of the optimal scan strategy, as discussed in, for example, Refs. <cit.>. Instead, we adopt a simplified approach. We assume the scan strategy is determined by a choice not to scan any putative dark matter mass more than once, so that ω_m+1 - ω_m = max[ω_m/Q_a, ω_m/Q_r]. For a persistent GW signal, the width is approximately ω_m/Q_h, so that M_h ∼ max[1, min[Q_a, Q_r]/Q_h]. For a transient signal, M_h can be reduced from this value, as the signal may not persist as the various bins are scanned, and that reduction must be accounted for. As a specific example, in the limit where τ_h = T_h, we will always have M_h=1. In summary then, we have R_c = (1/M_h^1/4) √(𝒯_a/𝒯_h), with M_h the number of resonant bandwidths the GW signal appears in, as discussed in the previous paragraph, and 𝒯_a and 𝒯_h determined by Eq. (<ref>). To build some intuition for this result, let us determine R_c explicitly in several cases. The complexity of the expressions above largely originates from the many scales and the possible hierarchies between them. As we will see, once several are fixed, the results simplify. After this, we will show how the formalism can be deployed for the specific examples of superradiance and PBH mergers. Persistent signal and a long interrogation time. For axion DM, the signal duration is effectively infinite (T_a = ∞), and often we take T_m ≫τ_a, τ_r. If we assume a similar hierarchy holds for the GW (T_h ≥ T_exp and T_m ≫τ_h, τ_r), then both signals are described by the final line of Eq. 
(<ref>), and accounting for the additional factor of M_h, we have R_c = √(Q_a/Q_h) ( max[Q_h, Q_r]/max[Q_a, Q_r] )^1/4 ( 1/max[1, min[Q_a, Q_r]/Q_h] )^1/4 = { (Q_a/Q_h)^1/2 (Q_a < Q_h < Q_r); (Q_a^2/Q_r Q_h)^1/4 (Q_a < Q_r < Q_h); (Q_a/Q_h)^1/4 (otherwise) }. Given the highly coherent nature of the dark matter signal, in most cases we will be in the regime where Q_h < Q_a, and then we find R_c = (Q_a/Q_h)^1/4. This is exactly the scaling argued for in Ref. <cit.>, although we can now see that this result only holds for the particular set of assumptions we invoked in this paragraph.[In Ref. <cit.>, it was argued that a persistent signal of relativistic axions has a coherence ratio analogous to our R_c = (Q_a/Q_h)^1/4, which was claimed to hold for both resonant and broadband strategies. Even though that result essentially matched the more accurate experimental sensitivity determined in Ref. <cit.>, we emphasise that for a resonant experiment a deviation from the (Q_a/Q_h)^1/4 scaling should be expected whenever T_m,s is not the longest timescale, or if dark matter is the less coherent signal.] Transient signal of equal duration and coherence time. Next we consider a transient GW signal, but taking τ_h = T_h for simplicity. From Eq. (<ref>) we have (using T_m,h = min[T_m, T_h]) 𝒯_h = { T_m,h^3 (T_m,h ≪ τ_r); T_m,h τ_r^2 (τ_r ≪ T_m,h) } = T_m,h min[T_m,h^2, τ_r^2]. We then need to compare this to DM. Let us again assume T_m is such that dark matter is well resolved (the more general case is straightforward), so that we can immediately read off 𝒯_a from the final line of Eq. (<ref>), and obtain R_c = √(τ_a/T_m,h) (τ_r/min[T_m,h, τ_r]) ( T_m,a/max[τ_a, τ_r] )^1/4, as in this case we will always have M_h=1. For T_h < T_m, this result is then exactly Eq. (<ref>) from the main text. §.§ A model for the DMRadio scan strategy Equation (<ref>) demonstrates explicitly that the coherence ratio can depend on the scan strategy of a resonant detector, which again is defined by the choice of {ω_m, T_m}. Therefore an explicit strategy is required if we are to compute the gravitational wave sensitivity in this limit. Here we detail an explicit, albeit simplified, strategy for the most sensitive proposed instrument we consider in our frequency range: DMRadio-GUT, as described in Ref. <cit.>. Considerable effort has been put into determining the optimal scan strategy for resonant instruments, for instance, see Refs. <cit.>. Here, however, we adopt a simpler approach. Working with the parameters forecast for DMRadio-GUT, the instrument will have Q_r = 2 × 10^7 > Q_a, so that the instrumental bandwidth is narrower than that expected for the axion. Therefore, a simple strategy that ensures no axion mass is overlooked is to adjust the resonant frequency by the larger axion bandwidth, i.e. ω_m+1 = ω_m (1+1/Q_a). For the frequency range DMRadio-GUT will cover, m_a ∈ [0.4, 120] neV, this then defines the set of ω_m. Next we specify the integration time spent at each frequency, or T_m. To begin with, the nominal scan rate for the instrument is <cit.> dν/dt = 41  kHz/ year ( g_aγγ/10^-19  GeV^-1)^4 ( ν/100  kHz). The broad target for axion dark-matter instruments is the QCD axion line, where g_aγγ varies as a function of mass. In particular, two common targets are the KSVZ <cit.> and DFSZ <cit.> axion models, where |g_aγγ| ≃ 1.62 × 10^-19  GeV^-1 ( m_a/100  kHz) × { 1 (KSVZ); 0.389 (DFSZ) }. Over the full mass range, Eq. (<ref>) implies that it would take DMRadio-GUT roughly 37 days to reach the KSVZ prediction, and 4.4 years for the smaller DFSZ coupling. 
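As a cross-check of the quoted scan times, the sketch below integrates the nominal scan rate over the DMRadio-GUT mass range for the two benchmark couplings; the neV-to-kHz conversion and the use of numerical quadrature are our own implementation choices, not part of the original analysis.

```python
# Minimal sketch: total time to scan m_a in [0.4, 120] neV at the quoted DMRadio-GUT
# scan rate, for the KSVZ and DFSZ couplings given above (a numerical cross-check).
import numpy as np
from scipy.integrate import quad

kHz_per_neV = 241.8  # frequency nu = m_a / (2*pi*hbar) of a 1 neV axion, in kHz

def g_agg(nu_kHz, model):
    """Axion-photon coupling in GeV^-1 as a function of frequency (equation above)."""
    return 1.62e-19 * (nu_kHz / 100.0) * (0.389 if model == "DFSZ" else 1.0)

def dnu_dt(nu_kHz, model):
    """Nominal scan rate in kHz per year (equation above)."""
    return 41.0 * (g_agg(nu_kHz, model) / 1e-19) ** 4 * (nu_kHz / 100.0)

nu_lo, nu_hi = 0.4 * kHz_per_neV, 120.0 * kHz_per_neV

for model in ("KSVZ", "DFSZ"):
    T_yr, _ = quad(lambda nu: 1.0 / dnu_dt(nu, model), nu_lo, nu_hi)
    print(f"{model}: ~{T_yr:.2f} yr ({T_yr * 365.25:.0f} days)")
# -> roughly 37 days for KSVZ and ~4.4 yr for DFSZ, as stated in the text
```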
Focusing on DFSZ, which is the target for the instrument, if we combine this result with the strategy for choosing ω_m already described, we find T_m ≃ 14.3 s ( 1  neV/m_a)^4. So in this simplified strategy very little time is spent at any particular axion mass, but this can be accounted for using the details of Eq. (<ref>) being utilised. §.§ Examples for high-frequency GW signals Superradiance. One possible nearly persistent, highly coherent GW source is axion superradiance (for a comprehensive review, see Ref. <cit.> and references therein) with the expected strain h ∼ 10^-18( α/l) ϵ(1 kpc/d) ( m_ BH/2M_⊙), where α = G m_ BH m_a with m_a being the axion mass, l is the orbital angular momentum of the decaying axions from the hosting black hole, d is the distance between the observer the black hole, and ϵ is the fraction of black hole mass accumulated in the axion cloud <cit.>. For a definite illustration, we choose d = 1 kpc, α / l = 0.5, α=0.1 and ϵ = 10^-3 for our discussion. The frequency of the signal is determined by the mass of the axion as f = m_a / π. Superradiance can generate highly coherent signals, with ḟ∼ 10^-20( α / 0.1 )^17 f^2 <cit.>, which corresponds to very long coherence times τ_h which here indicate the time scale over which the GW frequency changes by a factor O(1). The signal quality factor Q_h = τ_h f is correspondingly large, Q_h ∼∫ df f / ḟ∼ f^2 / ḟ∼ 10^20 (0.1/α)^17. Note that for resonant detectors, the number of observable cycles is limited by the detector bandwidth Δ f to ∼ f/ḟ Δ f. In accordance with our definitions above, we denote also in this case the intrinsic coherence of the GW signal with Q_h, whereas the detector bandwidth is accounted for by the detector parameters introduced above. The expected strain h as well as effective signals h/ R_c for different values of experimental run time T_m and the quality factor of the detector Q_r are depicted in Fig. <ref>. Because of its highly coherence nature, in principle we can have R_c ≪ 1 especially for high frequency regime. PBH mergers. As another benchmark signal, we consider PBH binary mergers. For simplicity, we will take both black holes to have the same mass, m_ PBH. In this case, we can determine the various time scales of the GW signal as a function of m_ PBH and the GW frequency f. The rotation frequency of the PBH binary is initially given by Kepler's law with the distance between the two BHs R and the total mass 2m_ PBH. Here, we take the rotation frequency of the binary as a free parameter determined by this initial condition. Due to the emission of GWs, the binary loses energy which leads to a reduction in R and a corresponding increase in frequency, ḟ = 48 · 2^2/3/5π^8/3( G m_ PBH/c^3)^5/3 f^11/3. The radius continues to decrease until the innermost circular orbit (ISCO) is reached, at which point the radius and rotation frequency are given by r_ ISCO = 12 G m_ PBH/c^2, f_ ISCO = c^3/24√(6)π G m_ PBH≃ 1.1 kHz( M_⊙/m_ PBH). This sets the maximum frequency of the GW signal as ∼ 2 f_ ISCO <cit.>. This increase in the frequency of the merger signal severely limits the coherence time τ_h of the signal. As above, the corresponding quality factor can be obtained as Q_h ∼∫ df f/ḟ∼ f^2/ḟ, and since the integral is dominated by its lower boundary, we have T_h ≃τ_h. In addition to setting the coherence time, Q_h is also the number of cycles the orbit will undertake until merger, explaining why the coherence decreases as the ISCO is approached. 
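For orientation, the sketch below evaluates the ISCO frequency and the chirp-limited quality factor Q_h ∼ f²/ḟ implied by the expressions above; the benchmark PBH mass and GW frequency are illustrative choices only.

```python
# Minimal sketch: ISCO frequency and chirp-limited Q_h for an equal-mass PBH binary,
# using the expressions above. The benchmark mass and frequency are illustrative.
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def f_isco(m_pbh):
    """ISCO rotation frequency; the text quotes ~1.1 kHz (M_sun / m_PBH)."""
    return c**3 / (24 * np.sqrt(6) * np.pi * G * m_pbh)

def fdot(f, m_pbh):
    """GW frequency drift for an equal-mass binary (equation above)."""
    tm = G * m_pbh / c**3
    return (48 * 2**(2 / 3) / 5) * np.pi**(8 / 3) * tm**(5 / 3) * f**(11 / 3)

m_pbh = 1e-6 * M_sun   # illustrative PBH mass
f_gw = 1e5             # illustrative GW frequency in Hz (well below 2 f_ISCO here)

print(f"f_ISCO(1 M_sun)  = {f_isco(M_sun):.0f} Hz")
print(f"f_ISCO(m_pbh)    = {f_isco(m_pbh):.3e} Hz")
print(f"fdot             = {fdot(f_gw, m_pbh):.3e} Hz/s")
print(f"Q_h ~ f^2 / fdot = {f_gw**2 / fdot(f_gw, m_pbh):.3e}")
```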
Following the same procedure described in the supplementary material of Ref. <cit.>, we can determine the expected strain h from PBH binary systems assuming one event per year. In Fig. <ref>, we plot the effective GW signal h/R_c, accounting for the suppression factor that depends on the run time and the quality factor of the instrument. In the figure, the grey region is excluded as it corresponds to f > 2 f_ ISCO. § GW DETECTION WITH AN ELECTRIC FIELD As briefly discussed in the main text, in principle one can also search for the electromagnetic response when a GW passes through an electric field. For axion dark matter, any such effect will be suppressed by dark matter's non-relativistic velocity, as the coupling arises from ∇ a rather than ∂_t a. No similar suppression occurs for the GW, although it remains true that for a given volume, the largest laboratory magnetic fields will have an enhanced energy density compared to the largest electric fields. Nevertheless, for completeness we here demonstrate how the symmetry arguments apply in the case of an experiment with an electric field, providing parametric estimates for a single configuration. The example we consider is an instrument with a solenoidal electric field, E = E_0 ê_z. We will not consider the exact details of the experimental electric field (such as its form at the boundary). We will take all length scales in the problem to be L, and simply study the angular dependence and ω scaling of the results. For such a configuration, the leading O[(ω L)^2] contribution is given as Φ_h^(2) ∼ e^-ıω t ω^2 E_0 L^4 sin θ_h [ h^+ cos(ϕ_h - ϕ_ℓ) - h^× cos θ_h sin(ϕ_h - ϕ_ℓ) ]. If we impose azimuthal symmetry on the pickup loop configuration (either as in BASE or DMRadio-m^3), the O[(ω L)^2] order vanishes as in the magnetic field case, and the O[(ω L)^3] order has only an h^+ contribution, Φ_h^(3) ∼ e^-ıω t h^+ ω^3 E_0 L^5 sin^2 θ_h. Compared to the equivalent magnetic field result studied in Sec. <ref>, the form is similar except for the appearance of h^+ rather than h^×. More generally, for an external electric field, it can be shown that the same arguments and selection rules given in Sec. <ref> apply with the exchange h^× ↔ h^+, as expected given that the electric field is a vector while the magnetic field is a pseudovector. Hence, changing from a magnetic to an electric field will leave the leading power conclusions in Tab. <ref> unchanged. 
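To put the field-strength comparison above in numbers, the following sketch evaluates the stored energy density of representative laboratory magnetic and electric fields; the 10 T and 10 MV/m benchmark values are illustrative assumptions, not parameters of any instrument discussed here.

```python
# Minimal sketch: energy density of strong laboratory magnetic vs electric fields.
# The benchmark field strengths are illustrative assumptions.
import numpy as np

mu0 = 4 * np.pi * 1e-7    # vacuum permeability, T m / A
eps0 = 8.854e-12          # vacuum permittivity, F / m
c = 2.998e8               # m / s

B = 10.0                  # T   (illustrative strong laboratory magnet)
E = 1.0e7                 # V/m (illustrative strong sustained laboratory E field)

u_B = B**2 / (2 * mu0)    # J / m^3
u_E = eps0 * E**2 / 2     # J / m^3

print(f"u_B = {u_B:.2e} J/m^3")
print(f"u_E = {u_E:.2e} J/m^3")
print(f"E field with the same energy density as 10 T: {c * B:.1e} V/m")
```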
http://arxiv.org/abs/2307.00053v1
20230630180005
Structure, Kinematics, and Observability of the Large Magellanic Cloud's Dynamical Friction Wake in Cold vs. Fuzzy Dark Matter
[ "Hayden R. Foote", "Gurtina Besla", "Philip Mocz", "Nicolás Garavito-Camargo", "Lachlan Lancaster", "Martin Sparre", "Emily C. Cunningham", "Mark Vogelsberger", "Facundo A. Gómez", "Chervin F. P. Laporte" ]
astro-ph.GA
[ "astro-ph.GA" ]
Corresponding author: Hayden R. Foote ([email protected])

Hayden R. Foote (ORCID 0000-0003-1183-701X): Steward Observatory, The University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA.
Gurtina Besla (ORCID 0000-0003-0715-2173): Steward Observatory, The University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA.
Philip Mocz (ORCID 0000-0001-6631-2566): Department of Astrophysical Sciences, Princeton University, Princeton, NJ, 08544, USA; Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550, USA.
Nicolás Garavito-Camargo (ORCID 0000-0001-7107-1744): Center for Computational Astrophysics, Flatiron Institute, Simons Foundation, 162 Fifth Avenue, New York, NY 10010, USA.
Lachlan Lancaster (ORCID 0000-0002-0041-4356): Department of Astrophysical Sciences, Princeton University, Princeton, NJ, 08544, USA; Department of Astronomy, Columbia University, 550 West 120th Street, New York, NY, 10027, USA; Center for Computational Astrophysics, Flatiron Institute, Simons Foundation, 162 Fifth Avenue, New York, NY 10010, USA.
Martin Sparre (ORCID 0000-0002-9735-3851): Institut für Physik und Astronomie, Universität Potsdam, Karl-Liebknecht-Str 24/25, D-14476 Golm, Germany; Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, D-14482 Potsdam, Germany.
Emily C. Cunningham (ORCID 0000-0002-6993-0826), NASA Hubble Fellow: Department of Astronomy, Columbia University, 550 West 120th Street, New York, NY, 10027, USA; Center for Computational Astrophysics, Flatiron Institute, Simons Foundation, 162 Fifth Avenue, New York, NY 10010, USA.
Mark Vogelsberger (ORCID 0000-0001-8593-7692): Department of Physics, Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
Facundo A. Gómez (ORCID 0000-0002-1947-333X): Instituto Multidisciplinario de Investigación y Postgrado, Universidad de La Serena, La Serena, Chile; Departamento de Astronomía, Universidad de La Serena, Av. Juan Cisternas 1200 Norte, La Serena, Chile.
Chervin F. P. Laporte (ORCID 0000-0003-3922-7336): Departament de Física Quàntica i Astrofísica (FQA), Universitat de Barcelona (UB), c. Martí i Franquès, 1, 08028 Barcelona, Spain; Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona (UB), c. Martí i Franquès, 1, 08028 Barcelona, Spain; Institut d’Estudis Espacials de Catalunya (IEEC), c. Gran Capità, 2-4, 08034 Barcelona, Spain.

The Large Magellanic Cloud (LMC) will induce a dynamical friction (DF) wake on infall to the Milky Way (MW). The MW's stellar halo will respond to the gravity of the LMC and the dark matter (DM) wake, forming a stellar counterpart to the DM wake. This provides a novel opportunity to constrain the properties of the DM particle. We present a suite of high-resolution, windtunnel-style simulations of the LMC’s DF wake that compare the structure, kinematics, and stellar tracer response of the DM wake in cold DM (CDM), with and without self-gravity, vs. fuzzy DM (FDM) with m_a = 10^-23 eV. We conclude that the self-gravity of the DM wake cannot be ignored. Its inclusion raises the wake's density by ∼ 10%, and holds the wake together over larger distances (∼ 50 kpc) than if self-gravity is ignored. The DM wake's mass is comparable to the LMC's infall mass, meaning the DM wake is a significant perturber to the dynamics of MW halo tracers. An FDM wake is more granular in structure and is ∼ 20% dynamically colder than a CDM wake, but with comparable density. The granularity of an FDM wake increases the stars' kinematic response at the percent level compared to CDM, providing a possible avenue of distinguishing a CDM vs. FDM wake. 
This underscores the need for kinematic measurements of stars in the stellar halo at distances of 70-100 kpc. § INTRODUCTION The Large Magellanic Cloud (LMC) is the Milky Way's (MW) largest satellite galaxy, possessing an infall mass of ∼ 1-2 × 10^11 M_⊙, roughly 10% that of the MW <cit.>. The LMC is currently on its first infall to the MW <cit.>, and is inducing significant perturbations in the MW's dark matter (DM) halo, including the collective response, MW reflex motion about the MW/LMC barycenter, and a dynamical friction (DF) wake (, hereafter ; , see also for a recent review). The perturbations to the MW halo potential induced by the LMC have important, widespread effects on the kinematics of halo tracers, including stellar streams <cit.>, globular clusters and satellite galaxies <cit.>, and the halo stars in general (e.g. ; ). The LMC's infall also affects mass measurements of the MW <cit.> and even the shape and dynamics of the MW's stellar disk <cit.>. If the MW halo's response to the LMC depends on the microphysics of the DM particle, then this scenario presents a unique opportunity to constrain the nature of DM. In particular, the LMC's DF wake offers a promising test-bed, as the strength and density structure of DF wakes depends on the physics of the medium in which they form <cit.>. However, our limited ability to disentangle the response of halo tracers due to the wake specifically vs. other perturbations induced by the LMC (e.g. the LMC's tidal field and the MW's reflex motion) presents a barrier to using the wake as a DM laboratory. used a tailored suite of high-resolution N-body simulations of the MW/LMC interaction to show that the LMC creates three major responses in the MW's DM halo, work that was later expanded upon by <cit.>. These responses are: 1) the collective response, a large-scale overdensity which leads the LMC and arises primarily due to the shift of the inner halo relative to the outer halo; 2) the global underdensity, which surrounds the LMC’s DF wake; and 3) the DF wake itself. By “painting” a stellar halo onto the DM particles using weighted sampling, also explored the response of the stellar halo to the perturbations induced by the LMC. They found that there should be an observable stellar overdensity associated with the DM wake, which was tentatively detected by <cit.>. Further, they found that the velocities of stars in the wake converge near the LMC and diverge behind it, which leads to an enhancement in the component of the stellar velocity dispersion that is orthogonal to the wake. While this approach is effective at capturing the global behavior of the MW's DM halo in response to the LMC, it is unable to separate the effect of the DM wake from that of the LMC itself and other halo perturbations. In particular, even in the absence of a DM halo, the passage of a massive perturber such as the LMC would be sufficient to form a wake in the stellar halo <cit.>. If the LMC's wake is to be used as a DM laboratory, a more detailed understanding of the role of the DM wake's self-gravity in forming the stellar wake is required. A complementary study by <cit.> used a linear response formalism to study the effect of the LMC on the MW's dark and stellar halos. An advantage of linear response theory is that it allows turning off the self-gravity of the DM, giving insight into the DM wake's role in shaping the response of the stars. 
<cit.> reported that the DM wake's self-gravity enhanced the density of the DM wake by ∼ 10%, which hints that the stellar response to the wake is likely sensitive to the density field of the DM wake. This further suggests that the stellar response may also reflect changes in the wake structure owing to the nature of the DM particle. <cit.> and <cit.> argue that the behavior of fuzzy DM (FDM) DF wakes can vary significantly from those in CDM. FDM is an ultralight bosonic scalar field DM with particle masses of m_a ∼ 10^-22 eV (; see also , , and for reviews), with typical particle de Broglie wavelengths on the order of kpc. FDM exhibits characteristic density fluctuations on size scales comparable to the de Broglie wavelength of the particles, often called “granules,” which arise due to wave interference between the particles. In the context of DF, <cit.> and <cit.> show that FDM granules interact with the perturbing object to produce highly stochastic density fields in the wake, which can result in an oscillatory drag force if the perturber is moving slowly. To test these predictions using the LMC's DF wake, we must first understand whether such an FDM wake would affect the motions and distribution of halo tracers differently than a CDM wake. In this paper, we present a suite of windtunnel-style N-body simulations of the LMC's DF wake under three different assumptions for the DM model: CDM with self-gravity, CDM without self-gravity, and FDM with self-gravity. We aim to determine the extent to which self-gravity and the assumption of the DM model impact the structure and kinematics of the LMC's DM wake. Additionally, to quantify the effect of the DM wake on the distribution and velocities of halo tracers (halo stars, globular clusters, or satellite galaxies), we include a separate population of stellar tracer particles. This paper is organized as follows: In <ref>, we outline the setup of our windtunnel simulations, including the motivation for our setup, how we choose our initial conditions, and the specifics of each DM model we consider. In <ref>, we present our results for the structure and kinematics of the DM wakes. <ref> discusses the response of the stellar halo to both the LMC and the DM wakes. In <ref>, we consider the effect of the FDM particle mass on our results, introduce a toy model for how the stellar wake might be observed from Earth, and determine the robustness of our results to observational errors. We also explore the effect of the chosen DM model on the LMC's orbit, and discuss the wake's influence as a perturbation to the MW's DM halo. <ref> examines the consequences of changing major assumptions in our simulation setup. Finally, we summarize our findings in <ref>. § SIMULATIONS Here, we describe the simulations we use to study the formation of the LMC's DF wake and corresponding response of the MW's stellar halo. In <ref>, we explain the motivation for and the design of our windtunnel setup. <ref> and <ref> describe the motivation for our choices of initial conditions for the DM and stars, respectively. In <ref>, we describe our CDM simulations which we perform with the code <cit.>. Our FDM simulations are performed with the module <cit.> for the code <cit.>, and are described in <ref>. §.§ Dark Matter Windtunnels To study the formation of the DM wake behind an LMC-like perturber, we use windtunnel-style simulations, in which the perturber is stationary while a “wind” of particles moves past the perturber with a common bulk velocity. 
The box's boundary conditions are set up such that one boundary acts as an inflow, the opposite boundary acts as an outflow, and the boundaries parallel to the wind's motion are periodic. In this way, the interaction of the perturber with the background wind can be studied in a maximally controlled environment. Windtunnel setups are commonly used in hydrodynamic simulations (e.g. ; ; ; ; ) and when studying DF in FDM backgrounds (; ). In hydrodynamic windtunnels, it is common to use inflow/outflow boundaries, where the wind particles are created at the inflow and removed at the outflow. Such boundaries also allow one to change the wind properties with time to mimic a perturber falling deeper into a host galaxy's halo <cit.>. In principle, these time-dependent wind properties would seem ideal for our simulations, but in practice, increasing the wind density and speed with time results in the gravitational collapse of the most dense regions of the wind, creating artificial shockwave-like structures. This restricts us to using a completely uniform background wind, in which the density, dispersion, and velocity remain constant throughout the simulation. Such a wind is most efficiently created by using fully periodic boundary conditions as in <cit.> and <cit.>. When the wind is given a bulk velocity, it loops through the box and naturally creates inflow/outflow-like boundaries. Of course, care must be taken to stop the simulation prior to the wake wrapping through the box boundary, and so all of our simulations are run for one-half the box crossing time at the bulk wind speed. All of our boxes are cubic and have side lengths of L = 600 kpc, which allows us to simulate wakes longer than the MW's virial radius. The LMC in our simulations is represented by an external, stationary Hernquist potential <cit.> at the center of the simulation volume. This potential is modeled using the density profile of 's LMC3 (see Table <ref>). A uniform background of DM (the DM “wind”) with constant mass density ρ̅ and isotropic velocity dispersion σ̅ moves across the LMC potential with a constant bulk velocity v in the +y-direction. In this study we choose two sets of wind properties, described in Section <ref>. The advantages of this setup over simulating the full LMC-MW interaction with live halos are threefold: * For FDM in particular, a windtunnel is far less computationally expensive. Specifically, live FDM halos require exceptionally high spatial and temporal resolution (see Section <ref>) that makes an FDM simulation analogous to those in prohibitively expensive. A windtunnel, by contrast, requires only that we resolve the relatively uniform wind instead of the complex structure of a halo. * A windtunnel setup allows us to study the role of the wake's self gravity by running simulations both with and without gravity between DM particles. N-body simulations with live halos by nature require self-gravity between their DM particles to keep the halos bound, while a uniform DM wind is not subject to this restriction. This allows us to separate how the MW's stellar halo reacts to only the LMC, vs. the LMC plus a DM wake. If this difference is observed, it will provide independent evidence that the LMC is moving through a DM medium. * Idealized windtunnels present the best stage for which to study DF wakes in the absence of other complicating factors present in live interaction simulations such as tides from the host galaxy, the host's reflex motion, and orbital resonances. 
Our setup thus streamlines analysis because we do not have to disentangle DF from any other process. Naturally, the drawback of the windtunnel is that there is no MW potential. As a result, the LMC “moves” in a straight line (as opposed to a curved orbit), and the wind speed and density are constant (as opposed to varying as the LMC plunges deeper into the MW's halo). Nevertheless, we use 's fiducial Simulation 3 (their LMC3 and MW1 galaxy models, summarized in Table <ref>) as a reference simulation to guide the setup of our simulations in an effort to make our wakes as realistic as possible. In Appendix <ref>, we show that the wakes in our Fiducial CDM windtunnel simulations closely resemble the wake formed in 's Simulation 3. §.§ Dark Matter Wind Parameters To select the DM wind parameters ρ̅, σ̅, and v, we choose a point along the LMC's orbit from our reference simulation, and obtain the Galactocentric position and velocity of the LMC at this point. Then, we calculate ρ̅ and σ analytically at the orbital radius of interest, using the MW1 density profile from (see Table <ref>). The wind bulk velocity v is then simply the LMC's orbital speed. Using this procedure to determine wind parameters, we simulate two different cases along the LMC's orbit: * determine the stellar halo's response to the wake is most easily observed at a Galactocentric distance of 70 kpc to maximize the stellar density while avoiding contamination from the Clouds and the Sagittarius stream. To best reproduce this response with a windtunnel, we want our `Fiducial' CDM wake to match 's wake at 70 kpc, which requires taking the wind parameters from 70 kpc as opposed to the LMC's present-day location or pericenter passage. Therefore, our Fiducial orbit case represents the MW's halo at 70 kpc, when the LMC is moving at ∼ 313 km/s. * To study the behavior of FDM vs. CDM wakes and the effect of self-gravity as a function of the LMC's speed and the MW halo's density, we also simulate an `Infall' orbit case. This Infall case represents the MW's halo at a distance of ∼ 223 kpc (between our MW model's R_200 and R_vir), when the LMC is moving at 120 km/s. Figure <ref> illustrates the selection of these parameters. In the left panel, we show the LMC's orbit since it first crossed the MW's virial radius, until the present day in the reference simulation. The orbit is projected onto the yz-plane, and we mark the locations from which we take each set of wind parameters. Meanwhile, the other panels show the LMC/MW separation, LMC orbital speed, and MW DM density and dispersion at the LMC's location as a function of time. We also mark each choice of windtunnel parameters in each panel. For both orbit cases, we run two CDM simulations and one FDM simulation, described in <ref> and <ref> respectively. See Table <ref> for a summary of the DM wind parameters in each simulation. §.§ Stellar Wind Parameters In addition to the DM, all of our simulations include a uniform wind of star particles to test the response of the stellar halo to both the LMC and the DM wake. In all simulations, regardless of orbit case or DM model, the stellar wind is composed of test particles at 1 M_⊙ resolution. Their density is calculated from the K-giant stellar halo density profile of <cit.> at a Galactocentric distance of 70 kpc, assuming the stellar halo has a total mass of 10^9 M_⊙ inside the MW's virial radius and is composed entirely of K-giants. 
The stars' velocity dispersion is 90 km/s, again motivated by measurements at 70 kpc (; ; ), and they move at the same bulk speed as the DM wind. See Table <ref> for a summary of the stellar wind properties. We reiterate that while the DM and stellar winds of the Fiducial case are both calibrated for a Galactocentric distance of 70 kpc, we use the same stellar wind for the Infall case (at 223 kpc) as there are few observational constraints on the stellar halo's properties at large distances. §.§ CDM Our CDM simulations are performed with the N-body and smoothed particle hydrodynamics code <cit.>. We use 10^8 DM particles in all CDM simulations, which results in a mass resolution of 5.0 × 10^3 M_⊙ for the infall wind, and 2.3 × 10^5 M_⊙ for the fiducial wind. All simulations use a softening length of 0.16 kpc, from Equation 15 of <cit.> with 's MW1 model. For our CDM initial conditions, we begin by determining the particle mass based on the box volume, number of particles, and the desired wind density ρ̅. Particle positions are set randomly throughout the box to create a wind of uniform density. All three velocity components are sampled from a Gaussian according to the isotropic velocity dispersion σ̅. Finally, every particle is boosted by the bulk wind velocity v in the +y-direction. An identical procedure is used to create the star initial conditions for all simulations in our suite, though we note again that the stellar wind uses a different density and velocity dispersion than the dark matter wind (see Table <ref>). For each orbit case, we run two CDM simulations: one without self-gravity between the DM particles (i.e. the ONLY forces on simulation particles are from the LMC), and one with self-gravity between the DM particles but NOT the star particles (i.e. all particles feel gravity from the LMC and DM particles, but not from the stars). Comparisons between these simulations allow us to isolate the effects of the DM wake's self-gravity from the influence of the LMC. §.§ FDM Our FDM simulations are performed using the module (; ) for the code <cit.>. uses a second-order pseudo-spectral method to solve the FDM equations of motion on a discretized, fixed grid, similar to the module introduced by <cit.> and <cit.>. For more detailed background on FDM as a DM candidate, we refer the reader to reviews by <cit.>, <cit.>, <cit.>, and references therein. For detailed descriptions of the numerical methods used here, we refer the reader to <cit.>, <cit.>, <cit.>, <cit.>, and references therein. However, we provide an abridged description and information specific to our windtunnel simulations here for completeness. The FDM is described by a single wavefunction, which takes the form of a complex-valued scalar field ψ = √(ρ)e^iθ , where ρ = |ψ|^2 is the mass density of the FDM and θ∈ [0,2π) is the phase. ψ obeys the Schrödinger-Poisson (SP) equations of motion in the non-relativistic limit: i ħ∂ψ/∂ t = [ - ħ^2/2m_a∇^2 + m_a V ] ψ ∇^2 V = 4 π G (ρ - ρ̅) , where m_a is the FDM particle mass, and V is the gravitational potential. Additionally, the velocity field of the FDM is encoded by the phase θ via u⃗ = ħ/m_a∇θ , where u⃗ is the velocity of the FDM. The FDM wavefunction (Equation <ref>) is discretized onto a grid of N^3 cells of size dx = L/N, where L is the side length of the simulation box, and evolved using a kick-drift-kick algorithm. 
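Before detailing the FDM timestep below, we note that generating the uniform CDM and stellar winds described above amounts to a few lines of NumPy. The sketch below is illustrative only: the density and dispersion values are placeholders standing in for the Table <ref> entries, the particle number is reduced, and the output would still have to be written in the simulation code's initial-conditions format.

import numpy as np

rng = np.random.default_rng(42)

L = 600.0           # box side length [kpc]
N = 1_000_000       # particle number (10^8 in the paper; reduced here for illustration)
rho_bar = 1.0e5     # wind density [Msun / kpc^3] (placeholder, not the Table value)
sigma = 150.0       # isotropic 1-D velocity dispersion [km/s] (placeholder)
v_bulk = 313.6      # Fiducial bulk wind speed along +y [km/s]

m_part = rho_bar * L**3 / N                    # particle mass [Msun]
pos = rng.uniform(0.0, L, size=(N, 3))         # uniform positions -> uniform density
vel = rng.normal(0.0, sigma, size=(N, 3))      # Gaussian components -> Maxwellian speeds
vel[:, 1] += v_bulk                            # bulk boost in the +y direction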
During one timestep dt, the potential is first calculated as V = ifft(-fft(4 π G (ρ - ρ̅))/k^2) + V_LMC, where fft and ifft indicate fast-Fourier and inverse fast-Fourier transforms, respectively, k is the wavenumbers associated with the grid cells, and V_LMC is the external LMC potential. Then, the first “kick” is performed using half the timestep: ψ←exp[-i (m_a/ħ)(dt/2) V]ψ. Next is the “drift,” performed in Fourier space as ψ̂ = fft(ψ), ψ̂←exp[-i (ħ k^2/(2 m_a)) dt] ψ̂, ψ = ifft(ψ̂), and finally, the timestep is completed by an additional half-step “kick” via Equation <ref>. Directly solving the SP equations as we do here has the advantage that it self-consistently describes the full wave dynamics of the FDM, including interference patterns (sometimes called “granules” or “fringes”) that arise from the velocity dispersion of the FDM and interactions with the LMC potential. Capturing the full wave behavior of the FDM is especially important in studies of DF, as the interference patterns that arise in FDM DF wakes can cause significant deviations from CDM, including stochastic oscillation of the drag force (). Other numerical descriptions of FDM such as SPH methods or fluid dynamics approaches via the Madelung transformation <cit.> either approximate or ignore the detailed wave behavior. The disadvantage of directly solving the SP equations is the enormous spatial and temporal resolution required for numerical convergence. The resolution criteria arise from the wavefunction phase θ, which cannot vary by more than 2π in a grid cell during one timestep (which gives the temporal resolution requirement), or between adjacent grid cells in the same timestep (which gives the spatial resolution requirement). To satisfy the temporal resolution requirement, uses the timestep criterion dt ≤ max[(m_a/(6ħ)) dx^2, h/(m_a |V|_max)] where |V|_max is the maximum of the absolute value of the potential (; ). The spatial condition may equivalently be thought of as the requirement that all velocities are resolved, i.e. that the largest velocity in the simulation does not exceed 2πħ/(m_a dx) (see Equation <ref>), or that the smallest de Broglie wavelengths in the problem are resolved: dx = L/N ≤ h/(m_a u_max). In practice, to ensure that the largest velocities in our simulations are well below u_max, we set the limit on dx according to the bulk wind velocity (the largest velocity scale in the simulation) and then divide by a further factor of 2π, such that our grid cell sizes follow dx ≤ ħ/(m_a v). For m_a = 10^-23 eV and our highest (Fiducial) wind speed of 313.6 km/s, the right-hand side evaluates to 0.611 kpc, which satisfies Equation <ref> when dx = 600 kpc/1024 = 0.586 kpc. To generate our FDM initial conditions, we take advantage of the property that ψ can be constructed according to a desired distribution function f as ψ(x⃗) ∝∑_j=0^N^3√(f(x⃗, u⃗_j)) exp[ i (m_a/ħ) x⃗·u⃗_j + i 2π ϕ_rand,j ] , where the sum is over all grid cells in 3-D, and ϕ_rand,j∈ [0, 2π) is a random number that ensures the phases of each mode are random and uncorrelated, i.e. the FDM has some isotropic velocity dispersion <cit.>. In practice, we desire an FDM wind that is equivalent to our CDM wind, such that it is uniform on the scale of the box and follows an isotropic, Maxwellian velocity distribution. 
To do this, we take the equivalent approach of constructing the initial conditions in frequency space before taking the inverse Fourier transform and then normalizing such that the mean FDM density is the desired wind density ρ̅: ψ̂∝√(exp[ - ( ħ/m_a)^2 (2π/L)^2 k^2/2σ^2]) e^i 2 πϕ_rand,j ψ = ifft (ψ̂) ψ←ψ√(ρ̅ / (1/N^3∑_j=0^N^3|ψ_j^2|)) Finally, we apply the bulk wind velocity boost by calculating the wavenumber associated with the desired wind velocity k_boost = vm_aL/2πħ and then applying the boost via ψ←ψ exp[ i k_boost2 π y/L]. For each orbit case (Fiducal, Infall; see Table <ref> and Figure <ref>), our primary choice for the FDM particle mass is 10^-23 eV. This is the largest particle mass that is feasible to simulate with N = 1024 and L = 600 kpc.[Our FDM simulations each take ∼ 370000 CPU hours at this resolution. We are restricted to L ≥ 600 kpc to simulate a sufficiently long wake, so increasing the particle mass by a factor of just two requires 2048^3 cells. Using the characteristic 𝒪(N log(N)) scaling of the FFT calculations that BECDM relies on, such simulations would take ∼ 800000 CPU hours each, which we consider prohibitively expensive.] Lastly, we justify our choice to use the same wind parameters for both our FDM and CDM simulations, as FDM halos differ fundamentally from CDM halos. Instead of being constructed from individual DM particles that obey a particular distribution function (as in CDM), FDM halos are better described as a superposition of eigenmodes that combine to produce a ground-state soliton core surrounded by a “skirt” of excited states that follow an NFW-like <cit.> density profile <cit.>. Thus, it is important to verify that our choice of DM wind parameters v, ρ̅, and σ̅ is reasonable in FDM given that we motivate them from a CDM simulation. As described in <ref>, v is given by the LMC's orbital speed, while ρ̅ and σ̅ come from the MW's halo. We discuss each parameter in turn: * In <ref>, we argue that the LMC's orbit is the same in both a CDM and FDM universe, so our choices of v are valid in both DM models. * The MW halo's density profile is expected to match in CDM and FDM provided we are interested in a regime well outside the soliton core such that the FDM halo follows an NFW-like density profile similar to a CDM halo. <cit.> show that the MW's soliton would have a radius of ≈ 0.18 kpc, so at the orbital distances of the LMC (≥ 40 kpc) we expect our choices of ρ̅ to be valid in both DM models. * <cit.> show in their Appendix A that far from the soliton core, there is a direct correspondence between the classical particle distribution function of a CDM halo and the eigenmodes that comprise an FDM halo. As such, we expect that for the region of interest in our windtunnel (i.e. a volume many times larger than the de Broglie wavelength and far from the core), using a CDM distribution function to set the FDM eigenmodes (Equation <ref>) is a reasonable approach (T. Yavetz, personal communication 2023). Ultimately, we expect our choice of wind initial conditions to be equally valid in CDM and FDM. It is also worth noting that the inner density profile of the LMC would likely be different in FDM due to the presence of a core. However, in this work we use the same LMC model in both our CDM and FDM simulations to ensure that any differences in our wakes are due purely to our choice of DM model and not the density profile of the perturber. We leave an investigation of the wake's dependence on the perturber's density profile to future work. 
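To make the pseudo-spectral update of the preceding subsection concrete, the sketch below implements a single kick-drift-kick step with NumPy FFTs. It is a minimal illustration in arbitrary units (ħ, G, and m_a are plain function arguments and the array names are hypothetical), not the production implementation, which among other things recomputes the potential for the closing half-kick and handles units and parallelization.

import numpy as np

def kdk_step(psi, dt, dx, V_lmc, m_a=1.0, hbar=1.0, G=1.0):
    """One kick-drift-kick update of the FDM wavefunction on a periodic grid (sketch)."""
    N = psi.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)            # angular wavenumbers
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2

    # Poisson solve in Fourier space: V_hat = -fft(4 pi G (rho - rho_bar)) / k^2
    rho = np.abs(psi)**2
    rhs_hat = np.fft.fftn(4.0 * np.pi * G * (rho - rho.mean()))
    k2_safe = k2.copy()
    k2_safe[0, 0, 0] = 1.0                                  # avoid 0/0; zero mode set below
    V_hat = -rhs_hat / k2_safe
    V_hat[0, 0, 0] = 0.0
    V = np.fft.ifftn(V_hat).real + V_lmc

    psi = np.exp(-1j * (m_a / hbar) * (dt / 2.0) * V) * psi          # half kick
    psi_hat = np.fft.fftn(psi)
    psi_hat *= np.exp(-1j * (hbar * k2 / (2.0 * m_a)) * dt)          # full drift
    psi = np.fft.ifftn(psi_hat)
    # closing half kick; a full implementation recomputes V from the drifted density here
    psi = np.exp(-1j * (m_a / hbar) * (dt / 2.0) * V) * psi
    return psi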
§ DARK MATTER WAKES In this section, we compare the structure and kinematics of the DM wakes in 1) CDM without self-gravity, 2) CDM with self-gravity, and 3) FDM with m_a = 10^-23 eV. §.§ Density Figure <ref> shows the density structure of the simulations with the Fiducial wind (see Table <ref>) for our three primary DM models/scenarios. In this figure and throughout this work, when we discuss the density of simulation particles, we will use the overdensity δρ = ρ/ρ̅ - 1 = Δρ / ρ̅, which measures the relative change of the density compared to the input wind density, i.e. an overdensity of 0.1 corresponds to a 10% increase in density over the background. Figure <ref> shows the projected overdensity of each simulation after they have been evolved for 0.7 Gyr,[The wind travels ≈ 225 kpc during this time] which is the latest time at which there is no evidence that the wake has begun wrapping through the box's periodic boundaries. In Figure <ref>, we begin by taking a 120-kpc wide slice about the box's midplane in z, that is we select particles/cells with z ∈ [-60, 60]. For the CDM simulations, we then calculate the projected (column) density of DM particles in a grid of 2 kpc wide bins in x and y, before calculating the overdensity according to Equation <ref>. For the FDM simulation, we calculate and display the column overdensity in each z-column of cells in the 120-kpc slab with the same x-y coordinates. The white cross in each panel marks the location of the center of the LMC potential. In each simulation, the DM wake is apparent as an overdensity extending from the center of the box in the +y - direction. To ease comparison between the DM models, we calculate the half-max of the overdensity in the CDM simulation with self-gravity (δρ = 0.38), and enclose the region with δρ higher than this with a contour in each panel. When placing the contours, we smooth the density with a Gaussian kernel of σ = 4 kpc, which reduces the noise associated with the FDM granules. The two leftmost panels show the two CDM simulations. Comparing these two panels, the DM wake becomes larger when adding self-gravity: in the left panel (without self-gravity), the region enclosed by the contour reaches a maximum width of ∼ 50 kpc and extends ∼ 130 kpc behind the LMC. Adding self-gravity (middle panel) increases the width of the contour to ∼ 80 kpc, and the length to ∼ 200 kpc. Importantly, the augmentation in wake length demonstrates that the DM wake's self-gravity plays a significant role in the wake's structure, acting to hold the wake together at larger distances behind the LMC. The right panel shows the FDM simulation (with m_a = 10^-23 eV; see Table <ref>). We stress that the relative fuzziness of the FDM wakes is not a resolution effect (in fact, the FDM simulation is at higher resolution than the CDM). Rather, this granularity is a characteristic property of the FDM that arises due to wave interference between the FDM particles in a velocity-dispersed medium. The FDM wake looks qualitatively similar to the CDM wake with self-gravity aside from the granularity. While some granules near the center of the wake reach much higher overdensities than are seen in CDM, these granules are small and the overall density structure is qualitatively similar to the CDM wake. In <ref>, we will discuss the impact of FDM particle mass on these results. We quantify the wake overdensity by plotting a time-averaged, cross-sectional profile of the wake along the x-direction (perpendicular to the wind motion). 
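Both the projected maps above and the cross-wake profiles described next reduce to straightforward histogramming of the particle data. The following NumPy sketch (hypothetical array names; bin width and slab thickness as quoted in the text) shows the overdensity version; the velocity-dispersion maps discussed later replace the mass histogram with a binned standard deviation of v_z.

import numpy as np

def overdensity_map(pos, m_part, rho_bar, bin_width=2.0, half_box=300.0, z_cut=60.0):
    """Projected overdensity, Sigma / Sigma_bar - 1, for particles with |z| < z_cut."""
    sel = np.abs(pos[:, 2]) < z_cut
    edges = np.arange(-half_box, half_box + bin_width, bin_width)
    H, _, _ = np.histogram2d(pos[sel, 0], pos[sel, 1], bins=[edges, edges])
    sigma_map = H * m_part / bin_width**2          # projected (column) mass density
    sigma_bar = rho_bar * 2.0 * z_cut              # background column through the slab
    return sigma_map / sigma_bar - 1.0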
Figure <ref> describes this process in a schematic. We begin by taking the same z-slice as we do for the projection plots (z ∈ [-60, 60]). Then, we select particles/cells in a 100 kpc-thick slice in y ∈ [50, 150] just behind the LMC, before binning the particles/cells along the x-direction in 10 kpc wide bins. Within each x-bin, we calculate the overdensity. To reduce noise and limit errors related to our choice of a specific snapshot, we repeat this process for five time-adjacent snapshots, spanning 100 Myr. The density in each x-bin is then averaged over the five snapshots, giving us a time-averaged profile of the density as a function of x across the wake. Figure <ref> shows the resulting profiles of the overdensity across the wakes generated in our Fiducial simulations (Figure <ref>) from t=0.6-0.7 Gyr. The upper panel shows the overdensity of each DM wake as a function of x, i.e. across the wake, and the lower panel shows the residuals with respect to CDM with self-gravity. The wakes show up as strong density peaks at the center of the box. The addition of self-gravity to the CDM wake raises the peak overdensity by roughly 10%, in agreement with the results of <cit.>. The granularity of the FDM wake shows up as oscillations with an amplitude of δρ∼ 0.05, though the average profile of the FDM wake matches the CDM wake with self-gravity. Figure <ref> is the same as Figure <ref> but for the Infall orbit case simulations after 2 Gyr (again the last timestep at which there is no evidence for the wake wrapping through the box). As a reminder, the wind in this case is roughly 100 times less dense and moving 1/3 as fast as the wind in the Fiducial case (see Table <ref>). The lower wind speed means particles spend a longer time near the LMC, creating a wider wake when compared to the Fiducial case: in the CDM simulation with self-gravity, the contour is ∼ 20 kpc wider in the Infall case than the Fiducial case (compare to Figure <ref>). The slower speed greatly reduces the effect of the wake's self-gravity, as the relative importance of the LMC's influence on the particles' motions increases. Comparing the CDM simulations (two leftmost panels) shows they are now almost indistinguishable in projection. Just as in the Fiducial case, the FDM wake appears similar to the CDM wake with self-gravity but is more granular. The density profiles in Figure <ref> reinforce this result, as we see the density profiles across the wakes of the two CDM simulations are very close, only showing a ∼ 3% difference at the peak. Meanwhile, the FDM wake's density oscillates about the CDM simulation with self-gravity with an amplitude of δρ∼ 0.05 as in the Fiducial wind case. An additional effect of the slower wind speed is that the wakes in the Infall case reach much higher overdensity peaks (δρ∼ 1.6, compared to ∼ 0.56 in the Fiducial case (see Figure <ref>). Overall, these results imply that the wake's self-gravity is only expected to become relevant at higher orbital speeds, i.e. once the LMC reaches a Galactocentric distance of ∼ 100 kpc. Therefore, observable effects of the wake's self-gravity (i.e. halo tracers' reaction to the wake) will likely not be present outside of ∼ 100 kpc. Meanwhile, the SMC is at a distance of 60 kpc and extends ∼ 30^o on the sky from the LMC <cit.>. The LMC's orbit extends past the SMC on the sky at a distance of ∼70 kpc . 
Together, the decreased effect of DM self-gravity outside of 100 kpc and the need for avoiding SMC contamination suggest the effects of the wake's self-gravity are best searched for at distances of 70-100 kpc. §.§ Velocity Dispersion Figure <ref> shows the z-velocity dispersion of the DM in each simulation of the Fiducial case, analogous to Figure <ref>. We follow the same binning procedure as in the previous section, with a few small differences: The z-slice is still from z ∈ [-60,60], however, we use an x-y grid of 3 kpc bins, and calculate the z-velocity dispersion in each bin. Finally, we apply a 6 kpc-wide Gaussian smoothing kernel. Similar to the overdensity, we report the dispersion as its relative difference from the mean dispersion, which we refer to as the velocity dispersion enhancement: δσ_z = σ_z / σ̅ - 1 = Δσ_z / σ̅, We also include a single contour placed at the half-max of the CDM simulation with self-gravity. The wake signature is an increase of the dispersion resulting from particles being deflected as they move past the LMC. Comparing the two CDM simulations in Figure <ref> (left and center panels), the effects of the wake's self-gravity on the velocity dispersion are similar to the density: when self-gravity is turned on, the wake becomes larger. Specifically, the region enclosed by the contour becomes ∼ 40 kpc longer and ∼ 20 kpc wider. For the FDM simulation (right), the granularity is still present in the velocity dispersion, causing an oscillatory behavior that washes out the smooth wake. The contour is much more irregular in shape, and encloses a ∼ 40 kpc narrower region than in CDM with self-gravity. In Figure <ref>, we compute the dispersion profile across the simulated wakes. We calculate these profiles identically to their density versions (Figures <ref> and <ref>), where, instead of overdensity, we calculate the z-velocity dispersion enhancement in each bin. Here, we again see a stronger (δσ_z ∼ 0.03) peak in the CDM wake when self-gravity is on versus when it is not included. Interestingly, unlike the density, the mean of the FDM wake's oscillations does not trace the CDM wake with self-gravity. Instead, the FDM profile is consistently similar to the CDM profile without self-gravity, showing that a self-gravitating FDM wake is colder than a self-gravitating CDM wake. Figure <ref> shows the z-velocity dispersion within the simulated wakes, but for the Infall orbit case. Overall, we see that the slower wind speed results in a stronger but less extended (in the y-direction) response in velocity dispersion compared to the Fiducial case. Like the density, the CDM wakes show a much smaller difference in the Infall case, as the LMC has more time to influence the particle velocities. The contours in both CDM simulations extend ∼ 75 kpc behind the LMC, and are ∼ 90 kpc wide. The FDM wake retains its characteristic stochasticity, though in the Infall case, the FDM response in velocity dispersion is significantly weaker than even the no-self-gravity CDM simulation, as the FDM contour is ∼ 30 kpc thinner and shorter than in CDM. The profile plots in Figure <ref> illustrate the dispersion profile across the wakes (in the x-direction). The peak dispersion is slightly (δσ_z ∼ 0.01) higher in the self-gravity-on case. The peak of the FDM wake is now much weaker than either CDM simulation, reaching δσ_z ∼ 0.23 as opposed to ∼ 0.29 in CDM. Taken together, Figures <ref> and <ref> show that FDM wakes are dynamically colder overall than CDM wakes. 
This can be explained by considering how FDM granules react to a gravitational potential. FDM particles collect into the characteristic granules that have a size of approximately the de Broglie wavelength. When the gravitational potential changes significantly on a scale comparable to or smaller than a granule, gravity becomes less effective at doing work on the granule <cit.>. This reduces the effectiveness of the LMC at heating the wake, and produces an FDM wake with lower dispersion than a CDM wake. The ∼ 20% reduction in the velocity dispersion response of the FDM wake compared to CDM is consistent across both the Infall and Fiducial orbit cases. This result suggests that DF wakes in FDM will be ∼ 20% colder than in CDM independent of the density of the medium or speed of the perturber. §.§ Velocity Divergence To help explain our results for the wake density and velocity dispersion, we also plot the divergence of the bulk velocity field to study how the particles are deflected by the LMC and the self-gravity of the wake. We again begin with the same 120-kpc wide slice about the z-midplane, and bin the particles/cells into an x-y grid, this time with 4 kpc bins. In each bin, we calculate the mean x and y velocity components, leaving us with a 150x150 grid of 2-D velocity vectors. We then calculate the divergence of this 2D velocity field. Finally, we apply a Gaussian kernel of σ = 12 kpc to the result to reduce noise. Figure <ref> shows the resulting divergence maps for the Fiducial simulations. The wake signature shows up as regions of negative divergence (blue) tracing where the bulk flow of wind particles is converging. In all DM models, the region of strongest convergence is directly behind the LMC, where its gravity most strongly deflects particles. After being deflected, the particles cross the undeflected wind at larger impact parameters and create a region of converging flow that effectively traces the boundary of the wake. The crossing streams of particles behind the LMC produce the enhancement in the velocity dispersion seen in Figure <ref>. Comparing the CDM simulations, we can now pinpoint the effect that the wake's self-gravity has on the particle kinematics and wake structure. In the simulation without self-gravity, the particles deflected by the LMC simply continue on straight paths, creating a region of diverging velocities immediately downstream of the LMC. When self-gravity is turned on, the pull of the wake continues to deflect particles towards the center of the box, eliminating the diverging region, narrowing the wake boundaries, and enhancing the wake's density and velocity dispersion. As in Figure <ref>, the FDM reacts less coherently to the LMC in velocity space, and the granularity persists in this kinematic signature. Despite the FDM simulations having self-gravity, the FDM wake shows regions of diverging velocity within the wake of a similar size scale to the granules in velocity space. Figure <ref> illustrates the velocity divergence of the wake produced in the Infall case for all three DM models. At this lower wind speed, the particles can be deflected significantly before they reach the LMC center. Overall, particles are deflected more strongly when the wind speed is reduced, leading to wider wake boundaries where these strongly deflected particles cross over the undisturbed wind. 
The larger deflection angles make it more difficult for the wake's gravity to keep the deflected streams together, and while the self-gravity results in a slight narrowing (by ∼ 20 kpc at y=150 kpc) of the downstream diverging region, it is not sufficient to eliminate the diverging flow behind the LMC. Larger deflections also cause a stronger velocity dispersion signature in the Infall case (see Figure <ref>) compared to the Fiducial case (see Figure <ref>). In the Infall case, the wake boundaries are much less clear in FDM, just as they are in the Fiducial case (see Figure <ref>). While the converging region in front of the LMC is clear, the granularity almost entirely washes out the wake boundaries. Overall, the velocity divergence illuminates several results from the previous two sections. In the Fiducial case, the wake's self-gravity eliminates the diverging flow in the center of the wake, raising the wake's density by 10% and increasing the distance the wake takes to decay by ∼ 35%. In the Infall case, the diverging region remains regardless of the wake's self-gravity due to the increased deflection angles of the particles, which explains why the CDM Infall wakes look very similar regardless of self-gravity. FDM's granularity in the velocity divergence persists across both wind speeds and densities, showing that FDM does not react as coherently to a perturber as CDM. In turn, FDM wakes have lower velocity dispersions than their CDM counterparts. § STELLAR WAKES Now, we turn our attention to the observable stellar counterpart of the LMC's wake. As a reminder, the stellar wind input parameters (see Table <ref>) are meant to mimic the MW's stellar halo at 70 kpc, just as the DM initial conditions in the Fiducial orbit case match the MW's DM halo at 70 kpc (see Table <ref>). In <ref>, we argued that the influence of DM self-gravity on the wake properties should be most observable at distances of 70-100 kpc. Extending this argument to the stellar wake, the best observational signatures of the stellar wake and the DM wake's influence on it should be at 70-100 kpc. For this reason, we focus only on the Fiducial simulations in our discussion of stellar wakes. As with the DM wakes, we examine the density and velocity structure of the stellar wakes and identify signatures with which to confirm: 1) the presence of a stellar wake; 2) the presence of a DM wake; and 3) distinguishing features between a CDM and an FDM wake. The observability of these signatures will be discussed further in Section <ref>. Figure <ref> shows the density structure of the stellar wakes in the Fiducial simulations. To make these plots, we use a procedure identical to Figure <ref>, with a single additional step of smoothing the resulting density fields with a Gaussian kernel with σ = 4 kpc. This additional smoothing is done to reduce the noise that results from sampling ∼ 100 times fewer stars than DM particles. We again include a contour which encloses the region with overdensities higher than the half-max of the CDM simulation with self-gravity. The left panel shows the stellar wake in the absence of DM self-gravity, i.e. the stellar wake that would form due to only the passage of the LMC. The contour extends for roughly 150 kpc behind the LMC. In contrast, the center panel shows the stellar wake that forms when the stars feel the gravity from the CDM wake. The most striking difference is that the contour extends ∼ 50 kpc farther behind the LMC than in the wake without self-gravity. 
This demonstrates that the DM wake's self-gravity holds the stellar wake together. Observationally confirming the existence of a stellar wake with δρ_*≳ 0.6 more than 150 kpc behind the LMC would provide strong evidence for the existence of a DM DF wake behind the LMC. Comparing the self-gravity-on CDM simulation to the FDM simulation (right), however, reveals little difference in the density of the stellar wakes formed under the gravity of different DM particles. The profile-style plots (e.g. Figures <ref> and <ref>) become very noisy when made with star particles due to the ∼ 100 times smaller sample sizes in each bin (with respect to DM). Instead, to compare the overall strength of the stellar response in each DM model, we compute an estimate of the overall wake density (Figure <ref>), time-averaged over five snapshots spanning 100 Myr of evolution. In detail, for each of the five snapshots, we compute a 2-D histogram of the quantity of interest (exactly as in Figure <ref> for the density, or in Figure <ref> for the dispersion). For each histogram, we select bins with values that are over half that of the maximum bin, then take the median of these. The values reported in Figures <ref> and <ref> are the time-averages and standard deviations of the medians. Figure <ref> shows the time-averaged median density of the stellar wakes in each Fiducial simulation. The stellar wake reaches an overdensity of ∼ 0.48 when the DM wake's gravity is not included, compared to ∼ 0.58 when including the gravity of a CDM wake. The stellar wake in the FDM simulation reaches δρ_* ∼ 0.56, similar to the CDM with self-gravity case. In short, the gravity of a DM wake raises the density of the stellar wake by δρ_* ∼ 0.1, and extends the density response by ∼ 50 kpc. CDM and FDM wakes do not leave significantly different signatures in the density of the stellar wake. Figure <ref> shows the z-velocity dispersion of the stars in the Fiducial simulations, exactly as Figure <ref> but for the star particles. The smoothing length is also increased to 9 kpc to mitigate the increased noise associated with the relatively low number of star particles. The velocity dispersion signature in the CDM simulation without self-gravity (left) is ∼ 20 kpc narrower than when DM self-gravity is included (center and right). Additionally, when compared to the CDM simulation without self-gravity, the contour tapers more slowly in the CDM simulation with self-gravity and more slowly still in the FDM simulation. Figure <ref> shows the time-averaged median enhancement in the z-velocity dispersion in the same fashion as Figure <ref>. A CDM wake's gravity raises the dispersion of the stellar wake by δσ_z*∼ 0.01 compared to when the DM wake's gravity is not present. Importantly, an FDM wake heats the stars more than a CDM wake: the velocity dispersion of the stars is δσ_z*∼ 0.01 higher in the FDM simulation than the CDM simulation with self-gravity. We plot the divergence of the x-y velocity field to illuminate the density and kinematic structure of the stellar wakes (Figures <ref> and <ref>) in Figure <ref>. In the absence of the DM wake's gravity (left panel), we again see a region of converging flows immediately behind the LMC (blue), followed by a region of diverging flows (red) farther downstream as deflected stars pass by each other. Adding the gravity of the DM wake eliminates the diverging region just as it does for the DM particles, enhancing the density and velocity dispersion of the stellar wakes with DM self-gravity. 
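For reference, the velocity-divergence maps used here (and for the DM in the previous section) follow directly from the binned mean-velocity field. A minimal sketch with NumPy/SciPy is given below (hypothetical array names; bin width and smoothing as quoted in the text; empty bins produce NaNs that should be masked or filled in practice).

import numpy as np
from scipy.stats import binned_statistic_2d
from scipy.ndimage import gaussian_filter

def divergence_map(pos, vel, bin_width=4.0, half_box=300.0, z_cut=60.0, smooth_kpc=12.0):
    """Divergence of the binned 2-D (x-y) mean-velocity field for particles with |z| < z_cut."""
    sel = np.abs(pos[:, 2]) < z_cut
    edges = np.arange(-half_box, half_box + bin_width, bin_width)
    vx = binned_statistic_2d(pos[sel, 0], pos[sel, 1], vel[sel, 0],
                             statistic="mean", bins=[edges, edges]).statistic
    vy = binned_statistic_2d(pos[sel, 0], pos[sel, 1], vel[sel, 1],
                             statistic="mean", bins=[edges, edges]).statistic
    # d(vx)/dx + d(vy)/dy on the grid (first array axis is x, second is y)
    div = np.gradient(vx, bin_width, axis=0) + np.gradient(vy, bin_width, axis=1)
    return gaussian_filter(div, sigma=smooth_kpc / bin_width)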
The divergence of the stellar velocities looks similar between CDM with self-gravity and FDM. Altogether, the velocity dispersion enhancement of the stellar wake is slightly (∼ 5%) higher in response to an FDM wake compared to a CDM wake. The only difference in the forces on the stars in both cases is caused by the differences in the density fields of the DM wakes. In Section <ref>, we showed that FDM granules persist and are even strengthened inside of a DF wake. Therefore, we expect that the additional heating of the stellar wake in the FDM simulation is due to the scattering of stars by FDM granules. This so-called “granule heating” has been well-studied in other contexts (e.g. , , , , , ) and is a known property of FDM. In <ref>, we will discuss the role of granule heating and its dependence on FDM particle mass further. Ultimately, we have demonstrated that the gravity of the DM wake plays an important role in shaping the response of the stars. Specifically, the gravity of the DM wake raises the overdensity of the stellar response by ∼ 10% and extends the stellar wake's density response by ∼ 50 kpc. The enhancement in the velocity dispersion within the stellar wake is ∼ 5 % higher when CDM self-gravity is turned on, and ∼ 5 % higher in FDM compared to CDM. § DISCUSSION In this section, we discuss the implications of our results in a wider context. We introduce a toy model for the observables of the stellar wake in <ref>, assess the sensitivity of the LMC's orbit to the choice of DM particle in <ref>, and discuss the DM wake's mass and its impact as a perturber of the MW's dark halo in <ref>. §.§ Observational Predictions In Section <ref>, we presented three key predictions for the stellar wake. The gravity of a DM wake will: 1) enhance the overdensity of the stellar wake by roughly 10%; 2) extend the length of the stellar overdensity and kinematic response by a few tens of kpc; and 3) the velocity dispersion enhancement of the stellar wake will be mildly (∼ 5%) higher in response to an FDM wake than a CDM wake. In this section, we assess the extent to which these results could be observable by introducing a toy model to approximate how our windtunnel wakes would be viewed from Earth. Using this toy model, we study the density and radial velocity dispersion of the stellar wake with the addition of simulated distance and radial velocity errors. To study how the stellar wake will appear when observed from Earth, we transform our “windtunnel” or simulation box coordinate system to Galactic (l,b,r) coordinates, in which the origin is the solar system barycenter, the x-axis points towards the Galactic Center, and the z-axis is normal to the Galactic plane. The LMC's path in the windtunnel is straight as opposed to a curved orbit, so we cannot exactly reproduce the appearance of the wake on the sky, nor can we reproduce the effect of the collective response. However, we can carefully choose the transformation to ensure we are best-reproducing the orientation and location of the wake in the region of sky where we want to make our observations. In this case, following and our argument in <ref>, we want to focus our observations where the wake is at a Galactocentric distance of 70-100 kpc. Therefore, our goal is to transform from windtunnel coordinates such that the straight windtunnel path is tangent to the LMC's orbit at a Galactocentric distance of 70 kpc, while the LMC itself is as close to its present-day location on the sky as possible. 
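Schematically, the windtunnel-to-sky mapping is a rigid rotation plus translation into Galactocentric coordinates followed by a standard frame transformation. The sketch below uses astropy purely for illustration; the rotation matrix R and offset t that realize the tangency condition are taken as given inputs here, and the transformations actually used in this work are specified in Appendix <ref>.

import numpy as np
import astropy.units as u
import astropy.coordinates as coord

def windtunnel_to_galactic(pos_wt, R, t):
    """Map windtunnel positions [kpc] to Galactic (l, b, distance), given an assumed
    3x3 rotation matrix R and Galactocentric offset t [kpc] (both are inputs, not derived)."""
    pos_gc = pos_wt @ R.T + t
    c = coord.SkyCoord(x=pos_gc[:, 0] * u.kpc,
                       y=pos_gc[:, 1] * u.kpc,
                       z=pos_gc[:, 2] * u.kpc,
                       representation_type="cartesian",
                       frame=coord.Galactocentric())   # default frame parameters assumed
    gal = c.transform_to(coord.Galactic())
    return gal.l.deg, gal.b.deg, gal.distance.to(u.kpc).value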
Our coordinate transformations are performed with version 4.2.1 (; ; ) and are described in Appendix <ref>. The result of the coordinate transformation is shown in Figure <ref>. We plot the LMC's orbit in the reference simulation in Galactocentric coordinates in red as in Figure <ref>. In our toy observational model, the path of the LMC in the windtunnel is tangent to the LMC orbit from at 70 kpc from the Galactic center, which is the distance that our Fiducial wind parameters are taken from. The Galactic center is denoted by the `x.' To estimate how observational uncertainties affect our results, we also include Gaussian distance and radial velocity errors in our model. We choose two levels of errors, motivated by the performance of contemporary surveys. The distance errors are 10% and 20%, typical for spectro-photometric distance measurements from DESI <cit.> and the H3 survey <cit.>. For the radial velocity errors, we choose 1 and 10 km/s. 1 km/s reflects the performance of spectroscopic radial velocity measurements from DESI <cit.>, Gaia (; ), and H3 <cit.>, while 10 km/s provides a reasonable worst-case. With our toy model in-hand, we can now use it to study how the stellar wake might be observed. Figure <ref> shows all-sky Mollweide projections (made with ; )[https://healpix.sourceforge.io/] in Galactic coordinates of the overdensity of stars with distances of 70 - 100 kpc in the Fiducial simulations after 0.7 Gyr of evolution. The bin size is 1.16^o and the resulting density map is smoothed by a Gaussian kernel with σ = 15^o. Each panel corresponds to a DM model, with CDM without self-gravity on the left, CDM with self-gravity in the center, and FDM on the right. Each panel shows the path of the LMC in the windtunnel as the solid white line, and the LMC orbit from the reference simulation as the dashed white line; we see good agreement between the position of both paths on the sky. As in Figure <ref>, we enclose the region with an overdensity higher than 0.34 with a contour; this level is the half-maximum of the CDM with self-gravity panel. In agreement with , the stellar wake appears as an overdense region in the Galactic southeast, ranging from l ∼ 0 - 120, and b ∼ -80 - 0. Notably, the extension of the stellar wake owing to the DM wake's gravity is readily observable: while the stellar wakes in the CDM simulation with self-gravity and FDM simulation do not decay below δρ_* of 0.38 until b ≈ 0, the stellar wake decays to this level by b ≈ -20 in the simulation without self-gravity. To quantify the differences in the strength of the response in this observed frame, we use the same procedure as in <ref>: we calculate the median wake density or velocity dispersion in bins that are higher than half of the maximum bin. For each simulation, we repeat this for five snapshots spanning 100 Myr, and then report the average and standard deviation of the medians from the five snapshots. In this section, we calculate the quantity of interest in on-sky bins as in Figures <ref> and <ref>. To estimate the number of stars that need to be observed to distinguish between the simulations, we also downsample the number of star particles, i.e. after adding simulated errors, we sample a fixed number of stars with distances between 70 and 100 kpc from the entire sky. Without downsampling, there are approximately 10^5 stars with distances in this range based on the stellar wind density and the volume of the shell. 
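A minimal sketch of the error injection, downsampling, and the "median of bins above half-maximum" statistic used throughout the rest of this section is given below (helper names are hypothetical; error levels and star counts as quoted in the text).

import numpy as np

rng = np.random.default_rng(0)

def observe(dist, v_r, dist_frac_err=0.10, v_err=1.0, n_stars=10_000):
    """Apply Gaussian distance and radial-velocity errors, then downsample to n_stars."""
    d_obs = dist * (1.0 + dist_frac_err * rng.standard_normal(dist.size))
    v_obs = v_r + v_err * rng.standard_normal(v_r.size)
    keep = rng.choice(dist.size, size=n_stars, replace=False)
    return d_obs[keep], v_obs[keep]

def median_above_half_max(sky_map):
    """Median of on-sky bins exceeding half the maximum bin value."""
    vals = sky_map[np.isfinite(sky_map)]
    return np.median(vals[vals > 0.5 * vals.max()])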
For our plots, we choose three different levels of downsampling, selecting 1.5 × 10^4, 10^4, and 10^3 stars. These sampling levels correspond to selecting approximately 1300, 900, and 70 stars within the wake (i.e. inside the contours in Figure <ref>), respectively. Figure <ref> shows the time-averaged median overdensity of the stellar wake between 70 and 100 kpc with different observational errors and sampling rates. The black circles show the mean and standard deviation of the median overdensity without any observational errors. The errorbars on each point are computed via bootstrapping, i.e. for each of the five snapshots we randomly sample errors and star particles 50 separate times such that the final reported median overdensity is over 250 samples. With no errors and 1.5×10^4 stars, when we compare the CDM simulations with and without self-gravity, the stellar wake's overdensity increases by ∼ 0.05 with self-gravity. In the observational frame, we now also see a further increase in density in the FDM simulation, with the FDM simulation reaching δρ_* ∼ 0.07 higher than the CDM simulation with self-gravity. Note that this is opposite to the trend we saw in <ref>, where the stellar wake was slightly less dense in FDM compared to CDM. In the observational model, we are now looking at a 30 kpc thick slice of the wake, as opposed to 120 kpc in <ref>, so this is most likely an effect of the viewing angle and distance selection of stars. When adding observational errors and reducing the number of stars to 10^4, the differences between the simulations remain visible. Sampling only 10^3 stars, however, is not sufficient to see the differences between the simulations. Figure <ref> shows all-sky maps of the enhancement in the radial velocity dispersion of the stars in the same fashion as Figure <ref>. In this plot, we report the velocity dispersion as the difference from the shell average Δσ_r* = σ_r* - σ̄_r*. In all panels, the velocity response traces the location of the density response well. The addition of the DM wake's gravity extends the length of the velocity response, as it decays to below 5.26 km/s by b≈ -30 in the CDM simulation without self-gravity, compared to b≈0 in both simulations with the DM self-gravity. It is also worth mentioning that we expect an increase in both the longitudinal and latitudinal velocity dispersion in the wake. At these distances (70-100 kpc), we measure this increase to be approximately 0.03 mas/yr. For our purposes of distinguishing between DM models, the qualitative differences between the simulations are the same as for the radial velocity dispersion so we do not elaborate on the proper motions here for brevity. Figure <ref> shows the median velocity response averaged over 100 Myr with observational errors and different numbers of stars, similar to Figure <ref>. Here, we see the same trend in the observational frame that we did in the simulation box frame in <ref>: with 1.5 × 10^4 stars, the velocity dispersion enhancement in the stellar wake is lowest (∼ 5.0 km/s) without a DM wake's gravity, higher in response to a CDM wake (∼ 5.6 km/s), and highest in response to an FDM wake (∼ 6.5 km/s). The addition of observational errors does not affect this trend, i.e. the simulations are still distinguishable with the largest errors we consider. 10^4 stars is also enough to distinguish the simulations, though the differences between CDM with self-gravity and FDM become close to 1-σ with 10 km/s radial velocity errors and 20% distance errors. 
The differences between the simulations are not visible while sampling only 10^3 stars. With the caveat that our observational framework is only a toy model, we find that the general results reported in <ref> still hold. In particular, we have demonstrated several important qualitative results: Distinguishing the strength of the density and kinematic response of the stellar wake between DM models should be possible with ≥10^4 stars across the entire sky (≳ 900 stars within the wake) with distances between 70 and 100 kpc. This sampling rate corresponds to a number density of 3.6×10^-3 kpc^-3, which agrees with the number density of stars that report is required to confidently detect the wake. In other words, if we observe enough stars to detect the wake, we have enough stars to distinguish between the DM models considered here. Provided this sampling rate is achieved, we find that the telltale sign of the presence of a DM wake is the length of the response, as both the density and velocity dispersion responses are lengthened by over 20^o on the sky when the self-gravity of the DM wake is included. Additionally, we find that differences in the kinematics of the stellar wake between a CDM and FDM universe are still visible when accounting for the viewing perspective and observational errors. As also reported by , we find that the increased velocity dispersion is a characteristic signature of the wake that differentiates it from cold substructure such as stellar streams. Ultimately, these results demonstrate that kinematic information is crucial when making observations of DF wakes, both for detecting the wake and inferring the nature of its DM component. §.§ Dynamical Friction Drag Forces and the LMC's Orbit In this section, we compare the behavior of the DF drag force felt by the LMC due to the DM wakes in our simulations and discuss the impact of DM microphysics on the LMC's orbit. To determine the acceleration due to DF in our simulations, we calculate the y-component of the gravitational acceleration that would be felt by a constant-density sphere 5 kpc in radius at the center of the box due to all DM particles in the simulation. When done at each timestep, this gives us an approximation of the DF acceleration felt by the LMC as a function of time. Additionally, we calculate the expected DF acceleration using the classic formula from <cit.>: a_DF = (4π G^2 M ρ̅ ln(Λ)/σ̅^2) (1/(2X^2)) [ erf(X) - (2X/√(π)) e^(-X^2) ], where erf is the error function and X = v/(√(2) σ̅). In these equations, we use the input wind parameters and LMC mass, i.e. M from Table <ref>, and ρ̅, v, and σ̅ from Table <ref>. For the Coulomb logarithm, we follow <cit.>, <cit.>, and , using ln(Λ) = max[L, ln(r/Ca)^α], where r is the distance between the satellite and its host, a is the satellite's scale radius, L=0 and α=1, and C is a constant. Here, r is the separation between the LMC and MW at the point in the reference simulation that we base our wind parameters on (70 kpc for the Fiducial case and 223 kpc for the Infall case), and a is the LMC's scale radius from Table <ref>. For C, we pick values such that the analytic DF acceleration roughly agrees with the measured acceleration when the wake reaches the end of the box. For the Fiducial wind, this is C=0.52, and for the Infall wind, C=2.90. Figure <ref> shows the measured and analytic DF accelerations for our simulations, with the Fiducial wind case in the left panel and the Infall wind case on the right. 
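A direct implementation of the Chandrasekhar estimate above is straightforward. The sketch below is illustrative only: the numerical values of M, ρ̅, σ̅, v, r, and the LMC scale radius a are placeholders standing in for the tabulated wind and LMC parameters, which are not reproduced here.

```python
import numpy as np
from scipy.special import erf

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def coulomb_log(r_kpc, a_kpc, C, L=0.0, alpha=1.0):
    """ln(Lambda) = max[L, ln(r / (C a))^alpha], following the text."""
    return max(L, np.log(r_kpc / (C * a_kpc)) ** alpha)

def chandrasekhar_df(M_msun, rho_msun_kpc3, sigma_kms, v_kms, lnL):
    """Chandrasekhar dynamical-friction deceleration, in (km/s)^2 / kpc,
    for a satellite of mass M moving at speed v through a Maxwellian
    background with density rho and 1D dispersion sigma (equation above)."""
    X = v_kms / (np.sqrt(2.0) * sigma_kms)
    bracket = erf(X) - (2.0 * X / np.sqrt(np.pi)) * np.exp(-X**2)
    return (4.0 * np.pi * G**2 * M_msun * rho_msun_kpc3 * lnL
            / sigma_kms**2) * bracket / (2.0 * X**2)

# Placeholder numbers (the real ones come from the wind/LMC parameter tables):
lnL = coulomb_log(r_kpc=70.0, a_kpc=20.0, C=0.52)
a_df = chandrasekhar_df(M_msun=1.8e11, rho_msun_kpc3=1.1e5,
                        sigma_kms=120.0, v_kms=300.0, lnL=lnL)
```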
In each simulation, the strength of the drag increases with time as the wake forms, before plateauing once the faster-moving wake particles begin to wrap through the box. Overall, the drag from the Fiducial wake is slightly more than an order of magnitude stronger than the drag from the Infall wake, which aligns with the ρ/v^2 scaling expected from Equation <ref>. In the Fiducial case, we see that the reduction in wake size and density when self-gravity is removed translates to a weaker drag force - the acceleration is ∼ 10% weaker in the CDM simulation without self-gravity vs. with self-gravity. Meanwhile, the behavior of the FDM drag is consistent with the predictions of <cit.>, who calculated that the time-averaged drag force on the LMC should be well-approximated by classical DF (i.e. with non-interacting background particles). In the Infall case, we see closer agreement between the two CDM simulations, as the effect of the wake's self-gravity is diminished at this lower wind speed. Ultimately, our result that both DM models produce a similar drag force regardless of the wind speed and density (when DM self-gravity is included) implies that the LMC's orbit would not be impacted by the assumption of a CDM vs. FDM universe. §.§ The Mass of the Wake In this section, we calculate the mass of the DM wakes in our simulations, and develop a basic framework to understand the DM wake as a perturbation to the MW's DM halo. To calculate the wake mass in each of our simulations, we begin by defining a rectangular region that roughly contains the wake (i.e. the region where δρ≥ 0.1; x ∈ [-100, 100], y ∈ [-50, 300], z ∈ [-100, 100] for the Fiducial wake; x ∈ [-150, 150], y ∈ [-50, 300], z ∈ [-150, 150] for the Infall case). At a particular timestep, the wake mass is estimated by taking the difference between the total DM mass within the region at that timestep and the mass within the region at the start of the simulation, i.e. the region's volume multiplied by ρ̅ from Table <ref>. As we have done throughout this work, when estimating the wake mass, we average over five snapshots spanning 100 Myr of evolution. The top row of Figure <ref> shows the masses of all DM wakes in our simulations. The left panel shows the Fiducial wind after 0.7 Gyr of evolution, and the right panel shows the Infall wind after 2 Gyr of evolution. The mass of the Fiducial wake is roughly comparable to the LMC, while the mass of the Infall wake is roughly an order of magnitude lower. In both the Infall and Fiducial case, the FDM wake and CDM wake with self-gravity have similar masses, while the CDM wake without self-gravity is of order 10% less massive than either wake with self-gravity. To get a rough approximation of the impact of the DM wake as a perturbation to the MW's DM halo, we also calculate the distance at which an object with the wake's mass would need to be behind the LMC to produce a similar drag force as the wake. The middle row in Figure <ref> lists the DF acceleration during the same time frames as the top panel, taken from Figure <ref>. The distances at which an object of the wake mass would produce a gravitational acceleration equivalent to DF are shown in the bottom row of panels in Figure <ref>. In the Fiducial case, the distances all agree, and are approximately 100 kpc. The Infall distances are roughly 135 kpc, and also show agreement between each DM model. 
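The wake-mass bookkeeping and the equivalent-distance estimate described above amount to a few lines of array arithmetic. In the sketch below, the particle positions and mass, the background density, and the region bounds (the Fiducial values quoted in the text) are placeholders, and equal-mass DM particles are assumed.

```python
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def wake_mass(pos_kpc, m_particle_msun, rho_bg_msun_kpc3,
              xlim=(-100.0, 100.0), ylim=(-50.0, 300.0), zlim=(-100.0, 100.0)):
    """DM mass excess inside the rectangular wake region: total particle
    mass in the region minus the region's volume times the background density."""
    x, y, z = pos_kpc[:, 0], pos_kpc[:, 1], pos_kpc[:, 2]
    inside = ((x > xlim[0]) & (x < xlim[1]) &
              (y > ylim[0]) & (y < ylim[1]) &
              (z > zlim[0]) & (z < zlim[1]))
    volume = (xlim[1] - xlim[0]) * (ylim[1] - ylim[0]) * (zlim[1] - zlim[0])
    return m_particle_msun * inside.sum() - rho_bg_msun_kpc3 * volume

def equivalent_distance_kpc(M_wake_msun, a_df_kms2_per_kpc):
    """Distance at which a point mass equal to the wake mass would produce
    the measured DF acceleration: d = sqrt(G * M_wake / a_DF)."""
    return np.sqrt(G * M_wake_msun / a_df_kms2_per_kpc)
```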
In summary, we see that the Fiducial wake acts like an additional LMC-mass object that trails the LMC at a distance of 100 kpc, while the Infall wake is equivalent to an object with roughly 10% the mass of the LMC trailing at a distance of 135 kpc. Additionally, this behavior is insensitive to the assumption of CDM or FDM. § DISCUSSION: SIMULATION PARAMETERS In this section, we explore how our results are affected by changing certain assumptions in our simulation setup. We assess the importance of the FDM particle mass to our results in <ref>. <ref> discusses the impact of the uncertainty in the LMC's mass on our observational predictions. We quantify the effect of the stellar halo's velocity dispersion in <ref> and discuss implications for the wake's impact on cold stellar substructures. Finally, we discuss the prospects for using the wake to constrain alternative DM models beyond FDM in <ref>. To study each of these effects, we run additional simulations which are summarized in Table <ref>. §.§ The Effect of FDM Particle Mass As the behavior of FDM is strongly dependent on the particle mass m_a, it is important to place our choice of m_a = 10^-23 eV into context within the literature and test the extent to which a different choice would affect our results. Table <ref> compiles a list of recent papers which report a constraint on m_a through an astrophysical technique (see also for a recent review, and Figure 1 of <cit.> for a graphical approach). We do not guarantee that this list is exhaustive, nor do we include constraints from laboratory or direct-detection experiments. Nevertheless, we hope to demonstrate that FDM particle mass constraints are abundant and may be derived with a very wide range of methods. Notably, the constraints we list here span the entire range of FDM masses (10^-26 - 10^-16 eV), though almost all come with caveats. One common method of constraining m_a relies on trying to detect soliton density cores in dwarf galaxies (e.g. ; ; ). The widest constraint comes from <cit.>, who report that a single-component FDM is incompatible with the observed differences between Fornax and Segue 1's central density profiles. This result relies heavily on the measurement of the Ultra-Faint Dwarf (UFD) density profile slopes, and relaxing the core profile slope constraint from <cit.> results in a lower bound of m_a >6 × 10^-22 eV. Moreover, <cit.> report that there is significant scatter in the FDM soliton core-halo mass relation, which may weaken constraints derived by examining DM density profiles. Meanwhile, there is a growing tension between the requirements that FDM is light enough that it produces sufficiently large cores in dwarf galaxies (; ) and heavy enough that it is consistent with the small-scale matter power spectrum as inferred from the Lyman-α forest <cit.> and other cosmological probes <cit.>. Such cosmological probes of the FDM mass typically rely on comparisons to simulations performed with traditional N-body codes which modify the linear power spectrum of the initial conditions (and sometimes the transfer function) to match that expected of FDM using <cit.>. <cit.> argue that this approach is valid for power spectrum modeling, though such methods do not consider non-linear effects. Large-scale (box sizes of L ≳ 10 Mpc/h) cosmological simulations with full SP solvers are becoming available (; ), which could be used to test the Lyman-α results with higher-fidelity simulations. 
Other methods (such as this work) rely on examining the gravitational effect of FDM granules and/or subhalos on luminous matter. There is a growing class of papers that examine dynamical heating of stars by FDM substructures as a method of placing upper limits on m_a (e.g. ; ; ; ; ). Again, none of these studies utilizes a fully self-consistent numerical treatment of the FDM, instead approximating granules as massive extended particles or utilizing only the subhalo mass function. A very recent study by <cit.> provides one of the more stringent kinematic constraints of m_a > 3×10^-19 by examining the heating of stars by granules in the Segue 1 and 2 UFDs. Their simulation technique, outlined in <cit.>, approximates FDM granules as linear perturbations to a static potential. This results in a computationally inexpensive, relatively accurate treatment of the wave behavior of FDM in the idealized case of a spherically symmetric, equilibrium halo. While very few of these existing constraints have been confirmed with self-consistent, non-linear SP simulations, our choice of m_a=10^-23 eV is clearly inconsistent with a wide range of observational probes. As justified in Section <ref>, this is the largest mass we can feasibly simulate, so it is important to explore how our results are affected by another choice of m_a. To determine this, in addition to our m_a=10^-23 eV simulations, we have performed another set of simulations with m_a = 2.5 × 10^-24 eV (see Table <ref>). We discuss each of our FDM-specific results and their dependence on m_a in turn: Dark Matter Wake Structure: Figure <ref> shows the overdensity projections of both Fiducial FDM simulations (similar to Figure <ref>). Reducing the mass by a factor of four correspondingly increases the de Broglie wavelength of the FDM particles by a factor of four. As expected, this increases both the size and relative strength of the granule density fluctuations, with peak granule densities within the wake reaching overdensities of ∼ 2.4 (1.7) for the lower (higher) particle mass. In the low-mass case, some of the background granules reach higher overdensities than the half-max of the CDM wake with self-gravity. At higher masses than we are able to simulate, the granules would decrease in size and strength and the density field of the wake would approach the behavior of CDM. Dark Matter Wake Velocity Dispersion: In Figure <ref>, we reproduce Figure <ref> with the inclusion of our m_a = 2.5 × 10^-24 eV FDM simulation as a dash-dotted, purple line. The CDM simulation without self-gravity is removed for readability. The increased de Broglie wavelength of the low-mass simulation causes larger velocity granules, which can be seen as the increased oscillation amplitude in the profile of the low-mass FDM wake. This roughly four-fold increase in the oscillation strength is inversely proportional to the decrease in mass compared to the primary (higher) FDM mass. Despite the oscillations, the two particle masses we consider here show very similar overall/averaged behavior, i.e. when comparing the two masses tested here, our result that the dispersion signature of an FDM wake is ∼ 80% that of CDM is unchanged. We caution that this result may not hold for higher particle masses, especially as FDM phenomenology approaches CDM when m_a increases. It is, however, suggestive that the kinematic signatures of FDM wakes are less sensitive to m_a than their density field signature. 
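The factor-of-four statements above follow directly from the de Broglie scaling of the granule size, r ∼ ħ/(m_a σ̅). A quick numerical check is given below; the wind dispersion is an assumed, illustrative value, since the tabulated wind parameters are not repeated here.

```python
# Characteristic FDM granule size r ~ hbar / (m_a * sigma), in kpc
HBAR_SI = 1.0546e-34     # J s
EV_TO_KG = 1.783e-36     # kg per eV/c^2
KPC_IN_M = 3.086e19      # m per kpc

def granule_size_kpc(m_a_ev, sigma_kms):
    return HBAR_SI / (m_a_ev * EV_TO_KG * sigma_kms * 1.0e3) / KPC_IN_M

sigma_wind = 120.0  # km/s, placeholder for the tabulated wind dispersion
r_primary = granule_size_kpc(1.0e-23, sigma_wind)    # ~1.6 kpc
r_low_mass = granule_size_kpc(2.5e-24, sigma_wind)   # four times larger, ~6.4 kpc
```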
Kinematics of the Stellar Response: In <ref>, we argued that FDM granule heating is responsible for raising the velocity dispersion of the stellar wake in an FDM universe compared to a CDM universe. Following an argument similar to that of <cit.>, we can roughly estimate the extent to which granule heating is expected to operate within our windtunnel simulations: FDM granules are approximated as objects of mass δ M ≈ρ̅ r^3, where we assume that the granule overdensity fluctuation is of order unity, and the granule radius r ≈ ħ/(m_a σ̅) is set by the de Broglie wavelength associated with the FDM velocity dispersion. Thus, the FDM granules will cause a perturbation in the gravitational potential δΦ≈ G δ M / r = G ρ̅ r^2. Stars that encounter granules at a relative velocity of ∼ σ̅_* will have their velocities perturbed by δ v ≈δΦ / σ̅_* = G ρ̅ r^2 / σ̅_*. Repeated encounters would increase the velocity dispersion of the stars by Δσ_* ≈√(N δ v^2), where N ≈ σ̅_* t / r is the number of star-granule encounters during a time t. Putting all of this together gives Δσ_* ≈ √( (G^2 ρ̅^2 t / (σ̅_* σ̅^3)) (ħ/m_a)^3 ). Notably, Equation <ref> is derived assuming a uniform density and velocity dispersion of both DM and stars, i.e. similar to our initial conditions. In addition to the increase in granule density within the wake, <cit.> and <cit.> have demonstrated that FDM wakes grow additional interference fringes during the interaction with length scales set by the de Broglie wavelength associated with the wind velocity. Therefore, we do not necessarily expect Equation <ref> to hold for our simulations, but it illustrates that we may expect granule heating to become stronger for lower values of the FDM particle mass. Figure <ref> reproduces Figure <ref> but includes the low-mass FDM simulation in place of the CDM simulation without self-gravity. The leftmost two points are the same as the rightmost two points in Figure <ref>. The stellar wake's velocity dispersion is increased more by the lower-mass FDM wake when compared to CDM, confirming that granule heating becomes stronger when the FDM particle mass decreases. This demonstrates that future observations of the stellar wake's velocity dispersion may be used to place an independent constraint on m_a. Overall, we find that our choices of m_a do not affect our result that an FDM wake is ∼ 20% colder than a comparable CDM wake. We cautiously suggest that these results may hold at higher values of m_a, but emphasize the need for higher-resolution simulations conducted with values of m_a that are permitted by other astrophysical constraints to verify this conclusion. Additionally, we should expect granule heating of the stellar wake to decrease as m_a increases, and vice-versa. §.§ The Effect of the LMC's Mass In <ref>, we argued that the length of the stellar wake could be used to reveal the presence of the DM wake, as the DM wake's self-gravity enables the stellar wake to persist for longer than without self-gravity. However, the LMC's mass will affect the strength and length of the stellar wake in a manner that could be degenerate with the presence of the DM wake. To investigate this possibility, we ran two additional simulations with alternative LMC models (see Tables <ref> and <ref>) of different masses. Both of these additional simulations are run in CDM without DM self-gravity to assess whether a more or less massive LMC could cause a density enhancement in the stellar wake similar to that caused by the addition of the DM wake's gravity. 
Figure <ref> compares the density of the stellar wakes (similar to Figure <ref>) in all three simulations without self-gravity. The LMC mass differs between each column, and increases from left to right. To compare the density response in these simulations to that expected with CDM self-gravity, the contours are set at the half-maximum of the wake's density with DM self-gravity, i.e. at the same level as in Figure <ref>. Increasing the LMC mass increases the strength and length of the response. The wake produced by the Light LMC (left) barely reaches an overdensity of 0.48, while the wake produced by the Heavy LMC is ∼ 25 kpc longer than that produced by the Fiducial LMC. Therefore, we see that the LMC mass is mildly degenerate with the presence of the DM wake in setting the length of the stellar wake. However, in <ref> we showed that the DM wake's gravity lengthens the stellar wake by ∼ 50 kpc, roughly twice the increase resulting from raising the LMC mass to 2.5×10^11 M_⊙. Improved and additional independent constraints on the LMC's mass will also help break this degeneracy. For example, a different assumption for the mass of the LMC will change its orbit <cit.>. Characterizing the location and kinematics of the wake will constrain the LMC's orbit and in turn its mass. Thus, both the length and location of the wake could be used to break the degeneracy between DM gravity and the LMC's mass for a given MW model. §.§ The Effect of the Stellar Velocity Dispersion While we have so far assumed that the MW stellar halo is smooth and isotropic beyond 70 kpc from the Galactic center, the stellar halo at these distances is likely highly substructured, consisting of streams, shells, and other partially phase-mixed debris from the MW's past accretion events <cit.>. These substructures have lower local velocity dispersions than the phase-mixed component of the stellar halo and will complicate measurements of the wake's influence on halo stars <cit.>. While we leave a detailed study of the LMC and wake's impact on cold substructure for future work, we can ask whether the differences between CDM and FDM due to granule heating become more pronounced if the stars have an initially low velocity dispersion. To this end, we re-run our Fiducial simulations in CDM and FDM with DM self-gravity and a stellar velocity dispersion of σ̅_* = 30 km/s (see Table <ref>), a factor of three lower than in our Fiducial simulations. Figure <ref> shows the z-velocity dispersions of the stellar wakes in these new simulations. The black contour is set at the half-max of the CDM simulation with low stellar dispersion. To compare the size of the wake with the Fiducial simulations (in which σ̅_* = 90 km/s), we reproduce the wake boundary contours from <ref> in blue. The impact of lowering the initial stellar velocity dispersion is to narrow the wake (by ∼ 60 kpc) in both CDM and FDM. In Figure <ref> we compare the time-averaged median z-velocity dispersion within the stellar wakes in our Fiducial simulations to those with a lowered initial stellar dispersion. The strength of the response increases roughly five-fold in the low-stellar-dispersion simulations compared to the Fiducial simulations. While granule heating is still present in the low-stellar-dispersion simulations, i.e. the stellar wake is hotter in FDM than CDM, the difference between CDM and FDM is δσ_z*∼ 0.01, identical to the Fiducial simulations. 
These results suggest that the LMC and its wake will leave much stronger kinematic signatures in cold stellar substructures compared to phase-mixed populations of stars. These results warrant further testing with simulations of the LMC's infall through a substructured stellar halo. §.§ Self-Interacting Dark Matter Given the differences we find in the LMC's DM and stellar wakes between FDM and CDM, it is also worth asking whether other DM particle candidates might also impact the DM and/or stellar wake in unique ways. In particular, self-interacting DM (SIDM) (; see also and for recent reviews), in which the DM particle has some non-negligible cross section for self-scattering, has emerged as another promising alternative to CDM. In the context of DF wakes, self-scattering between DM particles could potentially alter both the density and velocity of DM particles within the wake, as well as induce a bow shock or Mach cone in the DM <cit.>. For a constant cross section of σ_χ = 1 cm^2/g, the mean free path of an SIDM particle within the wake (i.e. at ∼ twice the Fiducial wind density of 1.083 × 10^5 M_⊙/kpc^3) is ∼45 Mpc, so we do not expect the wake itself to differ from CDM in this case. However, in SIDM the LMC's DM halo would be subject to ram pressure from the MW's halo. At the central density of our LMC model's halo (∼ 7 × 10^9 M_⊙/kpc^3), the mean free path for an SIDM particle with σ_χ = 1 cm^2/g is reduced to 0.67 kpc, where the scattering may have a non-negligible effect. Complicating matters further, velocity-dependent cross sections are compatible with a much wider range of astrophysical observations than constant cross sections <cit.>. Such velocity-dependent cross sections are typically smaller for larger relative velocities, which would alter the efficacy of SIDM ram pressure as the LMC's orbital velocity changes during its infall. A detailed study of the effects of different SIDM cross-sections on the LMC's wake and DM halo along its orbit would require representing the LMC with a live halo of N-body particles, which we leave to future work. § CONCLUSIONS In this paper, we have presented a suite of windtunnel-style simulations of the LMC's DF wake. Our simulation suite compares the wake at two different points in the LMC's orbit (223 and 70 kpc from the MW), and with three different assumptions for the DM model (CDM with and without self-gravity, and FDM). We also explored the impact of the LMC alone and the LMC plus the DM wake on the MW's stellar halo using the three different DM models. Our goals were to quantify the impacts of self-gravity and the DM particle assumption on the DM wake's structure and kinematics. We also sought to determine the response of the stellar halo both with and without the gravity of a DM wake, whether different DM particles leave different signatures in the stellar wake, and if these differences are observable when considering typical observational errors. We summarize our conclusions about the DM wakes below: * The FDM and CDM (with self-gravity) wakes both reach comparable peak densities of ∼ 1.6 times higher than the background. * The inclusion of self-gravity increases the density of the CDM wake, and extends its length. The self-gravity of the DM wake cannot be ignored. The inclusion of self-gravity increases the peak overdensity of the wake by ∼ 10%, in agreement with <cit.>. In addition, the LMC DM wake sustains a density that is a factor of 1.38 times larger than the background over a distance ∼ 50 kpc larger than if self-gravity is ignored. 
The impact of self-gravity on the properties of the wake is dependent on the LMC's orbital properties and is maximized after the LMC falls within 100 kpc of the Galactic center. At larger distances, the LMC is moving at lower speeds. As such, particles spend more time under the influence of the LMC's gravity, which reduces the relative contribution of the wake's gravity to its structure. This suggests that the best possible region to search for the influence of the DM wake's gravity observationally is at Galactocentric distances of 70-100 kpc (which also avoids contamination from the Clouds themselves). * In FDM, the DM wake is more granular as the DM background grows stochastic interference patterns that interact with the LMC. While individual granules can reach much higher overdensities than are seen in the CDM wake, the overall density of the FDM wake is similar to CDM when self-gravity is included. * The dispersion of the CDM wake with self-gravity is ∼1.13 times higher than the mean dispersion. The inclusion of self-gravity increases the velocity dispersion of the CDM wake by ∼ 20%. * FDM wakes are 20% colder than CDM wakes regardless of the wind speed and density. This is due to the reduced response of FDM granules to a steep gravitational potential. Consequently, FDM wakes have a granular structure in kinematic signatures as well (see e.g. Figure <ref>), compared to the smooth signatures of CDM. This result is insensitive to the FDM particle mass within the range tested here (m_a = 2.5 × 10^-24 - 10^-23 eV), suggesting this result may hold for higher FDM masses. * The DF drag forces felt by the LMC are similar in FDM and CDM when self-gravity is included. This result holds across all simulation parameters that were varied (e.g. wind speed, density: Infall vs. Fiducial case). As such, we do not expect the LMC's orbit to change in an FDM universe compared to a CDM universe. When self-gravity is turned off, the drag force is reduced by ∼ 10%, consistent with the ∼ 10% reduction in wake density when self-gravity is removed. * The LMC's DM wake reaches a mass comparable to the LMC's infall mass in the Fiducial wind case, regardless of the DM model. To a first approximation, the wake acts like an additional subhalo with a mass of ∼ 1.9 (1.5) × 10^11 M_⊙ when self-gravity is on (off) (comparable to the LMC's infall mass) that trails the LMC by ∼ 100 kpc. This implies that the wake is a non-negligible perturber to the dynamics of MW halo tracers. We summarize our conclusions about the stellar wakes below: * The stellar counterparts to the FDM and CDM (with self-gravity) wakes both reach comparable peak densities of ∼ 1.6 times higher than the background. This is similar to the behavior of the DM wakes alone. * The self-gravity of the DM wake causes the stellar wake to peak at higher densities (by 10%) and persist over larger distances behind the LMC than if there were no DM wake. The LMC's gravity will cause the formation of a stellar wake in the absence of DM. However, the stellar wake persists over a larger distance (by ∼ 50 kpc) if the DM wake self-gravity is included. * In the CDM simulation with self-gravity, the stellar wake velocity dispersion is ∼ 1.173 times higher than the mean stellar dispersion. The self-gravity of the DM wake causes the stellar wake's velocity dispersion to increase by ∼ 5%. * In the FDM simulations, scattering of stars by FDM granules increases the stellar velocity dispersion by ∼ 5%. 
Interestingly, this behavior is opposite to that of the DM wake: while the stellar wake is dynamically hotter in FDM, the DM wake is colder in FDM when compared to CDM. The effect of granule heating in the stellar wake will decrease for higher values of m_a. * Reducing the initial velocity dispersion of the stellar halo by a factor of three (from 90 km/s to 30 km/s) results in an increase in the stellar wake dispersion relative to the mean by a factor of 5 in both the CDM (with self-gravity) and FDM simulations. This implies that the LMC's wake will have a stronger imprint on the motions of cold substructures in the stellar halo than on phase-mixed halo stars. Meanwhile, the effect of FDM granule heating remains the same when the initial stellar velocity dispersion is lowered. * The angular extent of the stellar wake on the sky can indicate the existence of a DM wake. When viewed in Galactic coordinates between distances of 70-100 kpc, the stellar wake appears as an enhancement in the density and radial velocity dispersion of the stellar halo. The response appears in the Galactic southeast, traces the past orbit of the LMC, and extends up to the Galactic plane in b when the DM wake's gravity is included, in agreement with the results of . Without the gravity of the DM wake, the stellar wake decays below an overdensity of 0.34 by b ≈ -20, and the velocity dispersion enhancement decays below 5.26 km/s by b ≈ -30. Thus, the length of the stellar wake is an observational sign of the presence of a DM wake, though this is partially degenerate with the LMC's mass. Independent constraints on the LMC's mass or orbit (such as by determining the wake's location) will help break this degeneracy. * The differences in the density and velocity dispersion of the stellar wake found across the three models considered (CDM with or without self-gravity, and FDM) persist when the wake is viewed in Galactic coordinates with simulated observational errors, provided at least 10^4 (900) stars are observed across the sky (in the wake). Distinguishing FDM from CDM through measurements of the stellar wake's velocity dispersion will be difficult, as the effect of granule heating is only at the percent level. However, our results demonstrate granule heating does play a role in DF wakes and merits further study. Additionally, we find in general that the increased velocity dispersion and extent of the stellar wake is a telltale feature of a DM wake that would distinguish it from cold stellar streams, confirming the findings of . These results underscore the importance of making kinematic measurements when designing observations of the stellar wake. In this work, we have demonstrated that there are marked differences in the density structure and kinematics of the LMC's DF wake in a CDM vs. an FDM universe, but these differences may be challenging to distinguish in observations of the stellar halo. Significantly, we have also illustrated that the self-gravity of the DM wake plays a crucial role in strengthening and extending the stellar halo's response to the DM wake - providing a new avenue to test for the existence of DM. Next-generation spectroscopic surveys like DESI <cit.>, LSST/Vera Rubin Observatory <cit.>, and the Nancy Grace Roman Space Telescope are poised to provide precision radial velocity and distance measurements of increasing numbers of stars in the stellar halo. 
These measurements will provide an unprecedented window into the underlying DM distribution of our Local Group <cit.>, and potentially the nature of DM itself. § ACKNOWLEDGMENTS H.R.F. would like to thank Peter Behroozi, Arjun Dey, and Dennis Zaritsky for productive discussions that greatly improved this manuscript, Tomer Yavetz for a particularly helpful discussion on the correspondence between CDM and FDM halos, and the anonymous referee for insightful comments that improved the clarity of the paper and presentation of our results. This work is based upon High Performance Computing (HPC) resources supported by the University of Arizona TRIF, UITS, and Research, Innovation, and Impact (RII) and maintained by the UArizona Research Technologies department. H.R.F. thanks Derrick Zwickl and Chris Reidy for their assistance with MPI troubleshooting, which was made possible through University of Arizona Research Technologies Collaborative Support program. H.R.F. and G.B. are supported by NSF CAREER AST-1941096. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344. E.C.C. acknowledges support for this work provided by NASA through the NASA Hubble Fellowship Program grant HST-HF2-51502.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. F.A.G. acknowledges support from ANID FONDECYT Regular 1211370 and by the ANID BASAL project FB210003. F.A.G. acknowledges funding from the Max Planck Society through a “Partner Group” grant. C.F.P.L. acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 852839). <cit.>; (; ; ); <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; § COMPARISON OF THE WINDTUNNEL WAKE TO A WAKE IN A LIVE HALO In this Appendix, we compare our Fiducial CDM (with self-gravity) wake to our reference simulation (simulation 3 from ). Our windtunnel simulations have several important inherent differences from the full-interaction scenarios presented in : in our simulations, the LMC is represented by a fixed potential instead of as a live halo, there are no Galactic tides from the MW, the LMC “travels” on a straight line instead of a curved orbit, and the DM wind parameters are time-independent instead of varying along the orbit as the LMC plunges into the MW's halo. Therefore, it is vital to compare the CDM wake simulated in this paper to the wake in our reference Simulation 3 to ensure that our windtunnel is a reasonable laboratory to study differences between CDM and FDM and the impact of self-gravity on the wake. Figure <ref> compares the strength and size of the DM wake in 's Simulation 3 (left) to our Fiducial CDM simulation with self-gravity (right). Similar to Figure <ref>, we draw a contour enclosing the region with an overdensity greater than half the maximum overdensity in the right panel. Both wakes are very similar in strength and size, demonstrating that our windtunnel simulation framework can well-reproduce the wake formed in a full-interaction simulation. 
§ TRANSFORMATION FROM WINDTUNNEL TO GALACTIC COORDINATES To perform transformations between coordinate frames (windtunnel simulation box, Galactocentric, and Galactic), we make use of version 4.2.1 (; ; ), and adopt the definitions and conventions of Galactocentric and Galactic coordinates as given in this version of . Our steps for transforming between simulation box coordinates and Galactocentric coordinates are as follows: * For reference, we use the LMC's present-day location and velocity vector from <cit.>. The LMC's orbit, as usual, is taken from 's Simulation 3. * We rotate the simulation box such that the LMC's unit velocity vector (-ŷ in the windtunnel frame) points in the same direction as the LMC's velocity vector at r_MW = 70 kpc from the reference simulation. * Next, we translate the simulation box such that the center of the box (where the LMC potential is located) matches the present-day location of the LMC. This ensures that the LMC is as close as possible to the correct location on the sky after the next step. * A further translation matches the location of the straight windtunnel orbit and the curved orbit from the reference simulation at a Galactocentric distance of 70 kpc. Together with the rotation, this ensures the LMC's path in the windtunnel is tangent to the LMC's orbit at 70 kpc, which is the location our Fiducial wind parameters are drawn from. * Finally, the particles are given a velocity boost to remove the bulk wind velocity, i.e. to ensure the wind particles have no net motion in a Galactocentric frame. Figures <ref>, <ref>, <ref>, and <ref> in <ref> are in Galactic coordinates, and we use 's built-in functionality for transforming between Galactocentric and Galactic coordinates. When plotting the velocities of stars in Galactic coordinates, we also remove the Sun's motion about the Galactic Center. aasjournal
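A minimal sketch of the box-to-Galactic transformation outlined above is given below. The LMC velocity vector and the net translation of the box are placeholders (the actual values come from the reference orbit and the observed present-day phase-space coordinates, and the two translation steps above are collapsed into a single offset here); the bulk wind is assumed to flow along +y in the box frame, and the returned radial velocities are still in the solar-barycentric convention, so the solar reflex motion must be removed separately when plotting kinematics.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import Galactic, Galactocentric, SkyCoord

def rotation_between(a, b):
    """Rodrigues rotation matrix taking unit vector a onto unit vector b
    (assumes a and b are not anti-parallel)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v, c = np.cross(a, b), np.dot(a, b)
    K = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

# Placeholder vectors in Galactocentric coordinates (kpc, km/s):
lmc_vel_at_70kpc = np.array([-50.0, -220.0, 230.0])
box_center_offset = np.array([-1.0, -41.0, -28.0])

def box_to_galactic(pos_box_kpc, vel_box_kms, wind_speed_kms):
    """Rotate the windtunnel frame so -y points along the LMC velocity,
    translate onto the orbit, remove the bulk wind motion, and transform
    to Galactic coordinates with astropy."""
    R = rotation_between(np.array([0.0, -1.0, 0.0]), lmc_vel_at_70kpc)
    pos_gc = pos_box_kpc @ R.T + box_center_offset
    # Bulk wind is assumed to flow along +y in the box frame
    vel_gc = (vel_box_kms - np.array([0.0, wind_speed_kms, 0.0])) @ R.T

    gc = Galactocentric(x=pos_gc[:, 0] * u.kpc, y=pos_gc[:, 1] * u.kpc,
                        z=pos_gc[:, 2] * u.kpc,
                        v_x=vel_gc[:, 0] * u.km / u.s,
                        v_y=vel_gc[:, 1] * u.km / u.s,
                        v_z=vel_gc[:, 2] * u.km / u.s)
    gal = SkyCoord(gc).transform_to(Galactic())
    return gal.l.deg, gal.b.deg, gal.distance.to(u.kpc), gal.radial_velocity
```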
http://arxiv.org/abs/2306.08760v1
20230614221714
Productivity, Inputs Misallocation, and the Financial Crisis
[ "Davide Luparello" ]
econ.GN
[ "econ.GN", "q-fin.EC" ]
Productivity, Inputs Misallocation and the Financial Crisis Davide Luparello (Penn State University) First version: 8^th November, 2022 This version: July 31, 2023 =================================================================== This paper reevaluates the conventional approach to quantifying within-industry resource misallocation, typically measured by the dispersion of an input’s marginal product. My findings suggest that this statistic incorporates inherent productivity heterogeneity and idiosyncratic productivity shocks, irrespective of the input under scrutiny. Using balance sheet data from American and European manufacturing firms, I show that total factor productivity (TFP) volatility accounts for 7% of the variance in the marginal product of capital, 9% for labor, and 10% for material inputs. Consequently, this index, taken at face value, fails to identify policy-induced misallocation for any production input. To overcome this limitation, I propose a comparative analysis strategy driven by an identified policy variation. This approach allows the researcher to assess induced misallocation in relative terms whilst controlling for differences in TFP volatility. I show that the 2008 financial crisis had an uneven impact on the within-industry dispersion of the marginal product of capital across European nations, reflecting their differing financial sector maturity and suggesting the existence of financial misallocative frictions. The crisis did not affect the dispersion of the marginal product for other inputs. Keywords: misallocation, input-output models, production, cost, capital, total factor productivity, economic growth, aggregate productivity JEL Codes: E22 D24 O47 § INTRODUCTION This paper reevaluates the conventional approach to quantifying within-industry resource misallocation, typically measured by the dispersion of an input’s marginal product (<cit.> and <cit.>). My findings suggest that this statistic incorporates inherent total factor productivity (TFP) heterogeneity and idiosyncratic productivity shocks, irrespective of the input under consideration. This facet implies that, as long as it embodies economic fundamentals and uncertainty, it cannot represent, when taken at face value, a target for policy-makers aiming to mitigate broadly defined policy-induced input misallocation. To address this limitation, the paper proposes a cross-country or cross-industry comparative analysis driven by a defined policy variation. This approach allows the researcher to assess induced misallocation in relative terms whilst controlling for differences in TFP volatility. I show that the 2008 financial crisis had an uneven impact on the within-industry dispersion of the marginal product of capital across European nations, reflecting their differing financial sector maturity and suggesting the existence of financial misallocative frictions. In agreement with the standard narrative about the causes and effects of the financial crisis, I found that it did not affect the dispersion of the marginal product for other inputs. Economics literature consistently acknowledges the overwhelming and persistent heterogeneity in productivity and marginal products of inputs across firms, even within narrowly defined industries. <cit.> and <cit.> argue that the dispersion of the marginal product of inputs across firms signifies policy frictions that obstruct the optimal allocation of resources, consequently resulting in aggregate productivity losses. 
The latter authors, for instance, famously posited that decreasing the variability of the marginal (revenue) product of capital and labor in India and China to the level observed in the US could increase aggregate manufacturing TFP by 40%-60% and 30%-50%, respectively. This measure has since enjoyed widespread use in literature due to its simplicity in quantifying aggregate misallocation [For example, <cit.> employed it to study financial misallocation.]. Nevertheless, its utility in identifying its determinants is less evident (<cit.>). Several previous works have also expressed skepticism about its potential spuriousness [Several papers have noted that the measure proposed by <cit.> relies heavily on assumed structures of technology and demand (<cit.>). Further, <cit.> contends that under general conditions, the dispersion of an input's marginal product may reflect economic features such as idiosyncratic demand and cost conditions, which can hardly be deemed "misallocative". Identified determinants include idiosyncratic regulations and institutions (<cit.>), employment protection measures (<cit.>), financial frictions (<cit.>), sub-optimal endogenous firms' selection (<cit.>), noise introduced by measurement error (<cit.>), heterogeneity in production units and demand elasticities (<cit.>), factor adjustment costs and productivity information frictions (<cit.>).]. A notable contribution to this discourse comes from <cit.>. They argue that for a dynamic input like capital, a firm's uncertainty implies that a predetermined, costly allocation, which is optimal in expectation, may not remain optimal once ex-post productivity shocks occur[<cit.> shows that productivity uncertainty significantly influences firms' allocation choices and affects the business cycle.]. This realization suggests that the observed dispersion of the marginal product of capital across firms emerges as a byproduct of information asymmetries and dynamic adjustments. Indeed, the authors demonstrate that the predictions from a partial equilibrium dynamic investment model, featuring capital adjustment costs and productivity shocks, align with 80%-90% of the dispersion of the revenue marginal product of capital observed in the data. Building on their intuition, this paper demonstrates that the relationship between productivity uncertainty and heterogeneity and inputs' marginal products holds for all production inputs. Specifically, I establish that firm-level variations in TFP growth impact the marginal product of all production inputs. In turn, this interaction suggests that TFP volatility contributes to the variation of the marginal products of each input at the cross-sectional industry level. An important question consequently emerges: Can this measure retain its usefulness in a comparative perspective even if, taken at face value, it incorporates economic fundamentals and uncertainty? More specifically, if we can unambiguously identify policy variations and control for TFP volatility, can the cross-sectional dispersion of the marginal product of an input maintain its value in assessing the relative misallocative impact of the policy under scrutiny? To investigate this question, I examine the differential impact of the 2008 financial crisis on the cross-sectional dispersion of TFP growth and marginal products of inputs across various European countries, attributing these differences to heterogeneous efficiencies in their respective financial sectors. 
This analysis echoes the work of <cit.>, who formulated an open economy model of firm heterogeneity, financial frictions, and capital adjustment costs. Their model demonstrated that financially inefficient Southern European countries experienced heightened dispersion of the marginal product of capital and diminished TFP at the start of the 1990s. This situation occurred as falling real interest rates, resultant from the euro convergence process, redirected capital inflows towards less productive firms. Building on their insights, I document an extension of their results: Southern European countries also faced a greater impact from the 2008 financial crisis, which resulted in an increased TFP volatility and a heightened dispersion of the marginal product of capital in the years following the economic crisis. In this study, I apply a flexible, non-parametric gross output production function model based on <cit.>. I estimate this model using recent balance sheet data from firms in the manufacturing sector across the US and various European countries. The model accounts for TFP uncertainty, which firms encounter at different stages of their input allocation decision-making processes. Within the model, I consider material inputs as the only flexible input, meaning their allocation solves a static first-order condition under imperfect information. I take an agnostic stance on the dynamic implications of labor and capital.[This stance is not without consideration of historical evidence suggesting that labor tends to be sticky and subject to adjustment frictions, particularly when considering European countries in the post-2000 years. Since the launch of the Euro area, many reforms have been adopted to improve the flexibility of the labor market and ease employment protection legislation. However, the process has been slow, incomplete, and not supported by widespread consensus. Several protests sparked, the most notable being anti-labor reform protests in France in 2016 for the El Khomri Law and strikes in Italy in 2014 for the Jobs Act.]. Thus, the model allows for a variety of dynamic and static distortions affecting capital and labor allocations. Through the lens of this model, I theoretically trace the channels via which a counterfactual, firm-level variation in TFP growth influences the marginal products of inputs. Estimating the model yields firm-time-specific estimates for the inputs' marginal products, TFP growth, and its channels. I then estimate the impact of TFP growth and its channels on the inputs' marginal products using country-specific linear regression models with sector-time-specific fixed effects. Subsequently, I quantify and compare the proportion of cross-sectional dispersion of the inputs' marginal product explained by TFP volatility and its channels at the industry level. Lastly, using an event study approach, I leverage the temporal dimension of my estimates to assess the financial crisis's effect on the dispersion of the marginal products of inputs in financially less advanced (Southern) versus more advanced (Northern) European countries. I find a strong connection between changes in TFP growth and the marginal product of each input at the micro level. On aggregate, TFP volatility alone accounts for an overall 7% of the dispersion of the marginal product of capital, 9% for labor, and 10% for materials at the industry level. This explanatory power increases with the decomposition of TFP growth variation into its channels. 
While demonstrating that the cross-sectional industry-level dispersion of the marginal product of any input captures a significant amount of fundamental heterogeneity taken as-is, this measure still informs us about policy-induced misallocation from a comparative perspective. For example, when analyzing the Financial Crisis in Europe, I find the South experienced a 65% greater increase in TFP volatility than the North. However, it also saw the dispersion of the marginal product of capital rise by 40% more in the South, even after accounting for TFP volatility. This finding suggests that the crisis influenced the dispersion of the input's marginal product through mechanisms beyond just an increase in productivity uncertainty, with financial sector friction heterogeneity likely playing a significant role. The organization of this paper is as follows. In Section <ref>, I present the theoretical setup. Section <ref> provides details about the data used. The estimation strategy takes center stage in Section <ref>. In Section <ref>, I showcase the production function estimates and discuss the relationship between the inputs' marginal products and TFP growth at both micro and industry levels. Section <ref> delves into the application of these findings to the 2008 financial crisis. Finally, Section <ref> concludes. § THE PRODUCTION FUNCTION MODEL Let Y_jt the gross output and K_jt, L_jt and M_jt the capital, labor, and material inputs allocations of firm j at time t. Lowercase variables indicate natural logarithms. The log production function is non-parametrically specified as y_jt=f(k_jt, l_jt, m_jt)+ν_jt Where ν_jt is the exogenous log TFP term, specified as the composition of a persistent and an unexpected component (<cit.>) ν_jt=ω_jt+ε_jt From equation (<ref>), ω_jt is a persistent productivity factor that firm j perfectly observes at the beginning of period t but is unknown at time t-1. On the other hand, ε_jt represents a residual, short-term idiosyncratic output fluctuation observed only as period ends[Another observationally equivalent interpretation for ε_jt is the ex-post measurement error in the output. However, I keep the interpretation in <cit.> as a productivity forecast error.]. Let ℐ_jt the information set that firm j holds at time t before the realization of ε_jt. Moreover, call ℐ̃_jt the information set that firm j holds at the end of period t. The following assumptions characterize the persistent productivity process and the realization timing of the productivity shocks. [Persistent Productivity Markov Process]   ω_jt follows the Markovian process ω_jt=m(ω_jt-1)+η_jt m(.) is a continuous function and η_jt is a mean-independent forecast error such that ω_jt-1∈ℐ̃_jt-1, η_jt∉ℐ̃_jt-1, η_jt⊥ω_jt-1 ∀ j,t and E(η_jt|ℐ̃_jt-1)=E(η_jt)=0 E(e^η_jt|ℐ̃_jt-1) =E(e^η_jt)=ℳ Where ℳ is a scalar constant. [Productivity Shocks Realization Timing]   η_jt realizes at the beginning of period t. Then, the persistent productivity component, ω_jt, is known at the beginning of period t. The unexpected component, ε_jt, is an idiosyncratic, mean-independent, short-term output fluctuation and realizes towards the end of period t. Then, formally ω_jt∈ℐ_jt, ε_jt∉ℐ_jt, ε_jt∈ℐ̃_jt, ω_jt⊥ε_jt ∀ j,t And E(ε_jt|ℐ_jt)=E(ε_jt)=0 E(e^ε_jt|ℐ_jt)=E(e^ε_jt)=ℰ Where ℰ is a scalar constant. Furthermore, the following two assumptions characterize the inputs allocations and the pricing behavior. [Inputs Allocations]   Firm j allocates the period t capital and labor inputs, K_jt and L_jt, just before period t starts. 
Hence, these inputs are "predetermined". On the other hand, the firm allocates intermediate materials, M_jt, after knowing ω_jt but before knowing ε_jt, solving a static first-order condition. Then, intermediate materials are a "flexible" production input. [Pricing Behavior][Assumption <ref> is closely tied to the assumption on output and input markets in <cit.> (page 13). Unlike the authors, I assume firms are also price takers in the output market. Moreover, while they assume that nominal inputs prices and the final output price demand shifter follow an unspecified Markovian process, in this paper the Markovian assumption regards relative prices. ]   Firms are price takers in the output and input markets. The firm faces the final output price P_jt, and w_jt, r_jt and ρ_jt as the wage rate, the rental rate of capital, and the unit cost of materials, respectively. The input prices are paid simultaneously with each input allocation. The inputs' relative prices evolve according to an exogenous Markov process not further specified, which accounts for time-inhomogeneity and may depend on the firm's productivity via the input allocation's relevant information set. Then, the following characterizations follow without loss of generality r_jt/P_jt=r̃_jt(ℐ̃_jt-1,⋯) w_jt/P_jt=w̃_jt(ℐ̃_jt-1,⋯) ρ_jt/P_jt=ρ̃_jt(ℐ_jt,⋯) The following timeline summarizes and contextualizes the assumptions above: * Firm j ends period t-1 knowing the period's TFP ν_jt-1 and the final output Y_jt-1. That is, the firm holds information set ℐ̃_jt-1. * The firm now chooses L_jt and K_jt, period t predetermined inputs, for real wage rate w_jt/P_jt and real rental rate of capital r_jt/P_jt as a function of ℐ̃_jt-1. * Firm j enters period t. η_jt realizes. The firm now knows ω_jt and holds information set ℐ_jt. * The firm chooses the period t intermediate materials allocation, m_jt, as a function of ℐ_jt given the real material inputs price ρ_jt/P_jt, solving an intermediate inputs value-added conditional maximization problem max_M_jt [E(F(k_jt,l_jt,m_jt)e^ν_jt|ℐ_jt) - (ρ_jt/P_jt) M_jt] * Finally, right before period t ends, the firm can now observe ε_jt and, then, the period's TFP ν_jt and final output Y_jt. The firm now holds the information set ℐ̃_jt. Throughout this paper, I will consistently refer to η_jt as the 'ex-ante productivity shock' and ε_jt as the 'ex-post productivity shock', relative to the material inputs allocation. Given the timing constraints and uncertainty, TFP in this model is not necessarily Hicks-neutral. A more detailed discussion on this topic can be found in Appendix <ref>. Finally, the model exhibits flexibility due to the unspecified decision problem for the predetermined inputs. This flexibility allows the model to accommodate a wide array of dynamic and static distortions affecting capital and labor allocations. Examples are resale losses due to transaction costs, the market for lemons phenomenon, the physical cost of resale and refit costs for capital, hiring, firing, and training losses for workers (<cit.>), working capital and borrowing constraints (see <cit.> for a discussion on their relevance for shocks propagation from an international perspective), government regulations, transportation costs, subsidies, and taxes (<cit.>). §.§ Productivity Growth and the Inputs' Marginal Product TFP growth. Let the output elasticity of input X be a non-parametric function of the inputs allocation[Notice that ∂ y_jt/∂ x_jt=∂ f(k_jt,l_jt,m_jt)/∂ x_jt since ∂ν_jt/∂ x_jt=0 by the exogeneity assumptions on ν_jt.] 
elas^X(k_jt,l_jt,m_jt)=∂ f(k_jt,l_jt,m_jt)/∂ x_jt ∀ X ∈{K,L,M} Denote MP_jt^X as the marginal product of input X for firm j at time t. Using the chain rule, it follows that MP_jt^X is a function of the inverse input X share of output and the output elasticity of input X. MP^X_jt=∂ Y_jt/∂ X_jt=∂ Y_jt/∂ y_jt∂ x_jt/∂ X_jt∂ y_jt/∂ x_jt=Y_jt/X_jtelas^X(k_jt,l_jt,m_jt) Taking the natural logarithm, denote for each input mp^X_jt as mp^X_jt=log(MP^X_jt) Totally differentiating mp^X_jt with respect to TFP growth Δν_jt, where Δ denotes a difference between period t and t-1, leads to[Notice indeed that ∂mp^X_jt/∂ y_jt=-∂mp^X_jt/∂ x_jt=∂mp^X_jt/∂log elas^X(k_jt,l_jt,m_jt)=1 ] dmp^X_jt/dΔν_jt=dy_jt/dΔν_jt-dx_jt/dΔν_jt+dlog elas^X(k_jt,l_jt,m_jt)/dΔν_jt Equation (<ref>) tells that three factors determine the elasticity of an input's marginal product with respect to a hypothetical change in the firm's TFP growth. The first and second terms are the output and input allocation elasticity to TFP growth. Finally, the third term accounts for the effect of the change in TFP growth on the input's output elasticity. TFP growth channels. One can further decompose TFP growth Δν_jt as Δν_jt =(ω_jt-ω_jt-1)+(ε_jt-ε_jt-1) =m(ω_jt-1)+η_jt-ω_jt-1+ε_jt-ε_jt-1 =g(ω_jt-1)+η_jt+Δε_jt In the last line, I aggregate the terms related to the previous period persistent productivity term, ω_jt-1, in the function g. Assume for simplicity that there has been no ex-post productivity shock in the previous period (i.e., ε_jt-1=0, its unconditional mean). Then, equation (<ref>) simplifies into Δν_jt = g(ω_jt-1)+η_jt+ε_jt Using the chain and the inverse-function rules[Indeed dmp_jt^X/dΔν_jt=(dΔν_jt/dmp_jt^X)^-1=(dg(ω_jt-1)/dmp^X_jt+dη_jt/dmp^X_jt+dε_jt/dmp^X_jt)^-1 = ((§ mp^X_jt/§ω_jt-1(∂ g(ω_jt-1)/∂ω_jt-1)^-1)^-1+(§ mp^X_jt/§η_jt)^-1+(§ mp^X_jt/§ε_jt)^-1)^-1 The symbol § refers to a partial total derivative. For more details, I refer the reader to <cit.>, page 192. In this setup, § mp^X/§θ has the interpretation of a total derivative keeping the other exogenous variables fixed. For example § mp^X_jt/§ω_jt-1 = . d mp^X_jt/d ω_jt-1|_dη_jt=0,dε_jt=0 However, since ω_jt-1⊥η_jt⊥ε_jt by Assumption <ref>, I can ease the notation by replacing the partial total derivatives with simple total derivatives dmp_jt^X/dΔν_jt=((d mp^X_jt/dω_jt-1(∂ g(ω_jt-1)/∂ω_jt-1)^-1)^-1+(d mp^X_jt/d η_jt)^-1+(d mp^X_jt/d ε_jt)^-1)^-1 ], I decompose the effect of a counterfactual static variation of the firm's TFP growth on an inputs' marginal products into its channels dmp_jt^X/dΔν_jt=c (d mp_jt^X/d ω_jt-1,dmp_jt^X/dη_jt,dmp_jt^X/dε_jt) Expression (<ref>) reveals three channels that determine the impact of a counterfactual variation in TFP growth on an individual input's marginal product. The first channel ties into a variation in past TFP, known to the firm at the time of every input allocation. The second and third channels hinge on counterfactual variations in the ex-ante and ex-post productivity shocks. The firm possesses knowledge of the second channel only during material allocation, while the third channel always represents a post-allocation variation. Formally, totally differentiating mp^X_jt with respect to the TFP growth channel θ_jt leads to dmp_jt^X/dθ_jt=d y_jt/d θ_jt-dx_jt/dθ_jt+dlog elas^X(k_jt,l_jt,m_jt)/dθ_jt In Appendix <ref>, I analytically derive the effect of a counterfactual change of each TFP growth source on each input's marginal product. 
Given the timing and allocation assumptions, these channels affect the inputs' marginal products differently. § DATA §.§ US data Yearly firm-level panel data for the US are sourced from Compustat. The sample contains all publicly traded firms operating in the manufacturing sector (NAICS 31-33) between 1960-2018. The data contain a unique firm identifier, year of operation, the industrial sector at the 6-digit NAICS level, location, and standard balance sheet variables. Since the data do not report quantities, the output measure used is deflated net sales and, for the labor and capital inputs, the number of employees and the deflated total net value of property, plant, and equipment[The Chilean and Colombian data used in <cit.> do not report quantities either. The authors then resort to a similar approximation.]. I directly source these measures from the data. However, the dataset does not contain values for the cost of material inputs and the cost of labor, which I construct following <cit.>. First, I compute the labor cost by multiplying the number of employees by the average industry wage, retrieved from <cit.>. Second, I construct the cost of materials as the cost of goods sold plus administrative and selling expenses minus wage expenditure and capital depreciation. Then, I use this value deflated as a measure for material inputs allocation. I source industry-specific deflators from <cit.>. Appendix <ref> contains additional information on data cleaning. Table <ref> in Appendix <ref> shows summary statistics of the variables of interest. As many authors already pointed out (see <cit.> and <cit.>), the Compustat sample is biased towards bigger and older companies since it is restricted to only publicly traded firms. I compare the results using US data to those obtained from European balance-sheet panel data, described below, to reduce the distortions that may arise. §.§ European Data Annual firm-level harmonized balance sheet data for the European countries are sourced from BvD-Orbis. <cit.> describe the dataset and its main advantages in detail. I restrict the data to firms operating in the manufacturing sector (NACE 10-33). The bias towards bigger firms that affects Compustat data is absent in this dataset since balance sheet reporting is mandatory for small private firms in European countries, and Orbis collects administrative data from local Chambers of Commerce. I collect data from Belgium (2007-2019), Germany (2004-2018), Spain (2000-2017), France (2000-2017), Hungary (2007-2019), Italy (2000-2017), Poland (2004-2018), Portugal (2007-2019), Romania (2004-2018), Slovenia (2007-2019), Sweden (2009-2019). Like Compustat, the dataset contains a unique firm identifier, year of operation, industrial sector up to 4-digit NACE level, and standard accounting data. The output measure I use is deflated revenue. Moreover, I use the number of employees, deflated fixed assets, and deflated cost of materials to measure labor, capital, and material inputs. These quantities, as well as the cost of labor, are present in the data. Industry-specific deflators are sourced from EU KLEMS and, when unavailable, from OECD STAN. Appendix <ref> has additional dataset cleaning and construction details. Table <ref>-<ref> in Appendix <ref> shows summary statistics of the variables of interest. Finally, to ease the computational burden, when the number of individual firms for each country is too big (i.e., exceeds 5,000), I randomly draw 5,000 individual firms from the country's pool of individual firms. 
This approach has been used for France, Italy, Spain, Germany, Poland, Romania, Portugal, Slovenia, and Sweden. § ESTIMATION The model primitives align with the estimation strategy proposed by <cit.>. For each country's aggregated manufacturing sector, I estimate the production function separately, employing their innovative, non-parametric method. Detailed insights into this approach to identification and estimation appear in Appendix <ref>, with a brief overview provided below. The first step unfolds by rearranging the first-order condition (FOC) from the firm's conditional value-added maximization problem (<ref>). This process yields a non-linear equation, which is estimable using least squares methods, and allows to recover the output elasticity of material inputs and the ex-post productivity shock at the firm-time level. By integrating the material input elasticity, we can identify the portion of the production function related to material inputs. Subtracting this, along with the ex-post productivity shock, from the firm's output allows to identify the sum of the two remaining unobservables: the part of the production function unrelated to material inputs, and the firm's persistent TFP component. In the second stage, I approximate the Markovian process of the persistent component of TFP and the remaining part of the production function using polynomials. Then, I estimate the parameters of the polynomials using the Generalized Method of Moments (GMM), leveraging the orthogonality between the allocation of labor and capital and past persistent productivity on one side, and the ex-ante productivity shock on the other. The production function estimates then permit the recovery of the quantities of interest, such as input elasticities and marginal products. I compute standard errors by applying non-parametric bootstrap resampling 150 times at the firm level (and 600 times for the US). § RESULTS §.§ Production Function Estimation In Figure <ref>, I show the distribution of the estimated US firm-specific average[Average over the period the firm is active in the data.] output elasticities for each input. As one would expect, all distributions are normally shaped and mostly lie in the unit interval: 98% of the observations of capital elasticities, 89% of material inputs elasticities, and 83% of labor elasticities are in between 0 and 1[This result is in line with <cit.>. The authors use Chilean and Colombian data and estimate the elasticities distributions separately for the food products (NAICS 311), textiles (NAICS 321), apparel (NAICS 322), wood products (NAICS 331), and fabricated metal products (NAICS 381) industries. They found that, for any given industry, at most 2% of the labor and intermediate-input elasticities are outside the unit interval. In contrast, for capital, the elasticities are more concentrated on zero on average, but even in the worst case, less than 9.4% have values outside the (0,1) range.]. In the Appendix, section <ref>, I report the same distributions for the European countries in the dataset. For them, the average firm-specific elasticities are even more concentrated in the unit interval[Notice that the spike in the material inputs elasticity distribution that some of them display at 0 is a byproduct of the share regression in equation (<ref>), which forces the elasticity of material inputs to be strictly positive.]. 
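The unit-interval shares reported here are tabulated from firm-specific average elasticities. A minimal pandas sketch of that tabulation (column names are placeholders) is:

import pandas as pd

def share_in_unit_interval(panel):
    """Average each firm's elasticities over its active years and report the
    share of firms whose average falls strictly inside (0, 1), per input."""
    firm_avg = panel.groupby("firm_id")[["elas_K", "elas_L", "elas_M"]].mean()
    return ((firm_avg > 0) & (firm_avg < 1)).mean()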
Table <ref> presents the estimates for each country's average output elasticities for each input, their sum, and the ratio of the average capital and labor elasticities. Across all production inputs, capital typically exhibits the lowest elasticity, with the highest being Germany's 24%. Conversely, labor generally possesses higher elasticity than intermediate inputs, with the US as an exception, where the latter's elasticity triples the average elasticity of labor[<cit.> also observed higher average output elasticity for material inputs in their studies using Colombian and Chilean manufacturing data.]. The sum of elasticities aligns with either constant or diminishing returns to scale. Furthermore, the ratio of average capital to labor elasticities, indicative of the capital intensity relative to labor in the production technology, exhibits significant heterogeneity across countries, ranging from a low of 0.06 for Sweden to a high of 0.99 for the US. Regarding the estimation results for TFP, I assumed the systematic component of productivity follows a Markov process, which I approximated by a 3-rd order degree polynomial. ω_jt=δ_0+δ_1ω_jt-1+δ_2ω_jt-1^2+δ_3ω_jt-1^3+η_jt Table <ref> Section I presents the country-specific estimates for the parameter vector δ from equation (<ref>). For most cases, only the estimates for the parameters δ_0 and δ_1 display sufficient precision to reject the null hypothesis of equality to 0 at standard confidence levels. Moreover, the estimates for δ_1 range between 0.76 for Romania and 0.96 for France, leading to the conclusion that an AR(1) process (with drift) of high persistence closely approximates the systematic productivity component dynamics in each country. Section II of Table <ref> lists the maximum likelihood parameter estimates derived from fitting a Generalized Extreme Value distribution, denoted as GEV∼(ξ,σ,μ), to each country's pooled firm-time specific estimates of TFP levels (e^ν̂_jt). The parameter ξ shapes the distribution, while the parameters σ and μ determine the scale and location of the distribution. For every country, the estimated shape parameter is positive, indicating that Frechet distributions fit the data accurately[The assumption of a Frechet distribution for TFP is prevalent in the economic literature, especially in International Trade following the foundational work of <cit.>. In this context, the distribution is a result of estimation.]. Expectedly, Germany's TFP distribution exhibits the highest mean (4.03), closely followed by Belgium (1.49)[Given the estimates ξ̂, σ̂, and μ̂, and since 0<ξ̂<1 for each country, the expected value of the estimated GEV distribution is computed as E(X)=μ̂+σ̂(Γ(1-ξ̂)-1)/ξ̂ where Γ(.) is the Gamma function.]. The model regards the allocation of capital and labor inputs as predetermined relative to materials, but it remains agnostic about their allocation problem. Despite robust empirical evidence supporting the notion that capital is a quasi-fixed input subject to depreciation, lumpy investments, and adjustment costs (refer to <cit.>), there is no consensus regarding the nature of the labor input. In Appendix <ref>, a model-consistent test I developed allows for assessing the null hypothesis that labor is flexible. In this context, flexible means the input solves a static value-added maximization problem under imperfect information. Empirical evidence lends strong support to the rejection of this null hypothesis. 
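As a concrete illustration of how Section II of Table <ref> is produced, the following sketch fits a GEV by maximum likelihood to the pooled TFP levels and computes the implied mean from the formula above. Note that scipy parameterizes the GEV shape as c = -ξ, so the sign is flipped to recover ξ.

import numpy as np
from scipy import stats
from scipy.special import gamma

def fit_gev_tfp(tfp_levels):
    """Fit GEV(xi, sigma, mu) to pooled e^{nu_hat} and return the implied mean."""
    c, mu, sigma = stats.genextreme.fit(tfp_levels)   # scipy shape convention: c = -xi
    xi = -c
    # E[X] = mu + sigma*(Gamma(1 - xi) - 1)/xi, valid for 0 < xi < 1 (Frechet case)
    mean = mu + sigma * (gamma(1.0 - xi) - 1.0) / xi if 0 < xi < 1 else np.nan
    return xi, sigma, mu, mean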
§.§ Productivity Growth and Inputs Marginal Products This section delves into the relationship between TFP growth and the marginal products of all inputs. Initially, the exploration centers on the micro-level correlation between the marginal product of inputs and a firm's TFP growth. Subsequently, attention shifts to estimating the proportion of cross-sectional industry-level dispersion of the marginal product explained by the implied TFP volatility for all inputs. Micro-level correlation. Figure <ref> reveals a notable correlation at the firm level between the (log) marginal products of inputs and (log) TFP growth in the pooled data. A simple linear regression yields coefficients of 0.51 for capital, 0.24 for labor, and 0.33 for materials, all bearing strong statistical significance. Furthermore, the data suggest a noticeable heterogeneity in the correlation's absolute size across different countries. For instance, Section (b) of Figure <ref> indicates a comparatively lower correlation between labor's marginal product and TFP growth for the US. To accommodate industry-time-specific omitted variables, I introduce the corresponding fixed effects into a country-specific linear regression model as follows: mp_jt^X=β_devΔν_jt+ι_st+ζ_jt Here, Δν_jt again represents the shift in the firm-specific log TFP (ν_jt) from period t-1 to period t, whereas ζ_jt serves as a residual measurement error term.[This residual error term can account for measurement errors made by the econometrician or latent measurement errors in the data balance sheet variables which have been carried on in the analysis and incorporated in the TFP growth and marginal products estimates.] ι_st is a sector-time fixed effect, where the sector is identified as the three-digit level NACE for European countries or NAICS for the US. A closer examination of specification (<ref>) allows us to interpret the regression coefficient β_dev causally, in comparative static terms, as the (country-average) marginal effect of a hypothetical variation in the firm's TFP growth on the input's marginal product. Each variable in this interpretation is measured in terms of log deviations from its industry-time mean. Formally ∂ (mp_jt^X-E_st(mp_jt^X))/∂(Δν_jt-E_st(Δν_jt)) ∀ X ∈{K,L,M} where E_st(.) denotes industry-time-specific means. Table <ref> displays the results of executing regression (<ref>) for each country. Consistent with the trends observed in Figure <ref>, all production inputs' marginal products react substantially to changes in TFP growth, when measured in terms of log deviations. The US exhibits the smallest effects, although all the estimated coefficients still have high significance. I compute elasticities of 0.51 for the marginal product of capital, 0.47 for labor, and 0.36 for materials. Interestingly, the incorporation of fixed effects does not alter the overall elasticity of the marginal product of capital with respect to TFP growth. However, it does lead to larger estimated elasticities for the marginal products of labor and materials. Given the joint estimation of the inputs' marginal products and TFP, the discrepancy is primarily due to higher within-group industry-time variation than between-group variation rather than a bias caused by omitted industry-time fixed variables. As anticipated, the US exhibits the smallest effects —0.4 for capital (compared to a peak of 0.78 for Belgium), 0.23 for labor, and 0.21 for materials— with Spain showing the highest effects for both labor and materials, at 0.64 and 0.48 respectively. 
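Specification (<ref>) amounts to an OLS regression of mp^X on Δν with sector-year fixed effects and firm-clustered standard errors. A minimal sketch (column names are illustrative) that absorbs the fixed effects by within-cell demeaning is:

import statsmodels.api as sm

def fe_regression(df, y="mp_X", x="d_nu", fe=("sector", "year"), cluster="firm_id"):
    """mp_jt = beta_dev * dnu_jt + sector-year FE + error, firm-clustered SEs."""
    d = df.dropna(subset=[y, x]).copy()
    group = d.groupby(list(fe))
    d["y_dm"] = d[y] - group[y].transform("mean")   # demean by sector-year cell
    d["x_dm"] = d[x] - group[x].transform("mean")
    res = sm.OLS(d["y_dm"], d["x_dm"]).fit(
        cov_type="cluster", cov_kwds={"groups": d[cluster]})
    return res.params["x_dm"], res.bse["x_dm"]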
[!h] The Effect of TFP Growth on the Marginal Product of Inputs 1.0! 4cCapital 4cLabor 4cMaterial Inputs 2-5 6-9 10-13 [-1.8ex] 1lCountry 1lβ̂_dev R^2 Adj R^2 N 1lβ̂_dev 1cR^2 1cAdj R^2 N 1lβ̂_dev R^2 Adj R^2 N [-1.9ex] 1lUSA 0.398^*** 0.211 0.196 68,289 0.225^*** 1c0.208 1c0.190 1c58,294 0.214^*** 0.074 0.056 69,210 (0.028) (0.029) (0.026) 1lBelgium 0.776^*** 0.118 0.082 28,632 0.546^*** 1c0.265 1c0.235 1c28,631 0.439^*** 0.198 0.166 28,755 (0.083) (0.053) (0.047) 1lFrance 0.587^*** 0.223 0.199 53,833 0.585^*** 1c0.370 1c0.351 1c53,833 0.387^*** 0.371 0.351 53,835 (0.025) (0.019) (0.017) 1lGermany 0.495^*** 0.146 0.100 24,394 0.380^*** 1c0.272 1c0.232 1c24,346 0.262^*** 0.170 0.125 24,459 (0.052) (0.043) (0.044) 1lHungary 0.548^*** 0.115 0.079 29,188 0.512^*** 1c0.303 1c0.274 1c29,168 0.362^*** 0.205 0.173 29,188 (0.031) (0.025) (0.022) 1lItaly 0.521^*** 0.118 0.085 50,533 0.367^*** 1c0.172 1c0.142 1c50,465 0.303^*** 0.200 0.171 50,643 (0.028) (0.022) (0.018) 1lPoland 0.397^*** 0.113 0.080 34,611 0.430^*** 1c0.243 1c0.215 1c34,615 0.272^*** 0.259 0.232 34,625 (0.035) (0.025) (0.021) 1lPortugal 0.601^*** 0.173 0.149 35,028 0.500^*** 1c0.277 1c0.257 1c35,034 0.438^*** 0.260 0.240 35,163 (0.033) (0.017) (0.015) 1lRomania 0.466^*** 0.120 0.085 31,508 0.515^*** 1c0.272 1c0.243 1c31,426 0.451^*** 0.417 0.394 31,587 (0.016) (0.010) (0.008) 1lSlovenia 0.591^*** 0.102 0.068 26,444 0.489^*** 1c0.223 1c0.193 1c26,418 0.330^*** 0.262 0.233 26,446 (0.037) (0.021) (0.014) 1lSpain 0.644^*** 0.128 0.101 53,934 0.623^*** 1c0.252 1c0.229 1c53,923 0.476^*** 0.250 0.226 54,101 (0.028) (0.021) (0.017) 1lSweden 0.327^*** 0.115 0.071 13,228 0.419^*** 1c0.198 1c0.158 1c13,235 0.287^*** 0.190 0.149 13,261 (0.050) (0.027) (0.024) 1lAll 0.514^*** 0.238 0.207 449,622 0.466^*** 1c0.514 1c0.493 1c439,388 0.359^*** 0.279 0.249 451,273 (0.009) (0.007) (0.006) [1.1ex] * The table reports the coefficient of the linear regression in (<ref>). * The sector is identified at the 3-digit level of disaggregation (NACE for European countries, NAICS for the US). * I include country-year-industry fixed effects for the regression where all countries are pooled. * Standard errors are clustered at the individual firm level. * ^*p<0.1. * ^**p<0.05. * ^***p<0.01. Industry-level dispersion. The design in (<ref>) allows aggregation at the cross-sectional industry level. Assuming that the exogeneity assumption for the measurement error in equation (<ref>) also holds at the industry level, we can estimate the following equivalent specification of equation (<ref>) sector by sector for a given country: (mp_jt^X-mp_st^X)=β_s,dev(Δν_jt-Δν_st)+(ζ_jt-ζ_st)_ζ̃_jt where mp_st^X, Δν_st, and ζ_st are industry-time specific averages. The β_s,dev regression coefficient thus serves as an estimator for the sector-specific average of the marginal effect in (<ref>). Given the exogeneity assumption on the error term, a sum of squares transformation of equation (<ref>) produces the projection: Var_st(mp^X_jt)=β_s,varVar_st(Δν_jt)_Vol_st(ν_jt)+φ_st Here, Vol_st(ν_jt) represents the volatility of log TFP at time t and sector s, and β_s,var=β_s,dev^2 by construction. By employing the uncentered R^2 statistic used in linear regression analysis, we can compute the average proportion of cross-sectional dispersion in the marginal product of inputs explained by TFP volatility at the industry-time level. I denote this quantity as S^2, interpreting it as the share of dispersion in the marginal products of inputs captured by the projection in equation (<ref>). 
S^2=1-∑^| s|_s=1∑^| t|_t=1(Var_st(mp^X_jt)-β̂_s,varVol_st(ν_jt))^2/∑^| s|_s=1∑^| t|_t=1Var_st(mp^X_jt)^2 Notably, since I compute the coefficient parameter β̂_s,var by construction rather than by direct OLS estimation of equation (<ref>), the resulting S^2 statistic is a conservative measure and can be negative. Negative S^2 values are considered uninformative. Table <ref> shows the country-specific S^2 results. On average, the industry-time-specific TFP volatility explains 7% of the dispersion in the marginal product of capital, 9% for labor, and 10% for materials. These results exhibit notable heterogeneity across countries. As expected, the explanatory power is low for the US, averaging 2.1% for capital, 2.6% for labor, and 1% for materials. Conversely, the percentages for Portugal rise significantly: 11% for capital, 18% for labor, and 28% for materials. §.§ Decomposing TFP Growth: The Productivity Channels Building upon the TFP growth decomposition presented in Section <ref>, the analysis now pivots to ascertain how a marginal variation in each channel, holding the others constant, determines a corresponding fluctuation in the inputs' marginal products at the firm level. Further, the discussion will explore the aggregate implications of these variations. Micro-level correlation. I substitute the TFP growth term in specification (<ref>), denoted as Δν_jt, with its channels from equation (<ref>). The transformed equation (<ref>) reads: mp_jt^X=β_ωω_jt-1+β_ηη_jt+β_εΔε_jt+ι_st+ζ_jt Estimating this equation (<ref>) for capital, labor, and materials in each country separately generates the results displayed in Table (<ref>), Table (<ref>), and Table (<ref>) respectively. The marginal product of capital responds strongly to variations in the ex-post productivity shock, η_jt. The estimated overall elasticity stands at 1.17, ranging from a low of 0.6 for the US to a high of 1.83 for Romania. Comparatively, changes in past productivity (ω_jt-1) and in the growth of ex-post productivity shock (Δε_jt) induce a milder reaction with overall positive elasticities of 0.72 and 0.48, respectively. Notably, the effect of past productivity exhibits considerable heterogeneity across countries, with Romania leading the pack with the highest estimated elasticity (1.58). In contrast, the impact of the ex-post productivity shock is more evenly distributed across countries. Likewise, for capital, the marginal product of labor increases for all TFP growth channels. The overall influence of the ex-ante productivity channel mirrors that of past productivity, albeit with the ex-post productivity shock channel inducing a comparatively milder overall effect. Hence, I estimate the overall elasticities of the ex-ante productivity, past productivity, and ex-post productivity channels at 0.91, 0.85, and 0.45, respectively. Interestingly, these estimates exhibit less cross-country variation than those for capital. Lastly, for materials, the marginal product increases in response to the ex-post productivity shock across all countries except for the US and Romania. However, it decreases in response to changes in past productivity and ex-ante productivity shock. On an overall basis, the elasticity to the ex-post productivity shock and ex-ante productivity shock stands at 0.45 and 0.13, respectively. This latter result likely stems from the US data. Conversely, the overall elasticity to the past productivity channel is -0.21. Industry-level dispersion. 
Adopting a similar approach to the previous section, we can devise an equivalent specification of equation (<ref>) with sector-specific coefficients, under the assumption that the exogeneity of the error term also holds at the sector level: (mp_jt^X-mp_st^X)= β_s,ω(ω_jt-1-ω_ st-1)+β_s,η(η_jt-η_st) +β_s,ε(Δε_jt-Δε_st)+(ζ_jt-ζ_st)_ζ̃_jt In this equation, the terms mp_st^X, ω_st-1, η_st, Δε_st, and ζ_st represent industry-time specific means. A sum of squares transformation, along with the exogeneity assumption of the error term and the orthogonality assumptions on the TFP components (see Assumptions <ref> and <ref>), results in the following projection: Var_st(mp^X_jt)=β_s,ω varVar_st-1(ω_jt-1)+β_s,η varVar_st(η_jt)+β_s,ε varVar_st(Δε_jt)_Vol_st(ε_jt)+φ_st Where, by construction, β_s,ω var=β̂_s,ω^2, β_s,η var=β̂_s,η^2, β_s,ε var=β̂_s,ε^2. Using the S^2 statistic computed for each component on the right-hand side of equation (<ref>), I determine the average proportion of the cross-sectional, sectoral dispersion of the inputs' marginal products that is attributable to the variability of each TFP channel, ω_jt-1, η_jt, and Δε_jt, all else being equal. S^2_ω-1=1-∑^| s|_s=1∑^| t|_t=1(Var_st(mp^X_jt)-β_s,ω varVar_st-1(ω_jt-1))^2/∑^| s|_s=1∑^| t|_t=1Var_st(mp^X_jt)^2 S^2_η=1-∑^| s|_s=1∑^| t|_t=1(Var_st(mp^X_jt)-β_s,η varVar_st(η_jt))^2/∑^| s|_s=1∑^| t|_t=1Var_st(mp^X_jt)^2 S^2_Δε=1-∑^| s|_s=1∑^| t|_t=1(Var_st(mp^X_jt)-β_s,ε varVar_st(Δε_jt))^2/∑^| s|_s=1∑^| t|_t=1Var_st(mp^X_jt)^2 One must remember that the usual remarks regarding the S^2 statistic still apply. Moreover, the sum of S^2_ω-1, S^2_η, and S^2_Δε does not necessarily approximate the S^2 statistic given in Table <ref> for the composite TFP growth term[For example, considering the Δν_jt decomposition in equation (<ref>), Var(g(ω_jt-1))≠ Var(ω_jt-1) if g(.) is not linear or affine with unitary linear coefficient.]. Table <ref> illustrates the results. The decomposition of TFP growth into its components augments the overall explanatory power by harnessing this additional, valuable source of variation. Past productivity variability significantly accounts for the dispersion of the marginal product of inputs. For labor, this explanatory power ranges from 12% for Portugal to 47% for Belgium. Yet, the degree of explanation exhibits more heterogeneity across countries for capital and materials. In relation to capital, the explained share of variation in the marginal product oscillates between 5% for Spain and 40% for Hungary. For materials, the range extends from 2% for Spain to a substantial 77% for the US. Conversely, the ex-ante and ex-post productivity shock channels demonstrate a less pronounced explanatory power. The ex-ante channel explains between 2% (US) and 12% (Hungary and Romania) of the dispersion in the marginal product of capital, between 4% (US) and 11% (Romania) for labor, and between 0.4% (US) and 35% (Slovenia) for materials. Comparable estimates emerge for the ex-post productivity shock channel. It explains between 2% (US) and 10% (Germany) of the dispersion in the marginal product of capital, between 1% (US) and 18% (Romania) for labor, and between 2% (US) and 31% (Romania) for materials. Observe the notably limited explanatory power of the ex-ante and ex-post productivity channels for each input's marginal product dispersion, specifically in the case of Germany. Portugal also stands out due to the minimal capacity of past productivity to explain its capital's marginal product dispersion. 
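In practice, the composite S^2 and each channel-specific S^2 reduce to the same computation on sector-time aggregates: square the sector-specific slope, multiply it by the channel's cross-sectional variance, and compare the residual sum of squares with the total. A sketch (inputs are arrays of sector-time cells; names are placeholders):

import numpy as np

def s2_statistic(var_mp, var_channel, beta_dev_by_sector, sector_ids):
    """S^2 = 1 - sum_{s,t}(Var_st(mp) - beta_s^2 * Var_st(channel))^2
                 / sum_{s,t} Var_st(mp)^2.

    var_mp, var_channel : cross-sectional variances, one entry per sector-time cell
    beta_dev_by_sector  : dict mapping sector -> estimated beta_{s,dev}
    sector_ids          : sector label of each sector-time cell
    """
    beta_var = np.array([beta_dev_by_sector[s] ** 2 for s in sector_ids])
    resid = var_mp - beta_var * var_channel
    return 1.0 - np.sum(resid ** 2) / np.sum(var_mp ** 2)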
The emergence of these large, negative results could stem from two causes: the cross-sectional variances of these productivity components are, on average, significantly higher than the variances of the marginal products, or the marginal product of the input is extremely elastic to changes in the productivity components, resulting in exceptionally high β_s,var. In relation to the first point, the adopted assumption holds that the productivity components are orthogonal at the firm-time level. Consequently, the projection specification in (<ref>) precludes any correlated variation between them. Nevertheless, this correlation could still manifest in the aggregate sector-level data. A more detailed exploration of this issue is available in Appendix <ref>, where I re-compute the S^2 statistics accounting for cross-sectional sectoral co-variations between TFP growth components. The integration of these correlated variations tends to enhance the explanatory power of the productivity channels overall, boosting the lowest estimated explained shares while leaving the remainder unaltered. As an outcome, each productivity channel becomes explanatory for each input's marginal product dispersion[Germany remains an exception, despite the considerable surge in the negative estimated explained shares.]. [h] S^2 - Decomposed TFP [-1.8ex] 3cCapital 3cLabor 3cMaterials 2-4 5-7 8-10 [-1.8ex] Country 1cω_-1 1cη 1lΔε 1cω_-1 1cη 1cΔε 1cω_-1 1cη 1cΔε USA 9.83% 1.65% 2.09% 26.22% 4.12% 1.17% 77.12% 0.35% 1.66% Belgium 22.40% 6.37% 8.03% 47.08% 6.95% 10.90% 15.93% 13.48% 26.50% France 38.38% 8.52% 4.12% 40.63% 9.24% 9.18% 8.61% 19.25% 19.47% Germany 14.89% -451.05% -209.84% 45.92% -35.59% -0.62% 37.76% -41.98% -160.14% Hungary 39.78% 12.20% 10.47% 32.41% 13.45% 10.88% 32.39% 11.25% 6.83% Italy 17.10% 6.01% 6.14% 31.49% 7.53% 9.04% 10.16% 8.35% 28.05% Poland 22.18% 6.85% -7.58% 25.23% 6.88% 8.45% 3.54% 2.03% 13.90% Portugal -1217.34% 6.49% 9.75% 11.59% -4.49% 15.31% 3.12% 3.18% 24.58% Romania 21.30% 12.41% 7.53% 15.89% 10.58% 18.11% 3.65% 2.92% 31.29% Slovenia 24.94% 5.70% 2.48% 25.02% 7.27% 5.66% 9.21% 35.28% 24.87% Spain 4.68% -11.39% 3.93% 16.38% 4.52% 6.86% 2.30% -7.56% 17.44% Sweden 10.38% 4.42% 4.40% 20.11% 7.62% 6.76% 3.72% 1.73% 7.91% [1.1ex] § THE 2008 FINANCIAL CRISIS The literature thoroughly examines the crisis, which stands out as the most acute economic downturn since the Great Depression of 1929. The crisis took root in the financial sector, driven by an interplay of factors such as widespread subprime mortgage sales targeting low-income homebuyers, excessive risk-taking by U.S. financial institutions engaging in both commercial and proprietary trade (enabled by the 1999 Glass-Steagall Act repeal), and the implosion of the housing financial bubble. The collapse of key banks, exemplified by Lehman Brothers' bankruptcy declaration in September 2008, eroded confidence in bank solvency and resulted in a contraction of credit availability. Consequently, the capital reallocation channel from households to firms stalled, transforming a financial-sector crisis into a global economic shock that decelerated the world economy. In 2009, GDP growth plummeted to a stunning -2.5% for the US and -4.5% for the European Monetary Union (EMU) (refer to Figure <ref>). Yet, the financial crisis wielded disparate effects across European countries. Countries' financial sectors function with varying degrees of efficiency. 
A relatively more efficient financial sector enables better capital allocation from households to firms and between firms. This efficiency also bolsters resilience and adaptation to localized financial shocks, halting their spread to real economic activity through mechanisms such as credit crunches. Observe the time trends of productivity and inputs allocations for France and Spain in Figures <ref> and <ref>. France, with a relatively efficient financial sector, shows industry-specific standard deviations for the (log) TFP level and growth and the (log) marginal products of capital, labor, and material inputs maintaining a more stable pattern over time. The most noticeable alteration in the series before and after 2008 is a gradual rise in TFP dispersion to a level 30% higher than in 2001. In contrast, Spain, with a relatively less efficient financial sector, reveals more stark contrasts. Before 2008, the dispersions of the inputs marginal products and TFP were stable, with a temporary apex in 2004 reaching nearly 20% more than their 2001 level. Post-2008, these indicators began a persistent climb, with the dispersion of the marginal product of labor and capital hitting a peak level almost 20% higher than in 2001, 50% higher for the dispersion of the marginal product of intermediates, and 70% higher for the dispersion of TFP. TFP volatility dropped by 20% in 2008 compared to 2001, while in 2015 it was 20% higher than its 2001 level. Figures <ref>-<ref> in Appendix <ref> showcase the patterns for the US, Germany, Italy, Poland, and Romania. Interestingly, while the US and Germany exhibit a stable pattern over time with no significant variation between pre- and post-2008 periods, Italy, Poland, and Romania appear relatively more unstable with some breaks, on average, after 2008. Following the categorization in the literature (<cit.>), I classify Italy and Spain as "South" countries, characterized by their financially underdeveloped status and less efficient financial sectors. France and Germany, on the other hand, comprise the "North" group, noted for their financial development and more efficient financial sectors. To gauge the financial crisis's impact on productivity volatility and the dispersion of input's marginal products in the more crisis-prone South, I use an event study approach applying the <cit.> Difference-in-Differences estimator, with the North serving as a control group. This estimator permits the inclusion of multiple periods and differentiates between short and long-term effects. I first evaluate the financial crisis's impact on the industry (log) volatility of TFP, Var_st(ν_jt-ν_jt-1). Figure <ref> showcases the results, revealing a considerable increase in TFP volatility in the South—about 65% on average across the nine years following the crisis. Confirming the satisfaction of the parallel trend assumption in the pre-crisis years, the figure depicts an immediate and lasting effect of the crisis on TFP volatility, with a significant ATT estimate of 75% in 2009 and comparable, statistically significant estimates in subsequent years. By breaking down TFP growth into its components, it becomes apparent that the composite effect largely stems from the variance of past systematic productivity, Var_st-1(ω_jt-1), and the volatility of the ex-post productivity shock, Var_st(ε_jt-ε_jt-1) (Refer to Figure <ref> in Appendix <ref> for more details). 
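The estimates reported below use the doubly-robust <cit.> estimator. Purely as an illustration of the event-study logic, and not of the estimator actually used, a two-way fixed-effects regression of industry-level log TFP volatility on South-by-year dummies can be set up as follows (column names such as log_vol_tfp, south, sector, year are placeholders):

import pandas as pd
import statsmodels.api as sm

def event_study_tfp_volatility(df, base_year=2007):
    """TWFE event-study approximation: South x year dummies (omitting the last
    pre-crisis year) plus sector and year fixed effects; the South level effect
    is absorbed by the sector dummies."""
    d = df.copy()
    event_cols = []
    for t in sorted(d["year"].unique()):
        if t == base_year:
            continue
        col = f"south_x_{t}"
        d[col] = (d["year"] == t).astype(float) * d["south"]
        event_cols.append(col)
    X = pd.concat(
        [d[event_cols],
         pd.get_dummies(d["sector"], prefix="s", drop_first=True),
         pd.get_dummies(d["year"], prefix="y", drop_first=True)],
        axis=1).astype(float)
    X = sm.add_constant(X)
    res = sm.OLS(d["log_vol_tfp"], X).fit(
        cov_type="cluster", cov_kwds={"groups": d["sector"]})
    # coefficients for t < base_year trace pre-trends; t >= base_year, dynamic effects
    return res.params[event_cols]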
Finally, I estimate the 2008 financial crisis's impact on the industry-specific (log) dispersion of the marginal products of inputs in the South. Table <ref> displays these results. Of note, the financial crisis did not significantly affect the dispersion of the marginal products of labor and materials in the South[The pre-treatment ATT also reveals statistical evidence supporting the rejection of the parallel trends assumption before the crisis for both inputs.]. A different scenario plays out for capital. As shown in column (1), the financial crisis, on average, spurred a robust and statistically significant increase in the industry-specific variance of the marginal product of capital by 33%. Further, the ATT pre-treatment shows no evidence suggesting the rejection of the parallel trends assumption for the pre-treatment years. By observing the magnitude and significance of the year-specific ATT, one can discern that the crisis's effect was not only persistent but also intensifying over time. These findings hold firm and remain virtually unaltered when controlling for industry-specific (log) TFP volatility (columns (2), (3), (5), (6), (8), and (9)) and industry-specific market concentration (columns (3), (6), and (9)), measured by the (log) Herfindahl-Hirschman Index (HHI)[I account for the (log) industry-specific HHI to control for any changes in the dispersion of the industry-specific marginal product of capital due to changes in industry-specific market concentration. Figure <ref> in the Appendix <ref> presents the <cit.> DiD estimates of the financial crisis's impact on the (log) HHI, considering the South as the treatment and the North as the control group. The results highlight that the crisis's effect has been gradual, with a relative increase in market concentration in the South peaking in 2012, but temporary.]. For robustness, I reproduce the above analysis, but this time incorporating the "East" group (comprising Poland and Romania) into the treatment group and the US into the control group. Appendix <ref> contains the results of this exercise in Table <ref>. For labor and material inputs, the conclusions remain unaltered. As for capital, the average ATT and year-specific ATTs exhibit a slightly greater magnitude, with the latter also presenting heightened statistical significance in the immediate post-crisis years. [h] North-South DID Results 0.95! 
3cVar(mp^K) 3cVar(mp^L) 3cVar(mp^M) 2-4 5-7 8-10 1c(1) 1c(2) 1c(3) 1c(4) 1c(5) 1c(6) 1c(7) 1c(8) 1c(9) 1c 1c 1c 1c 1c 1c 1c 1c 1c ATT 0.334^*** 0.335^*** 0.312^*** 0.0557 0.0212 -0.0348 0.0586 0.0816 -0.0247 (0.0921) (0.0896) (0.0884) (0.0881) (0.0876) (0.0780) (0.102) (0.104) (0.0983) [1em] [1em] ATT Pre-treatment 0.00475 -0.0125 -0.00292 0.0796^*** 0.0572^* 0.0569^** 0.0746^** 0.0811^** 0.0913^*** (0.0190) (0.0255) (0.0236) (0.0217) (0.0228) (0.0216) (0.0273) (0.0258) (0.0276) [1em] ATT 2008 0.195^* 0.209^* 0.162 -0.0542 -0.0264 -0.00929 -0.0976 -0.0750 -0.0964 (0.0976) (0.0974) (0.0830) (0.0929) (0.0758) (0.0753) (0.104) (0.109) (0.105) [1em] ATT 2009 0.279^** 0.268^* 0.248^* 0.153 0.104 0.0608 0.101 0.127 0.111 (0.105) (0.109) (0.0982) (0.104) (0.0962) (0.0867) (0.112) (0.108) (0.109) [1em] ATT 2010 0.223^* 0.221^* 0.259^** 0.0765 0.0600 0.108 0.115 0.151 0.108 (0.0965) (0.0932) (0.0947) (0.100) (0.111) (0.100) (0.120) (0.122) (0.110) [1em] ATT 2011 0.194 0.215^* 0.233^* -0.0625 -0.112 -0.114 -0.0234 0.00638 -0.136 (0.0992) (0.0999) (0.0987) (0.105) (0.105) (0.0907) (0.117) (0.124) (0.128) [1em] ATT 2012 0.163 0.171 0.226^* -0.00449 -0.0537 -0.0707 -0.0308 0.0217 -0.0568 (0.112) (0.119) (0.106) (0.105) (0.100) (0.0881) (0.149) (0.132) (0.120) [1em] ATT 2013 0.328^** 0.323^** 0.334^*** 0.157 0.114 0.0570 0.195 0.196 0.0684 (0.105) (0.109) (0.0958) (0.132) (0.129) (0.108) (0.139) (0.133) (0.133) [1em] ATT 2014 0.432^*** 0.442^*** 0.429^*** 0.208 0.169 0.0291 0.178 0.181 0.0213 (0.109) (0.113) (0.101) (0.125) (0.119) (0.105) (0.160) (0.155) (0.155) [1em] ATT 2015 0.460^*** 0.457^*** 0.431^*** 0.0315 -0.0160 -0.136 0.0638 0.0727 -0.0715 (0.131) (0.131) (0.112) (0.131) (0.122) (0.108) (0.146) (0.143) (0.141) [1em] ATT 2016 0.530^*** 0.527^*** 0.407^*** 0.0344 -0.00881 -0.0888 0.0604 0.0853 -0.0564 (0.154) (0.146) (0.115) (0.125) (0.116) (0.0956) (0.149) (0.150) (0.136) [1em] ATT 2017 0.534^*** 0.518^** 0.389^** 0.0169 -0.0182 -0.183 0.0256 0.0489 -0.138 (0.157) (0.162) (0.127) (0.152) (0.158) (0.130) (0.157) (0.163) (0.140) [1em] N 1c5,149 1c5,058 1c5,058 1c5,149 1c5,058 1c5,058 1c5,149 1c5,058 1c5,058 [1em] Controls: 9c [1em]   Vol(TFP) 1cNO 1cYES 1cYES 1cNO 1cYES 1cYES 1cNO 1cYES 1cYES   HHI 1cNO 1cNO 1cYES 1cNO 1cNO 1cYES 1cNO 1cNO 1cYES * The figure reports the <cit.> Difference-in-Difference dynamic estimates with never treated control for the effect of the financial crisis on the industry-specific variance of the inputs marginal products. * The HHI is computed, for each year and industry, over the full sample for each country. * All variables are in logs. * The treatment group comprises the South, i.e., Italy (2001 - 2017) and Spain (2001 - 2017), while the control group comprises those in the North, i.e., France (2001 - 2017) and Germany (2005 - 2017). The industry is defined at the NACE 3-digit level. * The method used is the doubly-robust <cit.> estimator. Standard errors are bootstrapped using Wild bootstrap with 999 repetitions and clustered at the panel level. § CONCLUSION This paper revisits the conventional measure of within-industry resource misallocation —namely, the dispersion of an input’s marginal product— and reveals that this statistic reflects productivity uncertainty and heterogeneity across all inputs. My analysis determines that firm-level changes in TFP growth trigger substantial responses in the marginal product of each input. 
Aggregated at the industry level, cross-sectional TFP volatility independently explains approximately 7% of the marginal product of capital dispersion, 9% for labor, and 10% for materials. These findings suggest that, since this measure inherently captures economic fundamentals and firms' imperfect information, its use as a standalone indicator of policy-induced input misallocation (and hence as a target for policy intervention) is questionable. However, my research also illustrates that, with identified policy variations, we can effectively adapt this measure for cross-country or cross-industry evaluations, thereby providing insight into relative misallocation. For instance, my research establishes that the 2008 Financial Crisis amplified the dispersion of the marginal product of capital in Southern Europe by 40% more than in Northern Europe, even after accounting for TFP volatility. This divergence reflects the respective financial sector maturity in these regions, suggesting the presence of financial misallocative frictions. Consistent with established narratives about the crisis's causes and impacts, my findings indicate no discernible effect on the dispersion of the marginal product for other inputs. In conclusion, I highlight several limitations of this study that could steer future research. First, the crisis-induced misallocation may be driven not only by financial frictions but also by idiosyncratic institutions and regulations. Second, my theoretical and applied framework does not consider heterogeneous firm-specific market power, a potent influence on the dispersion of inputs' marginal product and a potential source of misallocation that policy intervention could address. Consequently, further research is essential to construct a unified framework encompassing a variety of policy-induced and non-policy-induced determinants of the dispersion of an input's marginal product. Such a comprehensive understanding is crucial to enhancing the informative value of this measure for guiding policymaking. § PARTIALLY HICKS-NEUTRAL PRODUCTIVITY Notice that the timing allocation and the productivity uncertainty the firm faces allow TFP to be not necessarily Hicks-neutral, depending on the source of variation. For the sake of explanation, the focus is restricted to labor and materials. Only for the following, assume that labor is a flexible input, marginal costs are constant, and the production function is Cobb-Douglas. F(L_jt,M_jt)=L_jt^α M_jt^β Equating the expected marginal product of each input to its real marginal cost delivers MP^Me_jt=βY_jt/M_jte^m(ω_jt-1)+η_jtℰ=ρ/P MP^Le_jt=αY_jt/L_jte^m(ω_jt-1)ℳℰ=w/P Then, once the productivity shocks realize, the final period's marginal products are MP^M_jt=ρ/Pe^ε_jt/ℰ MP^L_jt=w/Pe^ε_jt/ℰe^η_jt/ℳ Consider first a variation in ε_jt. ε_jt is a post-allocation output fluctuation for each input considered. As such, it draws a wedge between marginal costs and final period marginal products, having an elasticity of one with the latter. Then, the final period relative marginal product between labor and materials is unchanged, meaning that this productivity variation is Hicks-neutral. Consider a TFP variation coming from η_jt. η_jt is unknown when the firm allocates labor but known when it allocates materials. For labor, like ε_jt, η_jt is an ex-post allocation fluctuation that draws a wedge between the final period marginal product and marginal cost and changes its marginal product with an elasticity of one. 
However, since η_jt is known when the firm allocates materials, a variation of it has no effect on its final period marginal product. A change in η_jt entails a change in the allocation of materials to equate expected marginal product and marginal cost. Since the marginal cost is unaffected by assumption, the expected marginal product and the final period marginal product are unaffected. Then a variation in η_jt changes the final period relative marginal product between labor and materials, and, as such, it is not Hicks-neutral. § ANALYTICAL DERIVATION OF THE PRODUCTIVITY CHANNELS To ease the notation, I eliminate the subscripts. Each variable is assumed to be j and t-specific unless it has a -1 subscript. In that case, it is j and t-1-specific. First, notice that totally differentiating the final output y with respect to the productivity channel θ leads to the following d y/d θ= d f(k,l,m)/d θ+d m(ω_-1)/d θ+d η/d θ+ d ε/d θ Notice that d f(k,l,m)/d θ= ∂ f(k,l,m)/∂ kd k/d θ+ ∂ f(k,l,m)/∂ ld l/d θ+ ∂ f(k,l,m)/∂ md m/d θ And then d y/d θ= elas^Kd k/d θ+elas^Ld l/d θ+ elas^Md m/d θ+d m(ω_-1)/d θ+d η/d θ+ d ε/d θ Second, it follows from developing the total differential of the log output elasticity of input X with respect to channel θ dlog (elas^X)/dθ=1/elas^X(d elas^X/dθ) =1/elas^X(∂^2 f(k,l,m)/∂ x∂ kd k/dθ+∂^2 f(k,l,m)/∂ x∂ ld l/dθ+∂^2 f(k,l,m)/∂ x∂ md m/dθ) =1/elas^X(∂ elas^K/∂ xd k/dθ+∂ elas^L/∂ xd l/dθ+∂ elas^M/∂ xd m/dθ) Or, equivalently dlog (elas^X)/dθ=∂log (elas^X)/∂ kd k/dθ+∂log (elas^X)/∂ ld l/dθ+∂log (elas^X)/∂ md m/dθ §.§ Material Inputs From the FOC of the conditional value-added maximization problem (<ref>), it follows that the final period marginal product of materials has a closed form MP^M=ρ/Pe^ε/ℰ Then, taking the total derivative of its log transformation yields[Remember that ℰ is assumed to be constant.] dmp^M=dlog(ρ/P)+dε Finally, notice that combining the equation above with equation (<ref>) delivers an expression for the total derivative of materials allocation dm=dy+dlog(elas^M)-dlog(ρ/P)-dε_jt Combining the above with equation (<ref>) delivers dm/dθ=dy/dθ+∂log (elas^M)/∂ kd k/dθ+∂log (elas^M)/∂ ld l/dθ+∂log (elas^M)/∂ md m/dθ-dlog(ρ/P)/dθ-dε/dθ Substituting in equation (<ref>) yields dm/dθ =elas^Kd k/d θ+elas^Ld l/d θ+ elas^Md m/d θ+d m(ω_-1)/d θ+d η/d θ+ d ε/d θ +∂log (elas^M)/∂ kd k/dθ+∂log (elas^M)/∂ ld l/dθ+∂log (elas^M)/∂ md m/dθ-dlog(ρ/P)/dθ-dε/dθ Then, finally dm/dθ=(1-elas^M-∂log (elas^M)/∂ m)^-1 ((elas^K+∂log (elas^M)/∂ k)d k/dθ+(elas^L+∂log(elas^M)/∂ l)d l/dθ+d m(ω_-1)/d θ+d η/d θ-dlog(ρ/P)/dθ) §.§.§ Past productivity channel d mp^M/d ω_-1=dlog(ρ/P)/dω_-1 By the allocation and pricing assumptions, by equation (<ref>) and by the independence of ex-post productivity shocks from past productivity levels, the past productivity channel affects the marginal product of materials only through a variation in the materials relative price ρ/P. §.§.§ Ex-ante productivity shock channel d mp^M/d η= d log(ρ/P)/dη By similar steps and with the same interpretation as for the past productivity channel. §.§.§ Ex-post productivity shock channel d mp^M/d ε= 1 The result above follows by the pricing assumption that the materials' relative price is independent of ex-post productivity shocks. The ex-post productivity shock channel is then a mere shifter of the marginal product since its realization generates only post-allocation output fluctuations. 
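The closed-form results above, namely dmp^M/dε = 1 and the past-productivity and ex-ante channels operating only through the relative price, can be checked numerically. The sketch below uses an illustrative Cobb-Douglas technology in logs and an arbitrary relative-price rule, solves the materials FOC log(elas^M) + f + ω + log ℰ = log(ρ/P) + m for m, and finite-differences mp^M; all parameter values are assumptions made for the check only.

import numpy as np
from scipy.optimize import brentq

a_k, a_l, a_m = 0.25, 0.55, 0.35     # illustrative log technology f = a_k*k + a_l*l + a_m*m
log_E = 0.0                          # log E[e^eps], normalized to 0 for the check

def solve_m(k, l, omega, log_rel_price):
    """Materials FOC: log(elas^M) + f(k,l,m) + omega + log_E = log(rho/P) + m."""
    foc = lambda m: np.log(a_m) + a_k*k + a_l*l + a_m*m + omega + log_E - log_rel_price - m
    return brentq(foc, -50.0, 50.0)

def log_mp_materials(k, l, omega_lag, eta, eps, price_fn):
    omega = 0.9 * omega_lag + eta                  # illustrative AR(1) for omega
    log_rel_price = price_fn(omega_lag, eta)
    m = solve_m(k, l, omega, log_rel_price)
    y = a_k*k + a_l*l + a_m*m + omega + eps
    return y - m + np.log(a_m)                     # mp^M = y - m + log(elas^M)

price_fn = lambda w_lag, eta: 0.1 + 0.2*w_lag + 0.05*eta   # rho/P depends on omega_{-1}, eta
base = dict(k=1.0, l=1.0, omega_lag=0.3, eta=0.0, eps=0.0, price_fn=price_fn)
h = 1e-5
print((log_mp_materials(**{**base, "eps": h}) - log_mp_materials(**base)) / h)  # approx 1
print((log_mp_materials(**{**base, "eta": h}) - log_mp_materials(**base)) / h)  # approx 0.05, i.e. dlog(rho/P)/deta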
§.§ Capital One can develop the total derivative of the marginal product of capital with respect to TFP growth channel θ in the following way d mp^K/d θ =d y/d θ-dk/d θ+dlog elas^K/d θ = elas^Kd k/d θ+elas^Ld l/d θ+ elas^Md m/d θ+d m(ω_-1)/d θ+d η/d θ+ d ε/d θ -dk/d θ+∂log (elas^K)/∂ kd k/dθ+∂log (elas^K)/∂ ld l/dθ+∂log (elas^K)/∂ md m/dθ =(elas^K+∂log (elas^K)/∂ k-1)d k/dθ+(elas^L+∂log (elas^K)/∂ l)d l/dθ +(elas^M+∂log (elas^K)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1 ((elas^K+∂log (elas^M)/∂ k)d k/dθ+(elas^L+∂log(elas^M)/∂ l)d l/dθ+d m(ω_-1)/d θ+d η/d θ-dlog(ρ/P)/dθ) +d m(ω_-1)/d θ+d η/d θ+ d ε/d θ =((elas^M+∂log (elas^K)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1(elas^K+∂log (elas^M)/∂ k) +(elas^K+∂log (elas^K)/∂ k-1))d k/dθ +((elas^M+∂log (elas^K)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1(elas^L+∂log(elas^M)/∂ l) +(elas^L+∂log (elas^K)/∂ l))d l/dθ -(elas^M+∂log (elas^K)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1dlog(ρ/P)/dθ +(1+(elas^M+∂log (elas^K)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1)d m(ω_-1)/d θ +(1+(elas^M+∂log (elas^K)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1)d η/d θ +d ε/dθ The first equality follows from substituting in equation (<ref>), and the second from equations (<ref>) and (<ref>). Substituting dm/dθ using equation (<ref>) and rearranging delivers the third equality. The fourth equality is just a linear rearrangement. §.§.§ Past productivity channel d mp^K/d ω_-1 =((elas^M+∂log (elas^K)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1(elas^K+∂log (elas^M)/∂ k) +(elas^K+∂log (elas^K)/∂ k-1))d k/dω_-1 +((elas^M+∂log (elas^K)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1(elas^L+∂log(elas^M)/∂ l) +(elas^L+∂log (elas^K)/∂ l))d l/dω_-1 -(elas^M+∂log (elas^K)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1dlog(ρ/P)/dω_-1 +(1+(elas^M+∂log (elas^K)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1)d m(ω_-1)/d ω_-1 A counterfactual change in known, past systematic productivity does not affect ex-ante and ex-post productivity shocks (dη/dω_-1=dε/dω_-1=0) by Assumption <ref>. It has a direct and indirect effect on the marginal product of capital. The indirect effect works via a variation in the allocation of the predetermined inputs, that is, capital itself and labor, and via a change in the relative price of materials, which changes the firm's expectations about the future allocation of material inputs, and then, the marginal product of capital because of inputs interconnectedness. §.§.§ Ex-ante productivity shock channel d mp^K/d η =-(elas^M+∂log (elas^K)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1dlog(ρ/P)/dη +(1+(elas^M+∂log (elas^K)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1) An ex-ante productivity shock do not change the allocation of capital and labor since they are pre-allocated by timing assumptions (d k/d η=d l/d η=0). Moreover, by Assumption <ref>, it is independent of past productivity and ex-post productivity shocks (d m(ω_-1)/d η=d ε/d η=0). Then, an ex-ante productivity shock affect the marginal product of capital via a direct effect and an indirect effect through a change in the relative price of material, which entails a change in the materials allocation and, then, a change in the marginal product of capital by input interconnectedness. §.§.§ Ex-post productivity shock channel d mp^K/d ε=1 An ex-post productivity shock do not change the allocation of any input since they are allocated before its realization by timing assumptions (d k/d ε=d l/d ε=d m/d ε=0). Moreover, by Assumption <ref>, it is independent of past productivity and ex-post productivity shocks (d m(ω_-1)/d η=d ε/d η=0). 
Finally, the relative price of material inputs that the firm faces is independent of the ex-post productivity shock by Assumption <ref> (dlog(ρ/P)/dη=0). the ex-post productivity shock is only a shifter of the marginal product since its realization generates only post-allocation output fluctuations. §.§ Labor The derivation and interpretation of the channels' equations mirror the ones developed for capital in Section <ref>. d mp^L/d θ =d y/d θ-dl/d θ+dlog elas^L/d θ = elas^Kd k/d θ+elas^Ld l/d θ+ elas^Md m/d θ+d m(ω_-1)/d θ+d η/d θ+ d ε/d θ -dl/d θ+∂log (elas^L)/∂ kd k/dθ+∂log (elas^L)/∂ ld l/dθ+∂log (elas^L)/∂ md m/dθ =(elas^K+∂log (elas^L)/∂ k)d k/dθ+(elas^L+∂log (elas^L)/∂ l-1)d l/dθ +(elas^M+∂log (elas^L)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1 ((elas^K+∂log (elas^M)/∂ k)d k/dθ+(elas^L+∂log(elas^M)/∂ l)d l/dθ+d m(ω_-1)/d θ+d η/d θ-dlog(ρ/P)/dθ) +d m(ω_-1)/d θ+d η/d θ+ d ε/d θ =((elas^M+∂log (elas^L)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1(elas^K+∂log (elas^M)/∂ k) +(elas^K+∂log (elas^L)/∂ k))d k/dθ +((elas^M+∂log (elas^L)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1(elas^L+∂log(elas^M)/∂ l) +(elas^L+∂log (elas^L)/∂ l-1))d l/dθ -(elas^M+∂log (elas^L)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1dlog(ρ/P)/dθ +(1+(elas^M+∂log (elas^L)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1)d m(ω_-1)/d θ +(1+(elas^M+∂log (elas^L)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1)d η/d θ +d ε/dθ §.§.§ Past productivity channel d mp^L/d ω_-1 =((elas^M+∂log (elas^L)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1(elas^K+∂log (elas^M)/∂ k) +(elas^K+∂log (elas^L)/∂ k))d k/dω_-1 +((elas^M+∂log (elas^L)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1(elas^L+∂log(elas^M)/∂ l) +(elas^L+∂log (elas^L)/∂ l-1))d l/dω_-1 -(elas^M+∂log (elas^L)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1dlog(ρ/P)/dω_-1 +(1+(elas^M+∂log (elas^L)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1)d m(ω_-1)/d ω_-1 §.§.§ Ex-ante productivity shock channel d mp^L/d η =-(elas^M+∂log (elas^L)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1dlog(ρ/P)/dη +(1+(elas^M+∂log (elas^L)/∂ m)(1-elas^M-∂log (elas^M)/∂ m)^-1) §.§.§ Ex-post productivity shock channel d mp^L/d ε=1 § DATA CLEANING §.§ US Data The Compustat raw dataset spans the years 1960-2022. Following <cit.>, I deleted all the firms that are not US incorporated. Then, I clean the dataset according to the industry reporting. I keep only the observations that report the industry at six-digit NAICS levels. Furthermore, I keep only firms operating in the manufacturing sector, with NAICS codes between 310000 and 340000. I remove firms that report negative or zero sales (sale), employment (emp), cost of goods sold (cogs) and sales and administrative expenses (xsga). Finally, I remove all observations whose year is after the firm's deletion date recorded in Compustat (that is, I keep only "active" firms). Due to the availability of the deflators, the sample is restricted to the years 1960-2018. Once constructed cost of employees and cost of materials variables according to <cit.>, I replace with missing values all negative values of the cost of materials variable. Now, the last major data cleaning problem left is to fill the remaining missing values of the main variables for the years the firms have been active, which threaten the dynamic structure of the production function model. To solve this issue, I resort to focusing on the "productivity sample" (<cit.>) based on the material cost variable (which is the variable with the highest number of missing values). First, I keep only the firms with at least two observations of the variable cost of materials. 
Then, I keep only firms with at least 50% of non-missing observations for this variable. These firms compose the "productivity sample". Finally, of these firms, the missing values for the variables output, labor, capital, cost of materials, and cost of employees are linearly interpolated. Interpolation is limited to the case of a maximum of three missing data points in a row. Notice that in the cases there are not enough observations to perform the interpolation (i.e., there are more than three missing points in a raw for a firm), the interpolated variable will be missing for all the years for that firm. §.§ European Data European raw balance sheet data are imported from BvD-Orbis online platform and BvD-Orbis historical data. The cleaning procedure is in line with <cit.>, using the firm's unconsolidated balance sheet data. The output variable I use is the firm's (deflated) operating revenue. Moreover, the input variables are measured using the number of employees (NEMP) for labor, (deflated) fixed assets (FASS) for capital, and (deflated) material costs (MCOST) for material inputs. The cost of employees (CEMP) is also retrieved from the BvD-Orbis. Once the variables are downloaded, the merge between the historical and current Orbis datasets calls for a harmonization of the status of the firm, if "active" or "inactive". To this end, the firm's status throughout the dataset is updated with its most recent observation. Once the set of active firms has been defined, as done with the US data, I focus on the "productivity sample" (<cit.>) based on the number of employees variable. The construction of the "productivity sample" and the filling of the missing values of the variables of interest for the years of activity of the firms is performed as described above for Compustat data. Finally, variables with no information about the industrial sector at the NACE 4-digit level and with negative or zero values for the number of employees, revenue, fixed assets, and cost of materials variables are eliminated. Again, according to the availability of the deflators, the sample is restricted to the years reported. § <CIT.> - IDENTIFICATION AND ESTIMATION Taking the FOC for M_jt in equation (<ref>) yields ∂ E(F(k_jt,l_jt,m_jt)e^v_jt|ℐ_jt)/∂ M_jt-ρ_jt/P_jt=0 Since the firm is a price taker in the output and inputs markets and given the distributional assumptions on ω_jt and ε_jt (see Assumption <ref>), it follows that the FOC reads ∂ F(k_jt,l_jt,m_jt)/∂ M_jte^ω_jtℰ-ρ_jt/P_jt=0 Taking logs, and substituting ω_jt=y_jt-f(k_jt,l_jt,m_jt)-ε_jt delivers log(ρ_jt)=log(P_jtY_jt)-f(k_jt,l_jt,m_jt)-ε_jt+logℰ+log∂ F(k_jt,l_jt,m_jt)/∂ M_jt Let s_jt=ln((ρ_jtM_jt)/(P_jtY_jt)), the (log) intermediate-input share of output of firm j at time t. Rearranging[Indeed, notice that log∂ F(k_jt,l_jt,m_jt)/∂ M_jt-f(k_jt,l_jt,m_jt) = log∂ F(k_jt,l_jt,m_jt)/∂ M_jt-log F(k_jt,l_jt,m_jt) = log(∂ F(k_jt,l_jt,m_jt)/F(k_jt,l_jt,m_jt)/∂ M_jt/M_jt)-log M_jt = log(∂ f(k_jt,l_jt,m_jt)/∂ m_jt) -m_jt Where the last equality follows from ∂ F(k_jt,l_jt,m_jt)/F(k_jt,l_jt,m_jt)/∂ M_jt/M_jt=∂ f(k_jt,l_jt,m_jt)/∂ m_jt, being the output elasticity of material inputs.] 
equation (<ref>) yields an expression of the intermediate log share as a function of the log intermediate inputs elasticity, a scalar constant, and the ex-post productivity forecast error s_jt= log(∂ f(k_jt,l_jt,m_jt)/∂ m_jt)+logℰ-ε_jt The intermediate inputs elasticity ∂ f(k_jt,l_jt,m_jt)/∂ m_jt can be approximated using a second degree, complete polynomial in the capital, labor, and material costs. Then, substituting it into equation (<ref>) yields s_jt= log([ γ_0+γ_k k_j t+γ_l l_j t+γ_m m_j t+γ_k k k_j t^2+γ_ll l_j t^2; +γ_m m m_j t^2+γ_k l k_j t l_j t+γ_k m k_j t m_j t+γ_l m l_j t m_j t ])+logℰ-ε_jt The γ parameters in equation (<ref>) can be consistently estimated via non-linear least square up to the scalar constant ℰ. Call γ̂' this estimate[Indeed, notice that, from equation (<ref>), regressing s_jt just on log([ γ_0+γ_k k_j t+γ_l l_j t+γ_m m_j t+γ_k k k_j t^2+γ_ll l_j t^2; +γ_m m m_j t^2+γ_k l k_j t l_j t+γ_k m k_j t m_j t+γ_l m l_j t m_j t ]) delivers the vector γ̂' which is an estimate for γℰ.]. However, since the (negative) regression residuals are estimates for the structural ex-post productivity forecast error ε_jt, by Assumption <ref>, one can use the estimated residuals to estimate ℰ̂=(1 /(J T)) ∑_(j, t) e^ε̂_jt. Then, having an estimate for ℰ, one can recover an estimate[γ̂ = γ̂^' / ℰ̂.] γ̂ for the parameters vector free of the constant ℰ and, then, an estimate for the intermediate input elasticity, ∂ f(k_jt,l_jt,m_jt)/∂ m_jt. Now, notice that one can invoke the Fundamental Theorem of Calculus[For a more detailed discussion about the use of integration for identification, see Section IV in <cit.>.] to derive the production function from integrating the output elasticity of material inputs ∫∂ f(k_j t, l_j t, m_j t)/∂ m_j t d m_j t=f(k_j t, l_jt, m_j t)+𝒞(k_jt, l_j t) The integral has a closed form solution since a complete, second-order polynomial approximates ∂ f(k_j t, l_j t, m_j t)/∂ m_j t. Indeed ∫∂ f(k_j t, l_j t, m_j t)/∂ m_j t d m_j t=([ γ_0+γ_k k_jt+γ_l l_j t+γ_m/2 m_j t; +γ_k k k_j t^2+γ_ll l_j t^2+γ_m m/3 m_j t^2; +γ_k l k_j t l_j t+γ_k m/2 k_j t m_j t+γ_m/2 l_j t m_jt ]) m_j t Substituting the γ parameters vector with its estimate γ̂ is enough to retrieve the sample analog for the integral, ∫∂ f(k_j t, l_j t, m_j t)/∂ m_j t d m_j t.   Finally, consider constructing the following random variable 𝒴_j t 𝒴_j t ≡ y_j t-ε_j t-∫∂ f(k_j t, l_j t, m_j t)/∂ m_j t d m_j t =y_j t-ε_j t- f(k_j t, l_j t, m_j t)-𝒞(k_jt, l_j t) =ω_j t-𝒞(k_jt, l_j t) Where the second line follows by equation (<ref>) and the third line by equations (<ref>) and (<ref>). Notice that the variable y_jt is observed. At the same time, I derived estimates in the previous steps for the ex-post productivity forecast error ε_jt and ∫∂ f(k_j t, l_j t, m_j t)/∂ m_j t d m_j t. Then, I can estimate 𝒴_j t. Call it 𝒴̂_j t.   However, from the last line of equation (<ref>), 𝒴_j t is also a function of ω_jt, the persistent component of the TFP, and 𝒞(.), the residual function from integration in equation (<ref>). Both need to be estimated. One can approximate the 𝒞(.) function and the Markovian process of the persistent component of the TFP using second and third-order degree complete polynomials. Formally[The 𝒞(.) function is normalized to contain no constant since this latter cannot be separately identified later from the mean productivity, E[ω_jt].] 
𝒞(k_j t, l_j t)=α_k k_jt+α_l l_jt+α_kkk_jt^2+α_lll_jt^2+α_klk_jtl_jt While the Markovian process for ω_jt is given by ω_jt=m(ω_j t-1)+η_jt=∑_0 ≤ a ≤ 3δ_aω_j t-1^a+η_jt Substituting equations (<ref>) and (<ref>) in (<ref>), and replacing for ω_jt-1 using the third line of equation (<ref>), yields 𝒴̂_j t =-α_k k_jt+α_l l_jt+α_kkk_jt^2+α_lll_jt^2+α_klk_jtl_jt +∑_0 ≤ a ≤ 3δ_a(𝒴̂_j t-1+α_k k_jt-1+α_l l_jt-1+α_kkk_jt-1^2+α_lll_jt-1^2+α_klk_jt-1l_jt-1)^a+η_j t Since k_jt and l_jt are predetermined at period t-1 and given Assumption <ref> regarding the information set available to firm j at the end of period t-1, the parameters vectors α and δ can be estimated from (<ref>) using an exactly identified GMM system with the following unconditional moment conditions. E[η_j t k_j t^τ_k l_j t^τ_l]=0 ∀τ_k, τ_l  such that 0<τ_k+τ_l≤2 E[η_j t𝒴̂_j t-1^a]=0 ∀ a such that 0≤ a ≤ 3 The first set of conditions contains five moments identifying the five components of the α parameter vector. The last set of moments contains four conditions identifying the four components of the δ parameter vector. Having the estimates γ̂ and α̂ for the γ and α parameters vectors allows computing the firm-time specific output elasticities for material inputs, capital, and labor. Then, respectively, elas^M(k_jt,l_jt,m_jt)=∂ f(k_jt,l_jt,m_jt)/∂ m_jt= γ̂_0+γ̂_k k_j t+γ̂_l l_j t+γ̂_m m_j t+γ̂_k k k_j t^2+γ̂_ll l_j t^2 + γ̂_m m m_j t^2+γ̂_k l k_j t l_j t+γ̂_k m k_j t m_j t+γ̂_l m l_j t m_j t elas^K(k_jt,l_jt,m_jt)=∂ f(k_jt,l_jt,m_jt)/∂ k_jt= m_jt(γ̂_k+2γ̂_kkk_jt+γ̂_lkl_jt+γ̂_mk/2m_jt) -α̂_k-2α̂_kkk_jt-α̂_lkl_jt elas^L(k_jt,l_jt,m_jt)=∂ f(k_jt,l_jt,m_jt)/∂ l_jt= m_jt(γ̂_l+2γ̂_lll_jt+γ̂_lkk_jt+γ̂_ml/2m_jt) -α̂_l-2α̂_lll_jt-α̂_lkk_jt Standard errors for the estimated parameters and functionals can be computed via non-parametric bootstrap since they are sieve M-estimators (<cit.>, <cit.>). §.§ Additional Functional Estimates Having estimated the input elasticities, one can compute the firm-time-specific inputs' marginal products using equation (<ref>). MP^M_jt=Y_jt/M_jtelas^M(k_jt,l_jt,m_jt) MP^K_jt=Y_jt/K_jtelas^K(k_jt,l_jt,m_jt) MP^L_jt=Y_jt/L_jtelas^L(k_jt,l_jt,m_jt) And, taking the natural logarithm, denote for each input X mp^X_jt=log(MP^X_jt) Moreover, notice that, from equation (<ref>), ∂m̂(ω_j,-1)/∂ω_j,-1=∂∑_0 ≤ a ≤ 3δ̂_aω_j t-1^a/∂ω_j,-1=δ̂_1+2δ̂_2ω_j,-1+3δ̂_3ω_j,-1^2 Furthermore, from equation (<ref>) ∂ĝ(ω_j,-1)/∂ω_j,-1=∂(∑_0 ≤ a ≤ 3δ̂_aω_j t-1^a-ω_j,-1)/∂ω_j,-1=δ̂_1-1+2δ̂_2ω_j,-1+3δ̂_3ω_j,-1^2 And finally, from equation (<ref>) ∂elas^M(k_j,l_j,m_j)/∂ k_j=γ̂_k+2γ̂_kkk_j+γ̂_kll_j+γ̂_kmm_j ∂elas^M(k_j,l_j,m_j)/∂ l_j=γ̂_l+2γ̂_lll_j+γ̂_klk_j+γ̂_lmm_j ∂elas^M(k_j,l_j,m_j)/∂ m_j=γ̂_m+2γ̂_mmm_j+γ̂_kmk_j+γ̂_lml_j From equation (<ref>), ∂elas^K(k_j,l_j,m_j)/∂ k_j=2m_jγ̂_kk-2α̂_kk ∂elas^K(k_j,l_j,m_j)/∂ l_j=m_jγ̂_lk-α̂_lk ∂elas^K(k_j,l_j,m_j)/∂ m_j=γ̂_k+2γ̂_kkk_j+γ̂_lkl_j+γ̂_mkm_j From equation (<ref>), ∂elas^L(k_j,l_j,m_j)/∂ k_j=m_jγ̂_lk-α̂_lk ∂elas^L(k_j,l_j,m_j)/∂ l_j=2m_jγ̂_ll-2α̂_ll ∂elas^L(k_j,l_j,m_j)/∂ m_j=γ̂_l+2γ̂_lll_j+γ̂_lkk_j+γ̂_mlm_j §.§ Estimation Algorithm and Computational Issues The first step of the estimation procedure is minimizing the residual sum of squares in equation (<ref>). This step is the most computationally demanding because: * The minimization problem is nonlinear. The minimum found is local and possibly dependent on the initial condition. * The log specification might allow for complex solutions. * Even solving point 2. 
above, if the evaluation of the function at the candidate minimum is complex, the minimization algorithm stops. I will return to point 1. later. Regarding point 2., to avoid complex solutions to the minimization problem, I use the lsqnonlin Matlab routine employing the trust-region-reflective algorithm with appropriately defined (real) lower and upper bounds for the solution. It forces the minimization routine to limit its scope only to real solutions as long as the first function evaluation is real, given the initial values. Otherwise, if a complex candidate solution is found, the algorithm stops. The classic levenberg-marquardt algorithm returns a complex solution even if real bounds are specified. Concerning point 3., The routine stops if the first evaluation of the minimizing function is complex, even using the method above. Then, I set the initial parameter values such that the logarithm's argument at the procedure's beginning is most likely positive. I can do this by setting the initial values for γ'_kk, γ'_ll, and γ'_mm positive and big enough. These parameters are key in avoiding this issue because they multiply the only variables that are always strictly positive (k^2_jt, l^2_jt, and m^2_jt). Once estimated the share regression in equation (<ref>) and having computed γ̂∈R^10, the second step of the estimation procedure involves estimating the parameters vectors α∈R^5 in equation (<ref>) and δ∈R^4 in equation (<ref>). I use an inner loop-outer loop routine as suggested by <cit.>. * I consider an initial guess for α_0. * Given α_0, I construct 𝒞̃_2(k_j t, l_j t)=∑_0<τ_k+τ_l≤ 2α_0,τ_k, τ_l k_j t^τ_k l_j t^τ_l. * Construct ω_jt(α_0)=𝒴̂_j t+𝒞̃_2(k_j t, l_j t). 𝒴̂_j t is observable, given the estimates for γ̂ and equation (<ref>). * Regress ω_jt(α_0) on a third order sieve in ω_jt-1(α_0) (I use lsqnonlin again). * Get an estimate for δ̂(α_0). * Given α_0, δ̂(α_0) and ω_jt(α_0), I get η̂ from (<ref>), that is η̂=𝒴̂_j t+∑_0<τ_k+τ_l≤ 2α_τ_k, τ_l k_j t^τ_k l_j t^τ_l-∑_0 ≤ a ≤ 3δ_a(α_0)ω_jt-1^a(α_0) * Use the unconditional moment conditions E[η_j t k_j t^τ_k l_j t^τ_l]=0 ∀ τ_k, τ_l such that 0<τ_k+τ_l≤2 to estimate α via an exactly identified GMM system (I use the Matlab routine fminsearch). * Update the guess α_0 with the newly estimated α. * Iterate until convergence. A final consideration regards the sample size. In particular, given the initial values, the relatively large size of the countries' samples negatively affects the time of convergence in both estimation steps and the quality of convergence to a local minimum in the first step. Taking initial guesses for the γ and α parameters vectors as close as possible to the "true" ones would solve this problem. Then, I randomly draw a training panel subsample of 500 firms from the sample dataset for each country, and I perform the entire estimation procedure using arbitrary initial guesses. The relatively small sample size implies that the convergence time for both steps is significantly reduced. Moreover, I check that the local minimum found in the first step of the estimation procedure is stable by checking the solutions with different arbitrary initial values. Finally, I use the estimated parameters γ̂ and α̂ of the training subsample as initial values for the estimation procedure on the bigger sample (though I manually adjust the initial guess for γ to avoid incurring in problem at point 3.). 
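For concreteness, the two estimation steps described above can be sketched in MATLAB as follows. This is a schematic illustration rather than the replication code: all variable names, initial values, and tolerances are placeholders, the synthetic stand-in data at the top should be replaced by the stacked estimation sample, and a simple guard on the logarithm's argument is used instead of the bound-based safeguard discussed in point 3.

% ---- stand-ins so the sketch runs; replace with the stacked firm-time sample ----
poly2  = @(g,k,l,m) g(1) + g(2)*k + g(3)*l + g(4)*m + g(5)*k.^2 + g(6)*l.^2 ...
                  + g(7)*m.^2 + g(8)*k.*l + g(9)*k.*m + g(10)*l.*m;
n = 500;  k = randn(n,1);  l = randn(n,1);  m = randn(n,1);
g_true = [1; .1; .1; .1; .05; .05; .05; .02; .02; .02];
s = log(poly2(g_true,k,l,m)) - 0.05*randn(n,1);        % model-consistent synthetic shares
Yhat = randn(n,1); Yhat_lag = randn(n,1); k_lag = randn(n,1); l_lag = randn(n,1);

% ---- Step 1: share regression, non-linear least squares ----
resid1 = @(g) s - log(max(poly2(g,k,l,m), realmin));   % guard keeps the log argument positive
g0  = [1; zeros(3,1); ones(3,1); zeros(3,1)];          % squared terms start positive (point 3.)
lb  = -Inf(10,1);  ub = Inf(10,1);                     % real bounds keep the solution real
opt = optimoptions('lsqnonlin','Algorithm','trust-region-reflective','Display','off');
gamma_prime = lsqnonlin(resid1, g0, lb, ub, opt);      % estimate of gamma * E

eps_hat   = -resid1(gamma_prime);                      % negative residuals estimate eps_jt
E_hat     = mean(exp(eps_hat));                        % E_hat = (1/(JT)) * sum exp(eps_hat)
gamma_hat = gamma_prime / E_hat;                       % gamma free of the constant E

% ---- Step 2: outer loop over alpha (GMM), inner sieve regression for delta ----
Cfun      = @(a,k,l) a(1)*k + a(2)*l + a(3)*k.^2 + a(4)*l.^2 + a(5)*k.*l;  % no constant
alpha_hat = fminsearch(@(a) gmm_obj(a,Yhat,Yhat_lag,k,l,k_lag,l_lag,Cfun), zeros(5,1));

function Q = gmm_obj(a, Y, Ylag, k, l, klag, llag, Cfun)
    om    = Y    + Cfun(a,k,l);                        % omega_jt(alpha)
    omlag = Ylag + Cfun(a,klag,llag);                  % omega_jt-1(alpha)
    D     = [ones(size(omlag)) omlag omlag.^2 omlag.^3];
    delta = D \ om;                                    % inner step: 3rd-order sieve (plain LS here)
    eta   = om - D*delta;                              % productivity innovation eta_jt
    Z     = [k l k.^2 l.^2 k.*l];                      % instruments predetermined at t-1
    gbar  = mean(Z .* eta, 1).';                       % the five unconditional moments
    Q     = gbar.' * gbar;                             % exactly identified: drive moments to zero
end

In the actual routine the inner sieve regression is itself run with lsqnonlin and the whole two-step procedure is re-estimated on each bootstrap re-sample to obtain the standard errors.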
Comfortingly, the estimated parameters γ̂ and α̂ for the bigger sample are close enough to the one found using the training subsample and used as initial values. Regarding asymptotic inference, the standard errors for the relevant parameters and functions are computed using non-parametric bootstrap drawing 150 times (600 times for the US) with replacement from each country's pooled individual firms identifiers. To decrease the computational time, I parallelize the routines over 40 cores using the Matlab command parpool. Related MATLAB codes will be available on GitHub. § TESTING FOR FLEXIBLE LABOR Given the primitives, the production function model provides a framework to test if labor is a flexible (i.e., solves a limited information static first order condition) production input. Indeed, labor is only assumed to be predetermined. On the other hand, I made no assumption regarding its dynamic implications. Assume that labor allocation is predetermined but flexible. Then, the firm solves the following value-added maximization problem at the end of period t-1 _L_jt[E(F(k_jt,l_jt,m_jt)e^ν_jt-w_jt/P_jt L_jt|ℐ̃_jt-1)] Assuming that firms are price takers in the output and input markets, this leads to the following FOC[Notice that, since labor is predetermined in period t-1, differently from equations (<ref>) and (<ref>) in Appendix <ref>, the systematic productivity term ω_jt in ν_jt cannot be taken out of the expectation.] E(∂ F(k_jt,l_jt,m_jt)/∂ L_jte^ν_jt-w_jt/P_jt|ℐ̃_jt-1)=0 Multiplying and dividing by L_jt and F(k_jt,l_jt,m_jt) yields E(P_jtF(k_jt,l_jt,m_jt)e^ν_jt∂ F(k_jt,l_jt,m_jt)/F(k_jt,l_jt,m_jt)/∂ L_jt/L_jt-w_jtL_jt|ℐ̃_jt-1)=0 By equation (<ref>), and knowing that ∂ F(k_jt,l_jt,m_jt)/F(k_jt,l_jt,m_jt)/∂ L_jt/L_jt is the output elasticity of labor, rearranging E(P_jtY_jt∂ f(k_jt,l_jt,m_jt)/∂ l_jt-w_jtL_jt|ℐ̃_jt-1)=0 Finally, by the Law of Iterated Expectations, the following unconditional moment condition follows E(P_jtY_jt∂ f(k_jt,l_jt,m_jt)/∂ l_jt-w_jtL_jt)=0 The unconditional moment condition in equation (<ref>) can be used as a test to assess the null hypothesis of labor as a flexible production input. Notice that the terms P_jt Y_jt and w_jt L_jt are, respectively, the revenue and the wage bill of firm j at time t, and they are both observed in the data. Moreover, the term ∂ f(k_jt,l_jt,m_jt)/∂ l_jt is the output elasticity of labor, and it has been estimated via the baseline production function model. The associated test statistic is then T=1/N∑_(j,t)(P_jtY_jtelas^L_jt-w_jtL_jt) Where N is the sum of all firm-time observations. The results from conducting the above test for each country over the whole manufacturing sector and the periods available are shown in Table <ref>. The test rejects the null hypothesis of flexible labor at the highest confidence level for all countries in the data unless Portugal, where the null hypothesis cannot be rejected at any standard confidence level. The finding is unsurprising given the Portuguese season of reforms to make more flexible the labor market, which started in the aftermath of the 2008 financial crisis. Indeed, Portugal's labor market regulation has been a case study in the post-financial crisis years. First, in 2009, the Labor Code was reformed in a more employer-friendly fashion, allowing, for example, unilateral changes to the employee's location or function. 
Second, following the international bailout in 2011 and the instauration of the troika by the European Commission, the European Central Bank, and the International Monetary Fund in the context of the Economic Adjustment Programme, a new set of labor reforms meant to foster industrial competitiveness took effect. The new policies had the objectives, among others, of reducing labor costs by paying lower overtime, increasing working time by eliminating holidays, making it easier to layoff workers by reducing severance pay, allowing employers to make unilateral changes to agreed schedules, easing and simplifying the procedures to adjust the workforce and restructuring companies during business cycles, and annulling clauses from existing collective agreements which contradicts the new regulations (<cit.>). Apart from Portugal, only Greece, Cyprus, and Ireland were subject to the troika system, and none of the other countries in the dataset experienced such an extent of intensive policy restructuring. After the general elections in 2015, the suspension of holidays was revoked, but no significant changes to the labor regulations have been made. Furthermore, I also conducted the test on particular aggregates of manufacturing industries, namely "Motor Vehicles, Machinery, and Fabricated Metal Products" (NACE sectors 25, 28, 29, and 30; NAICS sectors 332, 333, and 336), "Food and Beverage" (NACE sectors 10, 11 and 12; NAICS sectors 311 and 312), "Chemicals" (NACE sectors 20 and 21; NAICS sector 325), "Electrical and Computers" (NACE sectors 26 and 27; NAICS sectors 334 and 335), and "Textile" (NACE sectors 13, 14 and 15; NAICS sectors 313, 314, 315 and 316)[These industry aggregates, respectively, accounted for 35%, 13%, 12%, 10%, and 3% of total manufacturing value added and 31%, 16%, 11%, 8%, and 3% of total manufacturing production value in 2014 for the European Union-28 countries (Eurostat, "Structural Business Statistics"). Similarly, concerning the USA in 2014, these industries accounted for, respectively, 28%, 12%, 16%, 15%, and 1% of total manufacturing value added and 29%, 16%, 14%, 8%, and 1% of total manufacturing gross output (U.S. Bureau of Economic Analysis, "Value Added by Industry" and "Gross Output by Industry").]. The test results are reported in Tables <ref>-<ref> in Appendix <ref>. For "Motor Vehicles, Machinery, and Fabricated Metal Products," the tests reject the null hypothesis of flexible labor for all countries of the dataset at the highest confidence level unless Portugal, where the null is rejected for any standard confidence level. The same goes for "Food and Beverage," with the only difference that the null hypothesis can be rejected for Portugal at a 10% confidence level. Concerning "Chemicals," the null hypothesis cannot be rejected for Spain and Portugal at any standard confidence level and for Germany and Italy only at the highest confidence level. For "Electrical and Computers," the null hypothesis cannot be rejected only for Portugal at any standard confidence level and Spain at a 5% confidence level or higher. Finally, for "Textile," the test cannot reject the null hypothesis of flexible labor at any standard confidence level for Portugal and the USA. 
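To make the test operational, the statistic in equation (<ref>) and a simplified (single-stage) bootstrap of its distribution can be computed along the following lines in MATLAB. The variables PY, WL, elasL, and firm_id are illustrative stacked vectors of revenue, wage bill, estimated labor elasticity, and firm identifiers (synthetic stand-ins are generated below so the sketch runs); the two-stage procedure actually used is described in the next subsection.

% stand-ins for the stacked firm-time data; replace with the estimation sample
PY = exp(randn(1000,1));  WL = 0.3*PY + 0.05*randn(1000,1);
elasL = 0.25 + 0.05*randn(1000,1);  firm_id = repelem((1:100).', 10);

% Firm-time statistic: T_jt = P_jt*Y_jt * elas^L_jt - w_jt*L_jt
T_jt   = PY .* elasL - WL;
T_stat = mean(T_jt);                                 % sample analogue of the moment in eq. (<ref>)

% Simplified one-stage cluster bootstrap over firm identifiers (B is a placeholder)
B   = 5000;
ids = unique(firm_id);
T_boot = zeros(B,1);
for b = 1:B
    draw = ids(randi(numel(ids), numel(ids), 1));    % draw firms with replacement
    Tb = [];
    for j = 1:numel(draw)
        Tb = [Tb; T_jt(firm_id == draw(j))];         %#ok<AGROW> keep all years of each drawn firm
    end
    T_boot(b) = mean(Tb);
end
ci95 = quantile(T_boot, [0.025 0.975]);              % reject H0 of flexible labor if 0 lies outside

With the estimation sample plugged in, a value of T whose bootstrap interval excludes zero is evidence against the static first-order condition for labor.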
§.§ Constructing the Two-Stages Non-parametric Bootstrap for Testing Flexible Labor For the test conducted on the whole manufacturing sector, I compute confidence intervals using a two stages non-parametric bootstrap procedure in the following steps: * For each country, I retrieve the same re-drawn samples to compute the bootstrapped standard errors in Section <ref> (150 samples for each country, 600 for the US). * For each one of these samples, I compute the firm-time level T_jt statistic. That is, I compute T_jt=P_jtY_jtelas^L_jt-w_jtL_jt. * Then, for each of these samples, I re-draw 15,000 times with replacements from the pool of firms identifiers. Then, in the end, I have 150*15,000 (or 600*15,000, for the US) samples. * In each one of these samples, I compute the average across the T_jt statistics, which delivers the T statistic in equation (<ref>) for each sample. * Confidence intervals are computed from the quantiles of the implied distribution of the T statistic. For the test conducted on particular aggregates of manufacturing industries as in Tables <ref>-<ref>, I compute confidence intervals using a two stages non-parametric bootstrap procedure in the following steps: * For each country, I retrieve the same re-drawn samples to compute the bootstrapped standard errors in Section <ref> (150 samples for each country, 600 for the US). * For each manufacturing aggregate, for each sample, I keep only the individual firms active in industries belonging to the aggregate in question. * For each one of these samples, I compute the firm-time level T_jt statistic. That is, I compute T_jt=P_jtY_jtelas^L_jt-w_jtL_jt. * Then, for each of these samples, I re-draw 5,000 times with replacements from the pool of firms identifiers active in the manufacturing aggregate. Then, in the end, I have 150*5,000 (or 600*5,000, for the US) samples. * In each one of these samples, I compute the average across the T_jt statistics, which delivers the T statistic in equation (<ref>) for each sample. * Confidence intervals are computed from the quantiles of the implied distribution of the T statistic. § ROBUSTNESS: ACCOUNTING FOR COVARIATION IN THE TFP GROWTH COMPONENTS Even if the model assumes that ω_jt-1, ε_jt and η_jt are independent of each other, the data show that these estimated quantities display non-negligible correlation on aggregate. This is evident in Figures <ref> and <ref>, which show the decomposition of the cross-sectional total manufacturing TFP volatility in its components, according to equation (<ref>), for the US, year by year. Non-neglecting the correlation between TFP components implies that a sum of squares transformation of equation (<ref>) delivers a projection similar to (<ref>), augmented with covariance terms: Var_st(mp^X_jt)=β_s,ω varVar_st-1(ω_jt-1)+β_s,η varVar_st(η_jt)+β_s,ε varVar_st(Δε_jt) +2β_s,ωηCov(ω_jt-1,η_jt) +2β_s,ωεCov(ω_jt-1,Δε_jt)+2β_s,ηεCov(η_jt,Δε_jt)+φ_st Where, by construction, β_s,ω var=β̂_s,ω^2, β_s,η var=β̂_s,η^2, β_s,ε var=β̂_s,ε^2, β_s,ωη=β̂_s,ωβ̂_s,η, β_s,ωε=β̂_s,ωβ̂_s,ε, and β_s,ηε=β̂_s,ηβ̂_s,ε. Then, one can compute the S^2 statistics from equation (<ref>) to retrieve the proportion of the industry-time specific dispersion of the inputs' marginal product explained, on average, by the total variability of each component[Notice that, by Taylor expansion, Var(g(ω_jt-1))=f(Var(ω_jt-1)).] ω_jt-1, η_jt, and Δε_jt, assuming no independent variation of the other components. 
S̃^2_ω-1=1- ∑^| s|_s=1∑^| t|_t=1(Var_st(mp^X_jt)-β_ω varVar_st(ω_jt-1)-β_s,ωηCov(ω_jt-1,η_jt)-β_s,εωCov(Δε_jt,ω_jt-1))^2/∑^| s|_s=1∑^| t|_t=1Var_st(mp^X_jt)^2 S̃^2_η=1- ∑^| s|_s=1∑^| t|_t=1(Var_st(mp^X_jt)-β_η varVar_st(η_jt)-β_s,ωηCov(ω_jt-1,η_jt)-β_s,εηCov(η_jt,Δε_jt))^2/∑^| s|_s=1∑^| t|_t=1Var_st(mp^X_jt)^2 S̃^2_Δε=1- ∑^| s|_s=1∑^| t|_t=1(Var_st(mp^X_jt)-β_ε varVar_st(Δε_jt)-β_s,εηCov(η_jt,Δε_jt)-β_s,εωCov(Δε_jt,ω_jt-1))^2/∑^| s|_s=1∑^| t|_t=1Var_st(mp^X_jt)^2 Table <ref> shows the computed S̃^2s. [h] S̃^2 - Decomposed TFP, Accounting for Covariation [-1.8ex] 3cCapital 3cLabor 3cMaterials 2-4 5-7 8-10 [-1.8ex] Country 1cω_-1 1cη 1lΔε 1cω_-1 1cη 1cΔε 1cω_-1 1cη 1cΔε USA 10.89% 1.50% 2.71% 24.26% 2.03% 0.89% 71.79% 1.05% 3.48% Belgium 19.42% 3.70% 3.76% 48.33% 8.02% 9.88% 20.46% 17.02% 26.69% France 38.77% 7.68% 3.40% 42.77% 6.33% 9.86% 16.74% 35.61% 33.86% Germany 22.93% -16.99% -29.60% 56.26% 8.67% -0.73% 39.25% -12.60% 14.82% Hungary 41.34% 11.99% 7.98% 40.75% 15.47% 9.92% 56.78% 33.87% 12.15% Italy 18.08% 5.18% 6.13% 35.06% 8.87% 12.26% 13.31% 11.46% 38.26% Poland 37.66% 11.86% 6.38% 35.82% 12.49% 12.93% 6.08% 4.66% 30.16% Portugal 19.93% 5.48% 8.12% 25.12% 9.89% 18.67% 3.51% 6.17% 30.76% Romania 20.52% 12.33% 7.94% 16.66% 9.01% 17.30% 4.11% 3.61% 35.61% Slovenia 24.22% 9.65% 4.31% 32.35% 17.30% 11.48% 13.57% 3.06% 19.93% Spain 24.36% 5.61% 3.80% 27.87% 5.47% 9.31% 13.68% 7.50% 17.04% Sweden 10.35% 4.24% 4.23% 31.84% 14.70% 9.04% 7.86% 3.75% 13.70% All 23.08% 5.85% 3.72% 30.59% 6.64% 6.68% 31.03% 14.53% 21.02% [1.1ex] § ADDITIONAL FIGURES §.§ Estimated Average Output Elasticities §.§ Evolution of MPs and TFP Dispersions §.§ Log TFP Volatility - Decomposition §.§ Log HHI § ADDITIONAL TABLES [h] North-South DID Results - Robustness 0.95! 3cVar(mp^K) 3cVar(mp^L) 3cVar(mp^M) 2-4 5-7 8-10 1c(1) 1c(2) 1c(3) 1c(4) 1c(5) 1c(6) 1c(7) 1c(8) 1c(9) 1c 1c 1c 1c 1c 1c 1c 1c 1c ATT 0.402^*** 0.413^*** 0.407^*** 0.130 0.156^* 0.125 0.0925 0.0957 0.0629 (0.0953) (0.0871) (0.0824) (0.0857) (0.0797) (0.0727) (0.0910) (0.0930) (0.0846) [1em] [1em] ATT Pre-treatment 0.00561 0.00144 0.00151 0.0556^** 0.0482^* 0.0512^** 0.0662^* 0.0673^** 0.0767^** (0.0208) (0.0212) (0.0187) (0.0207) (0.0201) (0.0179) (0.0269) (0.0254) (0.0259) [1em] ATT 2008 0.184^** 0.183^* 0.167^* 0.0251 0.0704 0.0769 -0.0281 -0.00852 -0.0154 (0.0710) (0.0758) (0.0686) (0.0830) (0.0844) (0.0817) (0.0878) (0.0892) (0.0877) [1em] ATT 2009 0.251^** 0.253^** 0.247^** 0.135 0.141 0.120 -0.0227 0.0190 0.0194 (0.0878) (0.0907) (0.0826) (0.0981) (0.104) (0.0931) (0.0953) (0.100) (0.0997) [1em] ATT 2010 0.201^* 0.224^** 0.239^** 0.104 0.114 0.119 0.0452 0.0589 0.0434 (0.0905) (0.0816) (0.0759) (0.0859) (0.0768) (0.0704) (0.101) (0.101) (0.0992) [1em] ATT 2011 0.226^* 0.248^** 0.254^** 0.0385 0.0633 0.0529 -0.0192 -0.0484 -0.0923 (0.0963) (0.0916) (0.0882) (0.0949) (0.0926) (0.0805) (0.107) (0.109) (0.0999) [1em] ATT 2012 0.342^** 0.343^*** 0.367^*** 0.187 0.220^* 0.187^* 0.0540 0.0663 0.0406 (0.110) (0.0969) (0.0954) (0.0974) (0.0862) (0.0854) (0.117) (0.113) (0.105) [1em] ATT 2013 0.440^*** 0.424^*** 0.429^*** 0.222^* 0.240^* 0.199^* 0.181 0.146 0.107 (0.116) (0.101) (0.0955) (0.113) (0.114) (0.0992) (0.115) (0.113) (0.109) [1em] ATT 2014 0.491^*** 0.498^*** 0.506^*** 0.252^* 0.276^* 0.222^* 0.185 0.183 0.136 (0.113) (0.114) (0.106) (0.108) (0.115) (0.103) (0.136) (0.135) (0.135) [1em] ATT 2015 0.571^*** 0.593^*** 0.589^*** 0.155 0.169 0.108 0.171 0.177 0.141 (0.126) (0.122) (0.111) (0.119) (0.117) (0.100) (0.127) (0.130) (0.125) [1em] ATT 2016 
0.671^*** 0.706^*** 0.660^*** 0.102 0.146 0.106 0.245 0.239 0.198 (0.145) (0.149) (0.130) (0.113) (0.132) (0.114) (0.126) (0.133) (0.121) [1em] ATT 2017 0.651^*** 0.666^*** 0.623^*** 0.0832 0.124 0.0590 0.118 0.127 0.0533 (0.157) (0.151) (0.131) (0.150) (0.168) (0.141) (0.135) (0.134) (0.128) [1em] N 1c7,072 1c6,848 1c6,848 1c7,072 1c6,848 1c6,848 1c7,072 1c6,848 1c6,848 [1em] Controls: 9c [1em]   Vol(TFP) 1cNO 1cYES 1cYES 1cNO 1cYES 1cYES 1cNO 1cYES 1cYES   HHI 1cNO 1cNO 1cYES 1cNO 1cNO 1cYES 1cNO 1cNO 1cYES * The figure reports the <cit.> Difference-in-Difference dynamic estimates with never treated control for the effect of the financial crisis on the industry-specific variance of the inputs marginal products. * The HHI is computed, for each year and industry, over the full sample for each country. * All variables are in logs. * The treatment group comprises the South, i.e., Italy (2001 - 2017) and Spain (2001 - 2017), and the East, i.e., Poland (2005 - 2017) and Romania (2005 - 2017), while the control group comprises those in the North, i.e., France (2001 - 2017) and Germany (2005 - 2017) and the US (2001- 2017). The industry is defined at the NACE 3-digit level for European countries and NAICS 3-digit level for the US. * The method used is the doubly-robust <cit.> estimator. Standard errors are bootstrapped using Wild bootstrap with 999 repetitions and clustered at the panel level.
http://arxiv.org/abs/2306.10130v1
20230616182959
Non-Contact Monitoring of Dehydration using RF Data Collected off the Chest and the Hand
[ "Hasan Mujtaba Buttar", "Kawish Pervez", "M. Mahboob Ur Rahman", "Kashif Riaz", "Qammer H. Abbasi" ]
eess.SP
[ "eess.SP", "cs.HC" ]
Non-Contact Monitoring of Dehydration using RF Data Collected off the Chest and the Hand Hasan Mujtaba Buttar1, Kawish Pervez1, M. Mahboob Ur Rahman1, Kashif Riaz1, Qammer H. Abbasi2 1 Electrical engineering department, Information Technology University, Lahore 54000, Pakistan 2Department of Electronics and Nano Engineering, University of Glasgow, Glasgow, G12 8QQ, UK 1{phdee21006, mahboob.rahman}@itu.edu.pk, [email protected] July 31, 2023 ============================================================================================================================================================================================================================================================================================================================================================================= In this work, we report for the first time a novel non-contact method for dehydration monitoring from a distance. Specifically, the proposed setup consists of a transmit software defined radio (SDR) that impinges a wideband radio frequency (RF) signal (of frequency 5.23 GHz) in the microwave band onto either the chest or the hand of a subject who sits nearby. Further, another SDR in the closed vicinity collects the RF signals reflected off the chest (or passed through the hand) of the subject. Note that the two SDRs exchange orthogonal frequency division multiplexing (OFDM) signal, whose individual subcarriers get modulated once it reflects off (passes through) the chest (the hand) of the subject. This way, the signal collected by the receive SDR consists of channel frequency response (CFR) that captures the variation in the blood osmolality due to dehydration. The received raw CFR data is then passed through a handful of machine learning (ML) classifiers which once trained, output the classification result (i.e., whether a subject is hydrated or dehydrated). For the purpose of training our ML classifiers, we have constructed our custom HCDDM-RF-5 dataset by collecting data from 5 Muslim subjects (before and after sunset) who were fasting during the month of Ramadan. Specifically, we have implemented and tested the following ML classifiers (and their variants): K-nearest neighbour (KNN), support vector machine (SVM), decision tree (DT), ensemble classifier, and neural network classifier. Among all the classifiers, the neural network classifier acheived the best classification accuracy, i.e., an accuracy of 93.8% for the proposed chest-based method, and an accuracy of 96.15% for the proposed hand-based method. Compared to the state-of-the-art (i.e., the contact-based dehydration monitoring method) where the reported accuracy is 97.83%, our proposed non-contact method is slightly inferior (as we report a maximum accuracy of 96.15%); nevertheless, the advantages of our non-contact dehydration method speak for themselves. That is, our proposed method is non-invasive and contact-less, has high accuracy, allows continuous and seamless monitoring, is easy to use, and provides rapid results. The anticipated beneficiaries of the proposed method include: sportsmen, athletes, elderly, diabetic and diarrhea patients, and labor working outdoors. dehydration, non-contact methods, RF-based methods, software-defined radio, covid19, machine learning. 
§ INTRODUCTION A good sixty percent of the human body is composed of water, which is essential to many of the body's activities, including maintaining the body's temperature, transporting nutrients and oxygen to cells, lubricating joints, and eliminating waste products. Consuming sufficient water on a daily basis is necessary for preserving one's health and warding off a variety of diseases and adverse conditions <cit.>. Dehydration occurs when the body does not obtain enough water or when the body loses water through sweating and evaporation. When dehydration occurs, it throws off the natural equilibrium of the minerals and electrolytes found in the body. This could result in a variety of different health issues, ranging from quite harmless to life-threatening, depending on how much fluid is lost and what's causing it in the first place. Symptoms of mild dehydration include headache, dry mouth, thirst, dizziness, exhaustion, and dry and wrinkled skin <cit.>,<cit.>,<cit.>. In more extreme circumstances, dehydration can result in consequences such as kidney failure, convulsions, and even death. When the outside weather is hot and humid, then the dehydration could lead to heat exhaustion which could induce symptoms such as heavy perspiration, nausea, headache, and weakness. Heat exhaustion if not addressed quickly, could escalate to heatstroke, which is a life-threatening medical emergency that can cause damage to the brain, organ failure, and even death. Last but not the least, dehydration could also have some long-term adverse effects on the body, e.g., constipation, damage to the kidneys, and infections of the urinary tract, etc. <cit.>. In short, dehydration could have fatal implications if left untreated, thus, timely diagnosis of dehydration followed by imminent medical intervention is of utmost importance. For the elderly, and for the diabetic and diarrhea patients, it is especially important to track their hydration levels frequently <cit.>. But when it comes to the existing dehydration detection methods, they have their limitations as they are either invasive (e.g., blood sample based), or contact-based (e.g., pulse oximeter, smart watch based). Further, the existing methods are expensive, inconvenient and inconsistent, as discussed below. Existing dehydration measures and the dilemma: Some of the most common methods for measuring hydration levels are: body mass change, total body water, serum and urine osmolality, plasma osmolality, urine specific gravity, and urine volume <cit.>. Another method that is sometimes considered as the "gold standard" consists of a procedure whereby a subject ingests a known quantity of an isotope, which allows one to calculate its concentration in a bodily fluid in order to determine the body's total water content. Now, the dilemma. Though such "gold standards" of hydration assessment are considered useful for sports science, medicine, or for creating reference standards, but since they necessitate extensive methodological control, they are not useful for tracking one's hydration status on daily basis during a training or competition <cit.>. In other words, none of aforementioned hydration measures has been demonstrated to be valid in all dehydration scenarios (i.e., lab and field) <cit.>. Last but not the least, many of the aforementioned hydration measures could be expensive, cumbersome, erroneous, and inconvenient (either invasive or contact-based). 
This calls for the innovative and preferably non-contact methods for dehydration monitoring, which is precisely the agenda of this work. Contributions. This paper proposes an RF-based dehydration monitoring method that is non-invasive and contact-less, has high accuracy, allows continuous and seamless monitoring, is easy to use, and provides rapid results. Specifically, the key contributions of this work are as follows: 1) We propose a novel non-contact method called chest-based dehydration monitoring (CBDM) method. Under this method, the subject sits nearby an RF transceiver that impinges an OFDM signal onto the chest of the subject, while the receiver collects the signal reflected off the chest of the subject. 2) We propose a novel non-contact method called hand-based dehydration monitoring (HBDM) method. Under this method, the subject places his/her hand on a table and between two antennas such that the transmitted OFDM signal passes through the hand of the subject, and is subsequently collected by the receiver. The raw data collected by the receiver due to both (CBDM and HBDM) methods consists of channel frequency response (CFR) that is fed to multiple machine learning (ML) classifiers which eventually determine whether a person is hydrated or dehydrated. To the best of our knowledge, this is the first work that reports a non-contact method for dehydration monitoring. Rationale. The proposed CBDM and HBDM methods rely upon the following to infer dehydration related information from the data collected off the chest and the hand of the subject: i) Dehydration results in reduced blood volume and increased blood viscosity which in turn increases the heart rate and lessens the force of the blood against the walls of the arteries. ii) OFDM signal, being a wideband signal, helps in sensing for dehydration. That is, each OFDM subcarrier captures unique signatures of dehydration due to frequency, phase and amplitude modulation of the subcarrier reflected off the human body. Both factors assist our ML classifiers in achieving high classification accuracy. Outline. The rest of this paper is organized as follows. Section II discusses the related work. Section III provides a compact discussion of the apparatus/equipment that provides the scaffolding for our proposed non-contact dehydration monitoring method. Section IV provides further details about the software and hardware setup used for data collection, specifics of each of the two proposed experiments (chest-based, and hand-based), as well as the data acquisition protocol implemented in order to construct our custom HCDDM-RF-5 dataset. Section V talks about the training and testing of various ML classifiers on our custom dataset, and provides a detailed performance analysis. Section VI concludes. § RELATED WORK The literature on dehydration monitoring is scarce, but could be broadly classified into three categories: i) invasive methods, ii) non-invasive but contact-based methods, iii) non-contact methods. First kind of methods (i.e., invasive methods) which examine blood or urine samples in order to determine the plasma and urine osmolality (and are considered as gold standard) have already been discussed in section I. Further, to the best of our knowledge, there exists no work for third kind of methods (i.e., non-contact methods) for dehydration monitoring in the open literature. Therefore, we summarize the related work on second kind of methods (i.e., non-invasive methods) only. 
§.§ Non-invasive methods for dehydration monitoring The non-invasive methods for dehydration monitoring typically employ wearable sensors (e.g., oximeters, smart watch, smart wrist-bands, etc.) that capture the photoplethysmography (PPG) and electrodermal activity (EDA) signals and pass them through various ML algorithms in order to infer the dehydration status of a subject. For example, <cit.> collects both the EDA and the PPG data from 17 subjects and feeds it to a range of ML algorithms in order to detect mild dehydration by exploiting the autonomic response to cognitive stress (induced by means of Stroop test). In <cit.>, authors collect EDA data from 16 subjects for three different body postures (sitting, standing, and walking), and pass it to a hybrid Bi-LSTM neural network in order to classify the hydration level of a subject into one of the three different states (hydrated, moderate dehydration, extreme dehydration). Authors of <cit.> utilize a miniature pulse oximeter to collect PPG data from 17 dehydrated patients admitted in emergency of a tertiary care hospital. They then extract multiple features from the acquired PPG data using the variable frequency complex demodulation algorithm, feed them to a support vector machine classifier, and report an accuracy of 67.91%. <cit.> collects the EDA data, skin temperature, heart rate and body mass index from 16 participants while they undergo a workout/physical activity known as circuit training. It then feeds this data to an empirically derived formula in order to quantify fluid loss (dehydration) caused by physical activity. In <cit.>, authors developed a real-time Android-based tool called "monitoring my dehydration" that utilizes the EDA data to learn the dehydration level of a person using machine learning techniques. They did experimental evaluation of their tool by feeding it real-world data from five users, obtaining an accuracy of 84.5%. In <cit.>, authors collect EDA data using BITalino kit from 5 subjects for three different activities by the subjects (sitting, standing, laying down), feed their data to various ML classifiers to solve the binary classification problem of dehydration detection, and report best classification accuracy of 91.3% using the random forests ML classifer. In <cit.> authors collect EDA data from several subjects under different conditions (sitting, standing), feed it to several ML classifiers to solve the binary classification problem of dehydration detection, and report a maximum accuracy of 87.78% using the simple k-NN classifier. Finally, <cit.> takes a rather different approach, and utilizes a leg skin microbiome data from 63 female subjects in order to accurately predict their skin hydration levels and several other important bio-markers. Before we conclude this section, it is imperative to have a quick discussion about the rise of non-contact methods for remote health sensing in the post-covid19 era. §.§ Non-contact methods for health sensing The non-contact methods for monitoring of body vitals gained popularity in the post-covid19 era when it was learned that the covid19 pathogen/virus could stay alive on various surfaces for longer duration, and thus, could infect a healthy individual upon contact <cit.>. This gave rise to non-contact methods which can monitor a person's vital signs from a distance, and thus, could be used for long-term and real-time monitoring of a subject without inconvenience <cit.>. 
Such methods also have the potential to decrease the number of visits to a hospital by a patient, thereby reducing the burden on healthcare systems <cit.>. Non-contact health sensing methods could be categorized into following four categories. 1) Camera-based sensing: These methods begin by recording the face and chest video of a subject from a distance and calculate vital signs by using the periodic change in skin colour to calculate the various body vitals <cit.>. 2) Radar-based sensing: These systems incorporate various kinds of radars, e.g., ultra-wideband pulse radar, frequency modulated continuous-wave radar, etc. that utilize the traditional radar principles of range and Doppler in order to estimate various body vitals <cit.>, <cit.>. 3) Wi-Fi-based sensing: Such methods exploit the extensive existing infrastructure of WiFi routers indoors to run cutting-edge ML and deep learning (DL) algorithms on the data collected off the reflections from the human subjects in order to measure body vitals <cit.>, <cit.>. 4) Software-defined radio (SDR)-based sensing: Such methods capitalize on the amplitude and phase fluctuations in the signals reflected off the human body to measure vitals <cit.>. Note that the proposed non-contact CBDM method and HBDM method both do SDR-based sensing for dehydration monitoring. However, to the best of authors' knowledge, non-contact monitoring of dehydration has not been reported in the open literature, to date. § PROPOSED APPARATUS FOR NON-CONTACT DEHYDRATION MONITORING The proposed non-contact system for dehydration monitoring is basically an RF transceiver that consists of two workstations, each connected with a software-defined radio (SDR) by means of a USB 3.0 port (see Fig. 1). Specifically, the SDR devices used for experiments are Universal Software Radio Peripheral (USRP) model B210[The USRP B210 from National Instruments covers a wide frequency range (70 MHz to 6 GHz). It can process a wideband spectrum of up to 56 MHz in real time and sample at a high rate of up to 61.44 MS/s.]. Each SDR communicates with other by means of a directional horn antenna. We use MATLAB R2021a to program both the transmit and receive USRP SDRs. Specifically, the transmit SDR sends an orthogonal frequency division multiplexing (OFDM) signal with quadrature phase shift keying (QPSK) modulation on each sub-carrier, while the receive SDR receives it and processes it. Next, with the aim of non-contact dehydration monitoring, we design two distinct experiments. During the first experiment, the subject's chest is exposed to the OFDM signals, and thus, the receive SDR collects the signal reflected off the chest of the subject. We name this method as chest-based dehydration monitoring (CBDM) method. During the second experiment, the subject's hand is exposed to the OFDM signals, and thus, the receive SDR collects the signal that passes through the hand of the subject. We name this method as hand-based dehydration monitoring (HBDM) method[This study was approved by the ethical institutional review board (EIRB) of Information Technology University, Lahore, Pakistan.]. 
§ THE HCDDM-RF-5 DATASET This section provides sufficient details about the hardware and software setup used to construct the custom HCDDM-RF-5 dataset[The acronym HCDDM-RF-5 stands for Hand and Chest Data for Dehydration Monitoring via Radio Frequency data collected from 5 subjects.], our data collection methodology (which helped us capture dehydration-related data in a very controlled manner), as well as the details of the two experiments performed in order to collect data for the two proposed (CBDM and HBDM) methods. §.§ USRP SDR-based OFDM transceiver OFDM Transmitter: For each OFDM frame, the random bits generator block creates pseudo-random data bits with a chunk size of 128 bits. The QPSK modulator block maps these bits to (frequency-domain) symbols, which are then transformed into a time-domain signal by means of an inverse fast Fourier transform (IFFT) of size N=64 points. Further, a cyclic prefix (CP) of 16 samples is appended to each OFDM frame, making each OFDM frame 80 samples long. The gain of the transmit horn antenna is set to 40 dB. Fig. 2(a) shows the Simulink flowgraph of the USRP SDR-based OFDM transmitter. OFDM Receiver: After removing the CP from each OFDM frame, a fast Fourier transform (FFT) is used to transform the received time-domain OFDM samples into the equivalent frequency-domain OFDM symbol. Then, keeping in mind that the transmitted QPSK symbols on each sub-carrier are known to the OFDM receiver, the channel coefficient h_i for the i-th sub-carrier can simply be computed as h_i=y_i/x_i, where x_i,y_i are the transmitted and received QPSK symbols on the i-th sub-carrier, respectively. This way, the raw CFR data 𝐡=[h_1,⋯,h_N]^T is collected by the OFDM receiver, to be utilized later by the ML algorithms in order to classify the status of each subject as either hydrated or dehydrated. Fig. 2(b) shows the Simulink flowgraph of the USRP SDR-based OFDM receiver. Table <ref> provides a quick summary of the settings of the relevant parameters of the transmit and receive USRP SDRs. §.§ Data Acquisition for the HCDDM-RF-5 dataset The custom HCDDM-RF-5 dataset was constructed by collecting data from five volunteers during the month of Ramadan (between March 23rd, 2023 and April 21st, 2023). Ramadan is an Islamic holy month during which devout Muslims observe a strict fast, refraining from eating and drinking from sunrise till sunset. We took advantage of this unique opportunity in order to collect dehydration-related data from five devout Muslims who had been fasting during this month. Among the five subjects, two were male (aged 28 and 62 years) and three were female (aged 21, 26, and 61 years). For each fasting subject, we collected data twice, once for each class label (hydrated and dehydrated), in order to construct a balanced dataset. Specifically, the first episode of data collection took place about 30 minutes before sunset, when the subject was deemed to be maximally dehydrated (this data belongs to the first/dehydrated class). The second episode of data collection took place an hour after sunset, once the subject had finished eating and drinking after breaking the fast (this data belongs to the second/hydrated class). For each subject, we conducted two kinds of experiments in which we exposed the subject's chest and hand, respectively, to the RF signals. Some more pertinent details about data collection for our proposed CBDM and HBDM methods are given below.
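Before turning to those details, the per-subcarrier CFR computation just described can be mimicked offline with a short MATLAB sketch. The actual system runs as Simulink flowgraphs on the USRPs; the snippet below only illustrates the baseband processing (64 subcarriers, 16-sample cyclic prefix, QPSK), the line standing in for the over-the-air link is a placeholder, and the QPSK bit mapping shown is just one possible convention.

N  = 64;                                   % FFT size / number of subcarriers
CP = 16;                                   % cyclic prefix length (samples)

% --- transmitter side: one OFDM frame carrying 128 pseudo-random bits ---
bits   = randi([0 1], 2*N, 1);             % 2 bits per QPSK symbol, 64 symbols
b      = reshape(bits, 2, []).';
x_freq = (1 - 2*b(:,1) + 1i*(1 - 2*b(:,2))) / sqrt(2);   % QPSK mapping (one convention)
x_time = ifft(x_freq, N);                  % frequency to time domain
tx     = [x_time(end-CP+1:end); x_time];   % prepend cyclic prefix (80 samples per frame)

% --- channel: placeholder for the signal reflected off the chest / passed through the hand ---
rx = tx;                                   % in practice this is what the receive SDR captures

% --- receiver side: strip CP, FFT, divide out the known QPSK symbols ---
y_time = rx(CP+1:CP+N);
y_freq = fft(y_time, N);
h      = y_freq ./ x_freq;                 % CFR: h_i = y_i / x_i per subcarrier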
Data collection for the proposed CBDM method: During data acquisition for the proposed CBDM method, each participant sat on a chair that was about 80 cm away from the pair of directional horn antennas that pointed towards the chest of the subject (see Fig. 3). As described before, the transmit horn antenna impinged an OFDM signal onto the chest of the subject, while the receive horn antenna gathered the signal reflected off the subject's chest. During each experiment session, the subject sat still in order to avoid motion-induced artefacts in the data being gathered. Each single experiment session lasted for 30 seconds. For each subject, we conducted five experiment sessions before the sunset (to capture the raw CFR data for dehydrated class) and five experiment sessions after the sunset (to capture the raw CFR data for the hydrated class). This way, we were able to collect 30×5=150 seconds worth of data for each class (for a given subject), and thus, 150×2=300 seconds worth of data per subject. Ultimately, for 5 subjects, this led to a total dataset size of 300×5=1500 seconds (or, 25 minutes) of raw CFR data (that corresponds to a total of 5×5×2=50 experiment sessions). Data collection for the proposed HBDM method: During data acquisition for the proposed HBDM method, each participant sat on a chair that was about 60 cm away from the pair of directional horn antennas facing each other, and placed his/her hand on the table between the two antennas (see Fig. 4). Again, the transmit horn antenna impinged an OFDM signal onto the hand of the subject, while the receive horn antenna gathered the signal passed through the subject's hand. During each experiment session, the subject sat still in order to avoid motion-induced artefacts in the data being gathered. The rest of the details of data acquisition for the proposed HBDM method are the same as before. That is, for each subject, we conducted five experiment sessions before the sunset (to capture the raw CFR data for dehydrated class) and five experiment sessions after the sunset (to capture the raw CFR data for the hydrated class). This way, for 5 subjects, we acquired a dataset that consisted of 300×5=1500 seconds (or, 25 minutes) of raw CFR data (that corresponds to a total of 5×5×2=50 experiment sessions). In short, combining the two smaller datasets due to CBDM method and HBDM method together, the custom HCDDM-RF-5 dataset consists of a total of 50 minutes of raw CFR data that corresponds to a total of 100 experiment sessions. § TRAINING AND TESTING OF MACHINE LEARNING CLASSIFIERS For the binary classification problem (hydrated/dehydrated) under consideration, we train and test the following five ML classifiers and their variants: K-nearest neighbours (KNN), support vector machine (SVM), decision tree (DT), ensemble classifier, and neural network. Subsequently, we provide detailed performance analysis and comparison of all the ML classifiers implemented. §.§ Data Pre-processing & Training of Machine Learning Classifiers Data Pre-processing: We utilised a low-pass filter and a Savitzky-Golay filter to denoise the CFR extracted from the received OFDM signal, for all the experiment sessions (for both CBDM and HBDM methods). We inspected the whole data manually and removed artifacts where found. Training & validation of ML classifiers: The Matlab's classification learner app was used to train the following ML classifiers: K-nearest neighbour (KNN), support vector machine (SVM), decision tree (DT), ensemble classifier, and neural network. 
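For readers who prefer scripting over the Classification Learner app, the denoising and a cross-validated fit of one of these classifiers can be sketched as follows. The sampling rate, filter settings, number of folds, and the stand-in variables cfr_raw, X, and y are placeholders rather than the exact values used in our pipeline.

% --- denoise the raw CFR traces collected in one experiment session ---
fs      = 100;                                    % assumed effective sampling rate (placeholder)
cfr_raw = randn(3000,64) + 1i*randn(3000,64);     % stand-in for one session of per-subcarrier CFR
cfr     = abs(cfr_raw);                           % magnitude of the complex CFR
cfr     = lowpass(cfr, 5, fs);                    % low-pass filter (5 Hz cut-off is a placeholder)
cfr     = sgolayfilt(cfr, 3, 11);                 % Savitzky-Golay smoothing (order 3, frame 11)
% (construction of per-session feature vectors from the denoised CFR is omitted here)

% --- programmatic counterpart of the app: fine k-NN with K-fold cross-validation ---
X = randn(100, 64);                               % stand-in: 100 sessions x 64 features
y = randi([0 1], 100, 1);                         % stand-in: hydrated / dehydrated labels
mdl   = fitcknn(X, y, 'NumNeighbors', 1);         % "fine" k-NN (k = 1)
cvmdl = crossval(mdl, 'KFold', 5);                % K = 5 folds (placeholder)
acc   = 100 * (1 - kfoldLoss(cvmdl));             % accuracy in percent
fprintf('cross-validated accuracy: %.1f%%\n', acc);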
All the classifiers were trained on both labelled datasets (corresponding to the CBDM method and the HBDM method). The K-fold cross-validation strategy was used for validation in order to prevent the over-fitting issue. §.§ Performance metrics Each classifier's performance is quantified in terms of accuracy, given as: Accuracy=Correct prediction/Total observations× 100 Accuracy=T_n+T_p/T_n+T_p+F_n+F_p× 100 where T_n represents a true negative, T_p represents a true positive, F_n represents a false negative, and F_p represents a false positive. In addition, we also do a performance comparison of the various ML algorithms by means of a confusion matrix. §.§ Performance of proposed CBDM method We begin with performance analysis of the k-NN classifier for three distinct values of k, i.e., k=1,k=10,k=100 (where k is the number of neighbours used to calculate the distance to the new data point). We learn that the fine k-NN (k=1) achieves an accuracy of 79.1%, medium k-NN (k=10) achieves an accuracy of 69.2%, while the coarse K-NN (k=100) achieves a very low accuracy of 55.3% (see Fig. 5 that displays the detailed confusion matrix). Next, we focus on Fig. 6 and do performance comparison of the remaining four ML classifiers (and their variants). Beginning with an SVM classifier (with linear, quadratic, and cubic kernels), we note that the linear SVM achieves an overall accuracy of 86.5%, quadratic SVM achieves an overall accuracy of 89.6%, while the cubic SVM achieves an overall accuracy of 90.9%. Next, we focus on the decision tree classifier, and note that it has the lowest accuracy of all. That is, the fine tree (despite its many leaves and despite its ability to differentiate between classes precisely) achieved an accuracy of 68.8% only, while the coarse tree achieved a very low accuracy of 58.0% only. Next in line is the ensemble classifier (a mixture of many classifiers) that is typically implemented with the aim to boost classification accuracy. We observe the following: the ensemble boosted tree has an overall accuracy of 70.3%, the ensemble bagged tree has an accuracy of 77.9%, the ensemble subspace KNN has an accuracy of 82.9%, while the ensemble subspace discriminant has an accuracy of 89.6%. Finally, the neural network (NN) classifier. Each variant of the NN classifier is a fully-connected feedforward network. After each fully connected layer, the Relu activation function is applied, except the last year where softmax activation function is used. We observe that all the different variants of the NN classifier outperform the other ML classifiers. Specifically, the narrow variant of the neural network achieves an accuracy of 93.8%, the medium neural network achieves an accuracy of 92.5%, the broad neural network achieves an accuracy of 92.9%, the bi-layered variant of neural network achieves an accuracy of 93%, while the tri-layered variant of the neural network achieves an accuracy of 93.1%. Fig. 7 provides an alternate way of comparing the overall accuracy of all the five ML classifiers and their variants. We note that, for the proposed CBDM method, the neural network classifier (with the narrow neural network) achieves the highest accuracy, which is 93.8%. §.§ Performance of proposed HBDM method We begin performance analysis of our proposed HBDM method from Fig. 8 which provides confusion matrix of each of five ML classifiers (and their variants). 
Beginning with an SVM classifier (with linear, quadratic, and cubic kernels), we note that the linear SVM achieves an overall accuracy of 71.1%, quadratic SVM achieves an overall accuracy of 89.2%, while the cubic SVM achieves an overall accuracy of 88.2%. Next, the decision tree classifier. We observe that once again it has the lowest accuracy of all. That is, the fine tree achieved an accuracy of 72.2% only, while the coarse tree achieved a very low accuracy of 61.4% only. Next, the ensemble classifier. We observe the following: the ensemble boosted tree has an overall accuracy of 74.8%, while the ensemble bagged tree has an accuracy of 79.7%. Finally, the neural network (NN) classifier. Once again, all the different variants of the NN classifier outperform the other ML classifiers. Specifically, the narrow variant of the neural network achieves an accuracy of 94.7%, the medium neural network achieves an accuracy of 96.15%, the broad neural network achieves an accuracy of 95.15%, the bi-layered variant of neural network achieves an accuracy of 92.35%, while the tri-layered variant of the neural network achieves an accuracy of 94.2%. Fig. 9 provides an alternate way of comparing the overall accuracy of all the five ML classifiers and their variants. We note that, for the proposed HBDM method, the neural network classifier (with the medium neural network) achieves the highest accuracy, which is 96.15%. §.§ Performance comparison with the state-of-the-art Finally, Table II compares the accuracy of the proposed non-contact CBDM and HBDM methods with the state-of-the-art methods which are all contact-based methods for dehydration monitoring. Compared to the state-of-the-art where the maximum reported accuracy is 97.83%, our proposed non-contact method is slightly inferior (as we report a maximum accuracy of 96.15%); nevertheless, the advantages of our non-contact dehydration method speak for themselves. That is, our proposed method is non-invasive and contact-less, has high accuracy, allows continuous and seamless monitoring, is easy to use, and provides rapid results. § CONCLUSION & FUTURE WORK This work proposed for the first time a non-contact method to monitor the dehydration of a subject from a distance. Specifically, we utilized a pair of USRP SDRs whereby the transmit SDRs impinged OFDM signals onto the chest or the hand of the subject, while the receive SDR collected the modulated signal reflected off the body of the subject. For the purpose of training our ML classifiers, we collected data from 5 Muslim subjects (before and after sunset) who were fasting during the month of Ramadan. We then passed the received raw CFR data through many ML classifiers. Among them, neural network classifier achieved the best performance: an accuracy of 93.8% for the proposed CBDM method, and an accuracy of 96.15% for the proposed HBDM method. The fact that the proposed HBDM method outperforms the proposed CBDM method is a pleasant result. This is because this allow us to promote the proposed HBDM method as a non-contact method for dehydration monitoring (where only a hand is exposed to RF radiation, instead of the full chest, albeit the radiation being non-ionizing). Last but not the least, the proposed non-contact method (with a maximum accuracy of 96.15%) performs very close to its contact-based counterpart (with a maximum accuracy of 97.83%). 
Such a minor performance degradation of our proposed non-contact method relative to its contact-based competitor may be an acceptable trade-off, keeping in mind the convenience (and other benefits) of a non-contact method. One major advantage of the proposed approach is that it may pave the way for a smart mobile health (m-health) solution that could be deployed in remote areas far from major cities, in order to provide comprehensive health monitoring of the people living there. This work opens up many exciting directions for future work. For example, one could construct or acquire a more challenging dataset (unlike the current dataset, which was obtained in a very controlled setting) and re-evaluate and fine-tune the performance of the proposed method in order to make it robust to unseen data.
http://arxiv.org/abs/2306.02887v2
20230605135636
Gen-IR @ SIGIR 2023: The First Workshop on Generative Information Retrieval
[ "Gabriel Bénédict", "Ruqing Zhang", "Donald Metzler" ]
cs.IR
[ "cs.IR", "cs.CL" ]
[email protected] 0000-0002-3596-0285 University of Amsterdam and RTL NL The Netherlands [email protected] 0000-0003-4294-2541 ICT, Chinese Academy of Sciences China [email protected] Google Research USA <ccs2012> <concept> <concept_id>10002951.10003317</concept_id> <concept_desc>Information systems Information retrieval</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Information systems Information retrieval Generative information retrieval (IR) has experienced substantial growth across multiple research communities (e.g., information retrieval, computer vision, natural language processing, and machine learning), and has been highly visible in the popular press. Theoretical, empirical, and actual user-facing products have been released that retrieve documents (via generation) or directly generate answers given an input request. We would like to investigate whether end-to-end generative models are just another trend or, as some claim, a paradigm change for IR. This necessitates new metrics, theoretical grounding, evaluation methods, task definitions, models, user interfaces, etc. The goal of this workshop[https://coda.io/@sigir/gen-ir] is to focus on previously explored Generative IR techniques like document retrieval and direct Grounded Answer Generation, while also offering a venue for the discussion and exploration of how Generative IR can be applied to new domains like recommendation systems, summarization, etc. The format of the workshop is interactive, including roundtable and keynote sessions and tends to avoid the one-sided dialogue of a mini-conference. @ SIGIR 2023: The First Workshop on Generative Information Retrieval Donald Metzler July 31, 2023 ===================================================================== § TITLE @ SIGIR 2023: The First Workshop on Generative Information Retrieval § MOTIVATION Last year saw the rise of generative IR on two fronts. We will refer to them as: [label=(*)] *  Generative Document Retrieval (GDR): via a generative process, retrieve a ranked list of existing documents (e.g. Wikipedia or news articles) that match a query and *  Grounded Answer Generation (GAG): retrieve a human readable generated answer that matches a query; the answer can link to or refer to a document. On the GDR end of the spectrum, first proposed an end-to-end model-based retrieval approach in a position paper <cit.>: directly predict identifiers of candidate documents, instead of indexing all documents (a.k.a. index-retrieve-then-rank). The position paper builds on generative entity linking <cit.>, later extended for long sequences <cit.>. The generative model is expected to embed all relevant information that is in the documents. Soon after, released Differentiable Search Indexes (DSI), the first model generating indexes of Wikipedia articles <cit.>. The above mentioned position paper <cit.> goes beyond GDR, towards GAG and full-fledged end-to-end retrieval models that generate answers. On the GAG end of the spectrum <cit.>, recent Large Language Models (LLMs) have been released to the public that are essentially (conversational) IR models. 
Some are conversational with aspects of reinforcement learning (ChatGPT[<https://openai.com/blog/chatgpt/>] or Claude[<https://www.anthropic.com/constitutional.pdf>]), some cite their sources (Phind[<https://phind.com/about>] or Perplexity[<https://www.perplexity.ai/>]), some are focused on science (Galactica[<https://galactica.org/>]), some can do all of the above and more (YOU[<https://you.com/>]), and others have yet to be released (Sparrow[<https://www.deepmind.com/blog/building-safer-dialogue-agents>]). Generative IR as an end-to-end model has clear benefits over the index-retrieve-then-rank paradigm. * It is simpler and more flexible. * The training pipeline is compressed. * There is no need for an index of documents that is tedious to query or compute similarity with. But Generative IR also comes with its challenges. Namely, * it has yet to be demonstrated that retrieval performance is improved on big datasets (such as the full MS-MARCO dataset <cit.>), * generative models can hallucinate (i.e., generate false information); this is more obviously true for LLMs that generate answers (GAG) than for retrieval models that generate doc-ids (GDR), * the infinite index paradigm <cit.>: if LLMs can generate an infinite number of answers to a given query, then classic recall-based IR evaluation metrics like NDCG cannot rely on a finite set of true positives. A workshop on Generative IR will question whether IR is truly facing a paradigm change at the theoretical level <cit.>. This event will also be a way to reflect on Generative IR's benefits and challenges, as retrieval-like LLMs (GAG) get released to the general public. Finally, we will encourage submissions and discussions on further Generative IR topics and models, where existing literature is scarce, such as recommender systems, Learning to Rank, diffusion models, etc. We compiled a list of related literature[<https://github.com/gabriben/awesome-generative-information-retrieval>]. § THEME AND PURPOSE OF THE WORKSHOP Gen-IR 2023 will be a forum for discussion about the challenges in applying (pre-trained) generation models to information retrieval as well as the theory behind the models and applications. The aim of this workshop is multi-fold: * discussing the main challenges in designing and applying generative retrieval models in practice, * establishing a bridge for communication between academic researchers and industrial researchers around Generative IR, * providing an opportunity for researchers to present new directions and early insights, and * creating an agenda for Generative IR according to the 4 pillars below (Model Architecture, Training, Evaluation, Applications). This agenda will then ideally be periodically revised at future occurrences of the workshop. Our call for papers and the theme of the panel / roundtable discussions will revolve around these 4 pillars. For now, Generative IR revolves mostly around Generative Document Retrieval (GDR) and Grounded Answer Generation (GAG). We leave space for further tasks in the 4th pillar. §.§ Model Architecture Despite the preliminary studies on pre-trained language models (PTMs) for GDR, most research in this direction focuses on straightforwardly applying existing PTMs that are specifically designed for NLP, such as T5 <cit.> and BART <cit.>, to IR applications. These encoder-decoder architectures do not consider the IR cues that might benefit the downstream IR tasks, such as GAG.
These cues include information about ranking, entity disambiguation, and the causal relationships behind ranking tasks. Another solution could be to generate documents via other types of models that can provide a range of predictions, like diffusion models <cit.>. Diffusion models have already been tested for language generation and categorical data in general <cit.> and are thus candidates for both GDR and GAG tasks. §.§ Training Despite the strong experimental performance of GDR models, the potential of generative models for general search problems is limited by the training strategies that are currently employed. * Learning To Rank objective. The traditional index-retrieve-then-rank paradigm implies a Learning To Rank objective at the end of the pipeline. This objective is commonly expressed as point-wise, pair-wise, or list-wise. Following the new model-based retrieval paradigm, the objective is global over the whole corpus and usually defined as a standard seq2seq objective, i.e., maximizing the output doc-id likelihood with teacher forcing conditioned on the query. There are many interesting questions to help understand whether such an optimization is optimal, how it connects with existing Learning to Rank paradigms, and so on. * Generalization Ability. So far, most studies only demonstrate the effectiveness of their approaches on retrieval datasets where a query has only one relevant document. In the future, we should extend the generalization ability of GDR to different search tasks, including a query with one relevant document, with multiple relevant documents at one relevance grade, and with multiple relevant documents at different relevance grades. One option to predict multiple documents via model-based retrieval is to use contrastive learning between the document and query representations. * Incremental Learning. For GDR models, there remain open questions about the practical applicability of such models to dynamic corpora. In dynamic and open IR systems, documents are incrementally added to or removed from the indexed corpus. It is valuable to explore continuously updated learning objectives over new or removed documents (e.g. <cit.>). §.§ Evaluation We consider several topics for the evaluation of Generative Document Retrieval and Grounded Answer Generation: * We are not aware of an evaluation on a big dataset for either GDR or GAG (such as the full MS-MARCO dataset <cit.>). * Evaluation metrics need to be designed taking into account the specifics of the generative paradigm. These metrics should ideally suit both traditional IR and Generative IR. * Human evaluation of Generative IR is still in its infancy. Note that ChatGPT leverages Reinforcement Learning with Human Feedback (RLHF) <cit.>, while Claude uses RL from AI Feedback (RLAIF) <cit.>. * Interpretability and causality are still hard to determine. In the context of GAG, this implies citing its sources, a.k.a. attribution <cit.> (via, for example, a citation token <cit.>); in other words, bridging the gap between GDR and GAG. * Robustness to adversarial attacks (how easy is it to create fake facts or fool the Grounded Answer Generation model) and to distribution shifts (does transfer learning across datasets work?). * Efficiency of models. GDR requires considerably less compute power than GAG. Is there a way to bring computational costs down for GAG or to provide more information with the same amount of compute with GDR (e.g. a ranking of documents instead of just one document, or a summary of documents)?
* GAGs tend to be very assertive about their claims. Uncertainty estimates would be particularly desirable for GAGs, especially for the ones that, like ChatGPT, do not cite their sources. * GAGs can appear like they have a mind of their own. Some new conceptual metrics and learning constraints have been proposed, like truthfulness, harmlessness, honesty, and helpfulness <cit.>. §.§ Applications At inference time, both GDR and GAG are sensitive to prompting strategies. Given particular prompts, it has been shown that one can provoke ChatGPT into hallucinating answers. As a solution, could we use a generation model to unify GDR and GAG, so as to provide document references to source material, making it much easier to highlight the authoritativeness / accuracy of the answer? Furthermore, there are several applications of Generative IR that have not yet been subject to much scrutiny beyond GDR and GAG. We can think of summarization, Knowledge-Intensive Language Tasks (KILT) (e.g. <cit.>), recommender systems (e.g. <cit.>) and learning to rank (e.g. <cit.>). § FORMAT Gen-IR will be an interactive full-day hybrid workshop that avoids the one-sided dialogue of a mini-conference. * Invited panel (industrial and academic) [hybrid]. Candidates from different institutions and companies accepted our invitation: Neeva, Google, Meta AI, Tsinghua University, Chinese Academy of Sciences, Sapienza University of Rome, Samaya AI, KAIST, University of Waterloo, Huggingface, Stanford University. * Contributed paper presentations as posters [onsite] and video demos [online]. * An interactive session to share lessons learned [hybrid]. * Breakout sessions on issues that emerge from the contributed papers and demos (to be determined after the submission deadline but prior to the workshop) [onsite]. §.§ Workshop schedule §.§.§ Morning
Time Activity
08:30–08:45 Opening
08:45–09:15 Panel Discussions (academic)
09:15–10:00 Poster Session - (1) Model Architecture
10:00–10:30 Coffee break
10:30–11:00 Panel Discussions (industrial)
11:00–11:45 Poster Session - (2) Training
11:45–12:15 Breakout preparation
11:45–13:30 Lunch
§.§.§ Afternoon
Time Activity
13:30–14:00 Panel Discussions - Setting an agenda for Gen-IR
14:00–14:45 Poster Session - (3) Evaluation
14:45–15:30 Refreshment break
15:30–16:30 Breakout
14:00–14:45 Poster Session - (4) Applications
17:15–17:30 Round up and closing discussions
§.§.§ Schedule
Date Event
May 2, 2023 Submission deadline
Jun 14, 2023 Notification
Jul 1, 2023 Camera-ready versions of accepted papers due
Jul 27, 2023 Gen-IR workshop
§ ORGANIZERS Gabriel Benedict is an industry PhD candidate at the University of Amsterdam, in collaboration with RTL NL. He is doing a mix of theoretical and applied AI research. The main themes are metrics-as-losses for neural networks, normative diversity metrics for news recommendation, intent-satisfaction modelling, video-to-music AI and, most recently, diffusion for IR tasks. Ruqing Zhang is an associate professor at the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS). She has worked on a number of problems related to natural language generation and neural ranking models. Her current research focuses especially on how to design generative models for IR, how to improve the robustness of ranking models, and how to make IR trustworthy through the lens of "causality". Donald Metzler is a Senior Staff Research Scientist at Google Inc.
Prior to that, he was a Research Assistant Professor at the University of Southern California (USC) and a Senior Research Scientist at Yahoo!. He currently leads a research group focused on a variety of problems at the intersection of machine learning, natural language processing, and information retrieval. He is a co-author of the position paper <cit.>. § PC MEMBERS Potential PC members for reviewing paper submissions: * Andrew Yates, University of Amsterdam * Arian Askari, Leiden University * Hainan Zhang, JD * Hyunji Lee, KAIST AI * James Thorne, KAIST AI * Nicola De Cao, University of Amsterdam * Qingyao Ai, Tsinghua University * Roi Cohen, Tel Aviv University * Ronak Pradeep, University of Waterloo * Sheng-Chieh Lin, University of Waterloo * Shengyao Zhuang, The University of Queensland * Vinh Q. Tran, Google Research * Xiao Wang, University of Glasgow * Xinyu Ma, Baidu * Yujia Zhou, Renmin University of China * Zhicheng Dou, Renmin University of China § SELECTION PROCESS We will solicit submissions of papers of two to six pages through an open call for papers, representing reports of original research, preliminary research results, proposals for new work, descriptions of generative-model-based toolkits tailored for IR, and position papers. All papers will be peer-reviewed by the program committee and judged by their relevance to the workshop, especially to the two main themes, and their potential to generate discussion. § TARGET AUDIENCE The target audience is the broad range of researchers in industry and academia interested in IR and especially in Generative IR. We will advertise the workshop via a dedicated website and a Twitter/Mastodon account. § RELATED WORKSHOPS As an emerging paradigm, there have not been related workshops held previously at SIGIR or other conferences.
http://arxiv.org/abs/2306.04292v1
20230607094638
Dear XAI Community, We Need to Talk! Fundamental Misconceptions in Current XAI Research
[ "Timo Freiesleben", "Gunnar König" ]
cs.AI
[ "cs.AI", "stat.ML" ]
Dear XAI Community, We Need to Talk! Freiesleben and König Cluster of Excellence “Machine Learning in Science”, Tübingen University, Germany Department of Statistics, LMU Munich, Germany Dear XAI Community, We Need to Talk! Fundamental Misconceptions in Current XAI Research Timo Freiesleben 1 Gunnar König 2 Received 16 November 2022 / Accepted 17 April 2023 ========================================================================================= Despite progress in the field, significant parts of current XAI research are still not on solid conceptual, ethical, or methodological grounds. Unfortunately, these unfounded parts are not on the decline but continue to grow. Many explanation techniques are still proposed without clarifying their purpose. Instead, they are advertised with ever more fancy-looking heatmaps or only seemingly relevant benchmarks. Moreover, explanation techniques are motivated with questionable goals, such as building trust, or rely on strong assumptions about the 'concepts' that deep learning algorithms learn. In this paper, we highlight and discuss these and other misconceptions in current XAI research. We also suggest steps to make XAI a more substantive area of research. § INTRODUCTION This is an unusual paper from start to end. We don't start the paper with generic examples of great Machine Learning (ML) achievements. The thoughts in this paper are directed at people who are already working on eXplainable Artificial Intelligence (XAI), so we are long past promotional talks. Our goals with this paper are twofold: 1. to highlight misconceptions within parts of the XAI community in past and current research; 2. to provide constructive feedback and steps forward to make XAI a scientific discipline that actually improves ML transparency. After wrapping our heads around XAI-related topics for a couple of years, we became increasingly frustrated whenever we attended a workshop or conference on the topic. We do not claim that no progress is being made or that no high-quality research is being conducted. However, we are saddened that many computational, intellectual, and financial resources are being poured into projects that, in our view, do not stand on solid grounds: * proposals for new interpretation techniques that serve no clear purpose * anecdotal evidence from intuitive-looking heatmaps or "benchmarks" on seemingly relevant criteria is used as a substitute for a clear motivation * explanations are generated that mislead humans into trusting ML models without the models being trustworthy Instead of swallowing our frustration, we decided to channel it into this paper with the hope of helping researchers avoid projects that might be technically interesting but conceptually unfounded. We believe that such a debate is especially urgent since funding for XAI research is inexorably high, and the community is ever-growing. Without clear purposes and proper conceptual foundations, the XAI boom could lead to a bubble in danger of imploding. We would like to see our field become a pillar of ML transparency rather than the ML trust-washing machine. The perspective we will take is more of a philosophical bird's eye view of XAI research. It is not our style to expose specific papers by pointing out their flaws. We also feel that this is not necessary because the misconceptions discussed are 'elephants in the room' in our community.
Before sharing our thoughts, we would like to point the reader to work that guided our perspective on XAI and that may help to underpin our arguments. § RELATED WORK Many papers criticize XAI on various grounds, and we believe many of the criticisms still apply to current XAI. We focus on the critiques that most impacted the community and/or our thoughts. In his seminal paper, Zachary Lipton argues that XAI lacks a proper problem formulation and that this problem must be tackled to make progress as a field <cit.>. Instead of a well-defined goal, XAI offers a potpourri of motivations for explainability, such as increasing trust, fairness, or understanding. Summarized, he argues that: “When we have solid problem formulations, flaws in methodology can be addressed by articulating new methods. But when the problem formulation itself is flawed, neither algorithms nor experiments are sufficient to address the underlying problem.” <cit.> Finale Doshi-Velez and Been Kim highlight the problem of assessing the quality of explanations and comparing different explanation techniques. They describe three potential standards for evaluation: application, human, and functionally grounded interpretability, the first two rely on human studies and the third one on formal model properties <cit.>. They posit the intuitive principle that “the claim of the research should match the type of the evaluation.” <cit.> Cynthia Rudin provides examples of post-hoc explanations that can mislead the user because they are difficult to interpret <cit.>. She argues that this issue becomes particularly threatening when the stakes are high, and model authorities have a financial interest in model opacity. Rudin and her co-authors point out that: “interpretable models do not necessarily create or enable trust – they could also enable distrust. They simply allow users to decide whether to trust them.” <cit.> In consequence, they argue in favor of inherently interpretable models. Our views on XAI have also been strongly shaped by philosophical discussions around explanation and interpretability. Philosophers gave formal accounts of what constitutes an explanatory relationship, namely a statement about the phenomenon to be explained (called the explanandum), a statement about a phenomenon that explains the explanandum (called the explanans), and an explanatory link between explanans and explanandum <cit.>. For formalizing the explanatory link, especially causal accounts dominate, where the explanans is a difference maker with respect to the explanandum <cit.>. Krishnan rightfully highlights the importance of distinguishing the causal explanatory from the justificatory role of explanations. She notes that the two may often not align in the context of XAI as we might face explanations that do not justify decisions and justifications that do not explain them <cit.>. Others have emphasized the different explananda present in XAI, are we interested in explaining the model or the modeled real-world phenomenon <cit.>? Finally, Erasmus, Brunet, and Fisher argued that many statements may formally explain a phenomenon, however, it is often difficult to interpret these explanations correctly <cit.>. § MISCONCEPTIONS IN XAI RESEARCH In this section, we highlight the key misconceptions we see present in current XAI research and illustrate them in little caricatures. For many of these misconceptions, we are not the first to identify them. However, these misconceptions have persisted over time despite strong and convincing criticism. 
We see nothing wrong in repeating true things that are still ignored by parts of our community. §.§ Misconception 1: “Explanation Methods are Purpose-Free” Many 'explanation methods' are presented as mathematical constructs without a conceptual or practical justification. Usually, such papers have the following storyline: * ML models are black-boxes * Explanations are needed because of [trust, transparency, detecting bugs, etc.] * Here are some formalisms, theorems, and the implementation * Look at the nice [images, text annotations, plots, etc.], don't they look exactly as you would expect? * In this arbitrary benchmark we invented, our method is much better than all the others in 'explaining'. However, it remains unclear why anyone should call these images or plots explanations in the first place. Worse, it even remains unclear what purpose these 'explanations' might serve and under what conditions they are helpful. We do not claim that explanations can serve only one purpose, but rather that they should serve at least one purpose. Moreover, it should be shown, or at least clearly motivated, how exactly the proposed explanation technique serves this purpose. One may contend here that we do science for science's sake; the purpose is knowledge. However, as long as we do not have a widely accepted definition of explainability or interpretability, a purpose is the only way to connect explainability techniques with the real world. 'Explanation techniques' that are not motivated by any practical purpose should raise suspicion in our community. If you cannot think of any context in which your explanation helps potential explainees (i.e. the recipients of explanations), this is a good indication that you should trash the technique. §.§ Misconception 2: “One Explanation Technique to Rule Them All” There is a persistent belief in our community that we only need to find and research the single best explanation technique (e.g., SHAP), choose the best hyperparameters (e.g., the ideal baseline), and then we will always have the best explanations that provide perfect understanding. However, the goals we pursue with explanations are diverse: we may want to audit the model, learn something about the modeled phenomenon, debug models, or provide end-users with the ability to contest the model's decision or act based on it. Depending on the goal, an entirely different technique, with different hyperparameter choices and additional side constraints, may be appropriate. Explanation purposes are generally in conflict. Counterfactual explanations are the ideal example to illustrate these conflicts and the trade-offs we must make <cit.>. In the original paper by Wachter et al. <cit.>, counterfactuals are presented as explanations that provide understanding, contestability, and recourse. If we think of algorithmic recourse (counterfactuals that guide human actions to reach a desired outcome), the actionability of features is crucial; for example, humans cannot simply become younger to reach the desired outcome. Thus, age is not part of counterfactuals tailored for recourse. Discrimination based on age, on the other hand, might be a good reason to contest a decision. That is why age can surely be part of a counterfactual tailored for contesting. Finally, for the vague purpose of understanding the ML model, counterfactuals might not be the right tool at all, as they only provide extremely limited insight into the model.
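To make this conflict of purposes concrete, here is a small toy sketch of a Wachter-style counterfactual search on a hypothetical logistic-regression "loan" model. It is our own illustration, not code from any of the cited papers; the feature set [age, income, debt], the model weights, the target probability, and all hyperparameters are assumptions chosen purely for exposition.

import numpy as np

# Toy "loan" model: p(approve | x) for standardized features [age, income, debt]
w = np.array([0.8, 1.5, -2.0])   # assumed weights, for illustration only
b = -0.5

def predict(x):
    # logistic regression probability of approval
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=0.5, mutable=None, lam=10.0, lr=0.05, steps=2000):
    # Gradient descent on lam * (f(x') - target)^2 + ||x' - x||^2,
    # optionally freezing immutable features (e.g. age for recourse).
    if mutable is None:
        mutable = np.ones_like(x)
    xp = x.copy()
    for _ in range(steps):
        p = predict(xp)
        # gradient of lam * (p - target)^2, chain rule through the sigmoid
        grad_pred = 2.0 * lam * (p - target) * p * (1.0 - p) * w
        grad_dist = 2.0 * (xp - x)              # gradient of the distance term
        xp = xp - lr * mutable * (grad_pred + grad_dist)
    return xp

x = np.array([0.2, -0.8, 1.0])                                       # a rejected applicant
cf_recourse = counterfactual(x, mutable=np.array([0.0, 1.0, 1.0]))   # age frozen
cf_contest = counterfactual(x)                                       # all features free

print("original prediction:", round(float(predict(x)), 3))
print("recourse CF (age fixed):", np.round(cf_recourse, 2), round(float(predict(cf_recourse)), 3))
print("contesting CF (age free):", np.round(cf_contest, 2), round(float(predict(cf_contest)), 3))

Freezing age yields a counterfactual that only suggests actionable changes (recourse), while letting age move can reveal that age drives the decision and thus provide grounds for contesting it, which is exactly the trade-off described above.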
§.§ Misconception 3: “Benchmarks do not Need a Ground-Truth” Benchmarks are meant to be objective comparisons between competitors according to a universally agreed standard. Machine learners love benchmarks. Benchmarks have been the bread and butter in ML research in the last decade and an important pillar for progressing the field. Because of the success of benchmarking in ML, the XAI community figured that benchmarks should be a central part of our field as well. Unfortunately, in XAI we generally lack the central element we have in supervised ML to make objective comparisons – a ground truth. Without a ground-truth, it is hard to come up with metrics that quantify desirable properties and that are widely agreed upon. Accepting the problem of the missing ground truth, there would have been two ways for progress in XAI: 1. abandon the idea of benchmarks in XAI altogether and move toward a more qualitative evaluation of explanations; 2. define benchmarks through the explanation purpose, i.e., how well does the explanation serve that purpose, which gives us again some notion of ground-truth. Parts of our community, however, have taken less rocky paths: Regardless of the explanation purpose, and with little conceptual motivation, they formally define properties that they are optimizing their explanations for. Other explanation techniques (often designed for completely different applications and optimized for distinct desiderata) are then compared according to their own standards. In this form, benchmarks lose their justification; they become advertisement space rather than an objective standard for comparison. §.§ Misconception 4: “We Should Give People Explanations They Find Intuitive” Many papers in our field use standards to motivate explanations that we find particularly questionable. For instance, explainees are given images or annotations that should convince them that the explanation technique actually highlights the right things. The images and annotations are tailored to look compelling and intuitive, conveying a message like – “You see, the model is actually looking at the parts of the object that you also look at when performing the task; you can trust this.” As a consequence, we (over-)fit explanation techniques to human intuition; however, the question is whether these 'explanations' are still faithful to the explained ML model. We think that a categorical mistake is made here; XAI should help make the model mechanism more transparent, not compel people into believing the system is good. Explanations provide grounds to decide whether to trust the model; they should not be designed to compel people into trusting the model. We should distinguish between an explanation of a decision and a justification of a decision. Justifications are good reasons for a decision; Explanations are the actual reasons for a decision <cit.>. They may align in decisions where the actual reasons for a decision can be ethically justified. In XAI, however, they very often diverge. Think of cases where an 'explainer' 'explains' the predictions of the prediction model without any access to it beyond the single prediction. Or, when the evaluation standard for explanations is which kinds of explanations people like better. Indeed, it can be argued that people also often provide only justifications for their actions, but do not provide their actual reasons or are often not even aware of them. 
However, this is not an argument for why we should accept the same for XAI explanations; instead, we should strive for higher standards, explanations that are faithful to the causal decision-making process <cit.>. §.§ Misconception 5: “Current Deep Nets Accidentally Learn Human Concepts” Big parts of our field share the following, in our opinion unwarranted, presupposition: Deep neural nets learn the same concepts as humans. The idea is that early layers learn low-level concepts, such as edges in images or syllables in sound; Layers closer to the output on the other side learn high-level concepts, such as the concept of a wheel or the concept of a noun <cit.>. Concepts are assumed to be learned without explicitly forcing the model to learn such concepts, but only by optimizing the model to classify images or correctly complete sentences. The assumption is that the only way to solve complex tasks is to use exactly the concepts that humans use <cit.>. Thus, all we need to do is to train the network and then use XAI techniques like activation maximization or network dissection to discover/reveal which nodes in the network stand for which concept, and then – tada – we have a fully transparent model where every part of the model stands for something, and the model basically does logical reasoning again <cit.>. We agree that this would be fantastic; however, for the following reasons, we are far more pessimistic concerning the conceptual reasoning in neural nets: * Many regularization techniques, for instance, dropout <cit.>, explicitly force the model to represent in a distributed manner by punishing overreliance on individual neurons. * Even though research showed that some nodes in the network co-activate in the presence of certain concepts (actually, the co-activation in percentage is far less impressive than one would think), the causal role of the concept is not shared <cit.>. That means that for instance cutting the neuron in a bird classifier that 'represents' wings or intervening on it does not or only marginally change the model's performance/prediction when birds with different wings are presented. Is this really what we mean when we talk about representing concepts? * One of the reasons why humans have shared concepts is because they need to effectively communicate with other humans about the world <cit.>. However, effective communication has not been a constraint in the training of ML models. Also, humans do not face one but a variety of different tasks. For simple classifications, abstract concepts are not needed as there exist shortcuts <cit.>. Fancy images like those generated by activation maximization techniques <cit.> should not fool us in this regard: Just because the images generated have some wing-like elements does not mean that they represent wings. Not only are the images we get extremely sensitive to the source image on which we perform activation maximization <cit.>, but they are likely to contain other forms and small shapes that we, as humans, blend out. For instance, research on adversarial examples indicates that deep nets use features in their classification that humans do not attend to <cit.>. It is questionable whether we as humans will ever understand the 'concepts' of ML models <cit.>. §.§ Misconception 6: “Every XAI Paper Needs Human Studies” Many pointed to the importance of human studies in making progress on XAI <cit.>. 
We agree that evaluating the quality of explanations based on their impact on human performance on a particular task (to which the explanations are tailored) is reasonable and solid research. However, when it comes to explaining a specific phenomenon, at least two distinct questions must be addressed <cit.>: 1. What counts conceptually as an explanation for the phenomenon? 2. Which among the explanations for the phenomenon are good explanations for a specific explainee? While the latter question requires properly designed human studies, the former does not; instead, it's a philosophical/conceptual question that can be addressed with conceptual analysis and formal mathematical tools. Why is the conceptual definition of what counts as an explanation important at all? Why can't we go directly to the second step and test explanations in the real world, with real human explainees? In principle we could do that, but in practice the space of possible 'explanations' is unlimited. Conceptualizing what counts as an explanation for a phenomenon is building up the theory needed for an informed search for good explanations. In many cases where human studies are conducted, a more careful conceptual analysis would have been advisable. More generally, not conducting human studies does not mean dismissing explanation evaluation. For instance, a purely formal evaluation of explanation techniques can be justified if human studies have already been conducted for that type of explanation. Also, not all purposes of XAI require conducting human studies. For example, if we want to use XAI to estimate a specific quantity using the model, the speed and accuracy by which this quantity is measured allows us to compare it with other estimators estimating the same quantity <cit.>. §.§ Misconception 7: “XAI Methods can be Wrong” Many papers have recently shown how saliency-based or model-agnostic explanation techniques like SHAP, LIME, counterfactuals can be 'tricked' to provide any desired explanation <cit.>. This has been taken as major arguments against these techniques and led to arguments why the techniques are wrong or questioning their reliability <cit.>. To us, there seem to be misunderstandings concerning the consequences of these lines of research. While we allow for arbitrary model and data complexity, we require that explanations be simple. Therefore, explanations will indeed not be faithful to every aspect of the model. In this sense, they do nothing wrong; they describe the formal aspects they describe. The fact that explanations are not faithful to every model aspect is the motivation for having different kinds of XAI techniques, each illuminating a different aspect while neglecting another. You may be able to fool SHAP, you may be able to fool LIME, but you won't be able to fool all techniques all the time. It is difficult to find the right level of abstraction in a given context: easily interpretable and local explanations like counterfactuals might have too little expressive power, they can be manipulated without changing much of the overall model behavior; more abstract and global explanations like partial dependence plots may zoom out too far, thereby allowing to hide problematic behavior in the specifics of the model. The fact that small model modifications can mislead explanation techniques is nevertheless important – it shows that the XAI techniques we have and the explanations they provide are very hard to interpret. We may need more diverse evidence to draw conclusions based on XAI explanations. 
Our field should take this as a call for developing XAI techniques on all levels of abstraction, describing all aspects of behavior relevant to real-world purposes. §.§ Misconception 8: “Extrapolating to Stay True to the Model” Most XAI techniques rely on probing the ML model in one way or the other: LIME is locally sampling inputs, predicts them, and fits a linear model; counterfactuals search for close input points from a desired predicted class; Permutation feature importance (PFI) permutes the values in a specific feature and measures the drop in performance due to this permutation; Activation maximization uses gradient descent to find an input that maximally triggers a specific unit; integrated gradients approximate the integral over the path integral between the 'explained' image and a baseline image. The problem is not THAT the model is probed, but WHERE – namely in areas where it has not seen any data, i.e., in areas where the model has to extrapolate <cit.>. ML models are notoriously bad at extrapolating to completely unseen instances <cit.>. In extrapolation regions, models disagree even when fitted to exactly the same data and achieve similar high performance on a test set. Asking an ML model to extrapolate is like asking a five-year-old kid who hasn't gone to school about her insights into algebraic topology. You might get an answer, but that answer will not really help you. Recent literature argues that explanations that rely on extrapolation are true to the model, while those that only probe the model within the data manifold are true to the data <cit.>.[If we stay within the manifold, the model explanations can even be interpreted in terms of the data-generating mechanism <cit.>.] Clearly, since the model is defined for instances outside of the manifold, probing the model in these areas will give us further insight into the model (for purposes such as debugging or robustness checks) that we would not have gained otherwise. However, we believe that for most XAI purposes, we are interested in the behavior of the model in areas where it is (at least putatively) qualified. As soon as we leave the data manifold, the interpretation of explanation techniques becomes very blurry. We think it is highly problematic for the interpretation of current explanation methods that they rely so strongly on extrapolation. § STEPS FORWARD We hope that these misconceptions show: XAI is still a pre-paradigmatic discipline <cit.>. We cannot simply adopt some arbitrary assumptions and move on to paradigmatic scientific problem-solving. We must fight about the right conceptions of what the field is about, the language we should use, and the right evaluation strategies. We know that it is very easy to be critical while it is very difficult to be constructive. So we want to share at least some thoughts and intuitions about how we think the field should evolve to become a more substantive discipline. §.§ Step 1: Go From Purpose to Benchmark Explanation techniques should start with a purpose. Again, this does not mean that they can only serve one purpose, but they should show that they serve at least one purpose. A purpose is a goal humans have in mind when they ask for explanations. Once the purpose is fixed, the evaluation of the explanations follows naturally. Your explanation technique should enable debugging? Then the evaluation for the method should be a qualitative study of whether the method suits model developers and helps them to debug their models. 
If your global explanation technique is supposed to infer relevant properties of the data-generating mechanism, then show in a simulation how well and how resource efficient your technique approximates these properties. When your local explanation technique is designed to provide recourse options to end-users, then either carefully conceptually justify desiderata for recourse and base your evaluation upon these desiderata, or test the suitability of these recourse options in experiments. The purpose determines the right evaluation metric; the evaluation metric(s) often allows for benchmarking. Different explanation techniques that are designed for the same purpose can be judged by the same evaluation metric(s) and thus benchmarked. One simple example is when two methods are estimating the same quantity i.e. a quantifiable property of the model. §.§ Step 2: Be Clear What You Need to Explain and by What Every explanation comes with an explanation target, the so-called explanandum. The explanandum specifies what is to be explained and is determined by the explanation's purpose. Very often, confusion in XAI research arises because it is unclear what the explanandum is in a given context. For instance, confusions about the right sampling technique are often implicit confusions about the right explanandum <cit.>. XAI techniques may for instance aim to explain: * the model prediction Ŷ, * the predicted target Y, or * an intermediate model element. If you are clear about the explanandum, the second big question is by what you want to explain it – the so-called explanans. The explanans describes the factor(s) you are pointing to in order to account for the state of the explanandum. There are a variety of explanantia (plural of explanans) in XAI research such as: * the model inputs X, * the predictors X, * the dataset or a subset of it, or * intermediate model elements. Finally, be clear on the connection between the explanans and the explanandum. Explanations can be established by pointing to associations between the explanans and the explanandum <cit.>. Usually, however, the relationship we are interested in is causal, that is, the explanans makes a difference for the explanandum <cit.>. While causal explanations are more desirable than reference to mere associations, they are also more difficult to establish. §.§ Step 3: Give Clear Instructions for How to Interpret Explanation Techniques Interpreting the outputs of XAI techniques is extremely difficult. Rather than letting people figure out how to interpret XAI statements on their own, papers should provide clear guidance on how to do so. We believe that addressing the following questions in new proposals for XAI techniques would contribute to securing good usage: * What purpose does this XAI technique serve and how should it be applied? * Under which (model) conditions does the XAI technique enable a clear interpretation? * How do the hyperparameters of the technique affect the interpretation? * What is the intuitive meaning of extremes, namely high, close to zero, or negative values? * In what way, does the explanation guide actions and decisions? * When is it better to rely on other explanation techniques and why? §.§ Step 4: XAI Needs Interdisciplinarity AND Expertise XAI is a highly interdisciplinary field. XAI involves so many aspects that a single field would fail terribly; we need interaction. XAI needs to solve the following key questions, among others: * Conceptual: What are relevant explanation purposes? 
What is required to establish an explanatory relationship between an explanans and an explanandum? What are general explanation desiderata for a specific purpose? How can explanations be conceptualized? How to interpret explanations? * Technical: How to describe the conceptual definitions formally? What can be shown formally about the properties of these explanations? How to compute the explanations efficiently? How to implement explanations accessibly and correctly? How to interpret formalized explanations? * Psychological: How to visualize explanations the right way? What makes a good explanation for a particular explainee? What are context- and person-specific desiderata of explanations? What cognitive biases do people have when interpreting explanations? Is the explanation successful in serving the explanation purpose? * Social and Ethical: Should we provide explanations, and if yes, what are the ethical desiderata? What are the risks of XAI in high-stakes decisions? How do explanations affect people's trust and actions? What level of transparency do we need? Not every paper must involve researchers from each group. However, the questions in the different categories should be seen as closely tied: formal XAI methods without a conceptual foundation should be disregarded; conceptually solid XAI tools that experimentally fail in guiding humans should be modified and fine-tuned; and XAI explanations that successfully serve a purpose that is itself morally questionable should be dismissed. At the same time, nothing is wrong with XAI research that focuses on a narrow field-specific question, such as providing a more efficient algorithm or testing a specific XAI method in human experiments concerning its success in finding model flaws. Every field has its expertise, and it is important that conceptual foundations, algorithms, experiments, and ethical evaluations live up to the highest standards of the individual fields. All we want to emphasize is not to run around with blinders on, but to recognize how the questions are intertwined. § CONCLUSION This paper covered the key misconceptions in current XAI research. In our opinion, the most important one is the idea of purpose-free explanations. Fixing specific purposes will provide a way for evaluating and benchmarking XAI techniques objectively. The explanation purpose will also guide us: how XAI techniques must be constructed, when they should be used, and how they have to be interpreted. Overall, purpose-centered XAI research will help us make ML systems more transparent. Therefore, we hope that future researchers will start thinking more about the purpose of explanations before they make grand proposals for new methods. § ACKNOWLEDGEMENTS This project has been supported by the German Federal Ministry of Education and Research (BMBF) and the Carl Zeiss Foundation (project on “Certification and Foundations of Safe Machine Learning Systems in Healthcare”).
http://arxiv.org/abs/2306.03610v2
20230606115811
Ultrafast Hidden Spin Polarization Dynamics of Bright and Dark Excitons in 2H-WSe$_2$
[ "Mauro Fanciulli", "David Bresteau", "Jérome Gaudin", "Shuo Dong", "Romain Géneaux", "Thierry Ruchon", "Olivier Tcherbakoff", "Ján Minár", "Olivier Heckmann", "Maria Christine Richter", "Karol Hricovini", "Samuel Beaulieu" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci", "physics.optics" ]
[email protected] Laboratoire de Physique des Matériaux et Surfaces, CY Cergy Paris Université, 95031 Cergy-Pontoise, France Université Paris-Saclay, CEA, CNRS, LIDYL, Gif-sur-Yvette, 91191, France Université Paris-Saclay, CEA, CNRS, LIDYL, Gif-sur-Yvette, 91191, France Université de Bordeaux - CNRS - CEA, CELIA, UMR5107, F33405 Talence, France Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China Université Paris-Saclay, CEA, CNRS, LIDYL, Gif-sur-Yvette, 91191, France Université Paris-Saclay, CEA, CNRS, LIDYL, Gif-sur-Yvette, 91191, France Université Paris-Saclay, CEA, CNRS, LIDYL, Gif-sur-Yvette, 91191, France University of West Bohemia, New Technologies Research Centre, 301 00 Plzeň, Czech Republic Laboratoire de Physique des Matériaux et Surfaces, CY Cergy Paris Université, 95031 Cergy-Pontoise, France Université Paris-Saclay, CEA, CNRS, LIDYL, Gif-sur-Yvette, 91191, France Laboratoire de Physique des Matériaux et Surfaces, CY Cergy Paris Université, 95031 Cergy-Pontoise, France Université Paris-Saclay, CEA, CNRS, LIDYL, Gif-sur-Yvette, 91191, France [email protected] Laboratoire de Physique des Matériaux et Surfaces, CY Cergy Paris Université, 95031 Cergy-Pontoise, France Université Paris-Saclay, CEA, CNRS, LIDYL, Gif-sur-Yvette, 91191, France [email protected] Université de Bordeaux - CNRS - CEA, CELIA, UMR5107, F33405 Talence, France We performed spin-, time- and angle-resolved extreme ultraviolet photoemission spectroscopy (STARPES) of excitons prepared by photoexcitation of inversion-symmetric 2H-WSe_2 with circularly polarized light. The very short probing depth of XUV photoemission permits selective measurement of photoelectrons originating from the top-most WSe_2 layer, allowing for direct measurement of hidden spin polarization of bright and momentum-forbidden dark excitons. Our results reveal efficient chiroptical control of bright excitons' hidden spin polarization. Following optical photoexcitation, intervalley scattering between nonequivalent K-K' valleys leads to a decay of bright excitons' hidden spin polarization. Conversely, the ultrafast formation of momentum-forbidden dark excitons acts as a local spin polarization reservoir, which could be used for spin injection in van der Waals heterostructures involving multilayer transition metal dichalcogenides. Ultrafast Hidden Spin Polarization Dynamics of Bright and Dark Excitons in 2H-WSe_2 Samuel Beaulieu July 31, 2023 =================================================================================== Spin-valley locking emerges in solids with broken inversion symmetry and strong spin-orbit coupling. This leads to peculiar momentum-dependent spin and orbital textures. Transition metal dichalcogenides (TMDC) are emblematic two-dimensional materials where this spin-valley locking leads to distinctive optical selection rules when using circularly polarized light, allowing for the generation of spin- and valley-polarized excitons <cit.>. These concepts are at the foundation of spin- <cit.> and valleytronics <cit.>. In bulk-TMDC of 2H polytype (e.g. 2H-WSe_2), adjacent layers are rotated by 180^∘ with respect to each other, leading to opposite and alternating local momentum-space spin textures between neighboring layers (see Fig. <ref>). This peculiar layered structure naturally introduces the concept of "hidden" spin texture <cit.>, which exists within each layer but vanishes in bulk, i.e. when the inversion-symmetry of the crystal is restored. 
TMDC hosts a great variety of so-called "hidden" properties, such as hidden orbital angular momentum and Berry curvature <cit.>, intrinsic circularly polarized photoluminescence <cit.>, spin-layer polarization <cit.>, and unconventional superconductivity <cit.>. Owing to the sub-monolayer inelastic mean free path of outgoing photoelectrons, the valence band's hidden spin texture of bulk-TMDC could be measured using extreme ultraviolet (XUV) spin- and angle-resolved photoemission spectroscopy (SARPES) <cit.>. As spintronic devices’ functionality arises in out-of-equilibrium states of matter, one very appealing route would be to extend this measurement methodology to the investigation of ultrafast hidden spin polarization dynamics of excited states in these layered van der Waals materials. The free carrier and exciton dynamics in 2H-WSe_2 have been extensively investigated using time- and angle-resolved photoemission spectroscopy (TR-ARPES) <cit.>, but, up to now, without spin resolution. It was shown that a near-resonant (800 nm/1.55 eV) pump pulse creates a coherent excitonic polarization, which dephases into an optically bright exciton population in less than 20 femtoseconds <cit.>. These short-lived bright excitons subsequently relax through different intervalley scattering channels. A possible channel is scattering-backscattering between inequivalent K-K' valleys. The conduction band minimum spin-orbit-splitting at K and K' is only a few tens of meV <cit.>, thus K-K' scattering events are reversible and can either be mediated by intervalley electron-hole exchange <cit.>, a process involving spin flip or be assisted by phonons <cit.>, a spin-preserving process. K-K' intervalley scattering has been shown to be responsible for the rapid decay of hidden valley polarization <cit.>, initially prepared using a circularly polarized pump pulse. Another possible relaxation channel is K-Σ intervalley scattering, leading to the formation of momentum-forbidden dark excitons, with electron and hole residing at Σ and K valleys, respectively <cit.>. These dark excitons are long-lived (tens of picoseconds) <cit.>, due to their momentum-indirect nature. Surprisingly, following K-Σ intervalley scattering, no hidden valley polarization was observed at Σ, despite the initial valley polarization in the K-K' valleys <cit.>. It was argued that as states at Σ are strongly delocalized in the direction across the layers, they are equally filled by scattering from K valleys in one layer and from K' in adjacent layers, washing out the valley- and layer-polarized nature of initial excited states at K-K' on an ultrafast timescale. These observations leave many open questions related to ultrafast dynamics of spin-polarized excitons in layered TMDCs, e.g.: Does hidden spin polarization of excitons survive intervalley scattering between adjacent K-K' valleys? Does bright excitons' initial hidden spin polarization remain upon the formation of dark excitons? Answering these questions is fundamental for designing spintronic device concepts based on multilayer TMDCs. However, directly accessing hidden spin polarization of TMDCs' excitons has not yet been demonstrated, mainly because of the famously challenging task of simultaneously combining time- and spin-resolution in ARPES, which are both extremely time-consuming. 
Indeed, while most successful attempts to combine time- and spin-ARPES were based on UV-Vis photoemission <cit.>, this approach does not allow accessing large parallel momentum, which makes it blind to TMDCs' bright excitons, which are located at the Brillouin zone boundary. With the recent development of XUV STARPES using high-order harmonic sources <cit.>, investigation of ultrafast exciton's spin-polarization dynamics in TMDCs can now be tackled. In this letter, we report the first spin-, time- and angle-resolved XUV photoemission spectroscopy of excitons in TMDC (here 2H-WSe_2). Our results demonstrate chiroptical control of bright excitons' hidden spin polarization, its decay upon intervalley scattering between adjacent K-K' valleys, and long-lived dark excitons with strong hidden spin polarization. The experiments were performed using the narrowband mode of the FAB10 beamline at the Attolab facility (CEA Saclay) <cit.>. In a nutshell, we used a 10 kHz amplified Ti:Sa (1.55 eV) laser system delivering up to 2 mJ with a full width at half maximum (FWHM) duration of ∼23 fs. We split the beam into two arms: in the probe arm, a few hundred microjoules are focused in an argon gas jet to produce a broad spectrum of odd harmonics of the driving laser extending up to 50 eV, through high-order harmonic generation (HHG) <cit.>. A time-preserving monochromator is used to select a single harmonic (here the 23rd harmonic, 35.65 eV, ∼250 meV FWHM) <cit.> with linear (p-) polarization. The XUV pulse duration is estimated to be around 30 fs. In the pump arm, we used a polarization-tunable IR (1.55 eV) pulse, which is near the bright A-exciton resonance of 2H-WSe_2 <cit.>. The IR pump and XUV probe pulses are non-collinearly recombined onto the sample [Fig.<ref>(b)]. The pump fluence is estimated to be ∼ 1.9 mJ/cm^2, which is very similar to the one used by Dong et al. <cit.>, where clear photoemission signatures of bright excitons formation in 2H-WSe_2 were reported. The commercially available (HQ Graphene) bulk 2H-WSe_2 single crystal was cleaved at a base pressure of ∼2x10^-10 mbar. The measurements were performed at room temperature. The photoemission endstation comprises a hemispherical analyzer (SPECS PHOIBOS 150) and a 3D spin detector (Focus FERRUM) <cit.>, based on very-low energy electron diffraction (VLEED). This detection scheme allows extracting the energy-resolved spin polarization along the three spin quantization axes in the detector reference frame, as shown in Fig.<ref>(b). An example of the measured spin polarization on the z quantization axis (P_z) of photoelectrons ejected from bright excitonic states at the majority K' valley (at the pump-probe overlap i.e. Δt=0 fs and for ±7^∘ ejection angles) is shown in Fig. <ref>(c) (spin polarization data points for photoemission intensity smaller than 20% of the peak intensity are not shown). We first investigate the spin-integrated bright exciton populations in K [Fig. <ref> (a)-(b)] and K' Fig. <ref> (d)-(e)] valleys after near-resonant photoexcitation with right and left circularly polarized light (σ^+ and σ^-), around the pump-probe overlap (Δ t=0 fs). To experimentally swap the interrogated valley pseudospin index (K-K'), we azimuthally rotate the crystal by 60^∘, which leaves all other experimental geometry parameters (e.g. angle of incidence) unchanged <cit.>. The photoemission intensity suppression in the valence band at around 2.5^∘ from K [Fig. 
<ref> (d)-(e)] is due to multiple orbitals interference effect, which has been discussed elsewhere <cit.>. The circular dichroism at K or K' (CD_K/K') is obtained by taking the normalized difference of the energy- and momentum-resolved signal at a given valley for different light helicity, i.e. CD_K/K' = [I^σ^+_K/K' - I^σ^-_K/K']/[I^σ^+_K/K' + I^σ^-_K/K'] (Fig. <ref>(c),(f)). We find a relatively strong CD exhibiting sign flip when changing the valley pseudospin index, indicating the initial preparation of bright excitons with strong hidden valley-polarization upon excitation with circularly polarized light. Our results agree with the experimental finding of Bertoni et al. <cit.> and are consistent with recent theoretical calculations that also reveal that these bright valley excitons formed upon absorption of circularly polarized light are chiral quasiparticles characterized by finite orbital angular momentum <cit.>. The different absolute values of CD at K and K' valleys might be due to a small pump-probe delay offset between the two measurements, also highlighted by the contribution of laser-assisted photoemission (LAPE) <cit.> signal, stronger at K valley than at K' [i.e., the valence band (VB) replica at E_VB+ħω_IR, well visible between 36.5-37.5 eV for negative emission angles in Fig. <ref>(d)-(e)]. After looking at the hidden valley polarization induced by chiroptical selection rules, we investigate the hidden spin polarization of photoelectrons emerging from bright excitons in minority and majority valleys (Fig. <ref>) at time zero, for both pump pulse helicities. Considering our photon energy, photoelectrons from the K/K' valleys are ejected towards the analyzer entrance slit with an angle of 25^∘ from the sample surface normal. Similarly to what has been measured for the valence band <cit.>, we expect excitons' spin-polarization to be out-of-plane. Since we measure a vanishing spin polarization along the x and y quantization axes (see Supplemental Material <cit.>), we consider only the P_z spin-polarization component. This P_z spin-polarization (detector frame) is strongly representative of the out-of-plane spin-polarization component (sample frame) due to the small angle between the surface normal and the detector axis. The spin polarization is obtained as P_z=1/S·(I_Up-I_Down)/(I_Up+I_Down), with S=0.29 the Sherman function <cit.>, which takes into account the calibration of the spin detector. The reported values of P_z are obtained by averaging the signal in a ±200 meV energy interval around the energy distribution curve (EDC) peak and after exponential background subtraction for both spin channels I_Up, Down. The energy-resolved data are presented in <cit.>. Spin-resolved measurements for each valley and polarization-state configurations were repeated 16 times and error bars represent the 95% confidence intervals calculated using Student’s statistics. The experimental data presented in Fig. <ref> are obtained with a net acquisition time of 14 hours. As shown in Fig. <ref> (a) and (d), photoelectrons emerging from bright excitons in the majority valleys with both σ^+ [K' valley, Fig. <ref>(a)] and σ^- [K valley, Fig. <ref>(d)] are almost fully spin-polarized (see Supplemental Material <cit.> for a note on the absolute determination of light helicity, valley pseudospin index, and electron spin polarization). 
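As a purely illustrative aside, the short sketch below shows how a P_z value and its confidence interval can be obtained from spin-resolved intensities according to the definition above. It is not the authors' analysis code: the intensity values and their spread are invented for the example, and only the Sherman function S = 0.29, the 16 repetitions, and the 95% Student-t confidence intervals are taken from the text.

import numpy as np

S = 0.29  # Sherman function of the VLEED detector, as quoted in the text

def p_z(i_up, i_down, sherman=S):
    # P_z = (1/S) * (I_up - I_down) / (I_up + I_down) for background-subtracted,
    # energy-integrated spin-channel intensities
    return (i_up - i_down) / (i_up + i_down) / sherman

# Invented toy data: 16 repetitions of integrated spin-channel intensities
rng = np.random.default_rng(1)
i_up = rng.normal(1000.0, 60.0, size=16)
i_down = rng.normal(1550.0, 60.0, size=16)

pz = p_z(i_up, i_down)                  # one P_z estimate per repetition
mean = pz.mean()
sem = pz.std(ddof=1) / np.sqrt(pz.size)
t_95 = 2.131                            # two-sided 95% Student-t quantile, 15 dof
print(f"P_z = {mean:.2f} +/- {t_95 * sem:.2f} (95% CI, n = {pz.size})")

Evaluating P_z once per scan in this way makes the Student-t interval straightforward, since each of the background-subtracted, energy-integrated repetitions contributes one independent estimate.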
It is important to note that, due to selection rules in photoemission, the measured spin polarization of outgoing photoelectrons cannot be straightforwardly linked to the initial state's spin polarization <cit.>. However, the sign reversal of the out-of-plane spin polarization with the valley pseudospin index allows us to safely link the measured photoelectron spin polarization to the exciton spin polarization, as was concluded for the valence band <cit.>. The spin polarization of photoelectrons emerging from excitons has the same sign as that of photoelectrons emerging from the valence band top. Photoelectrons emerging from excitons in the minority valleys with both σ^+ [K valley, Fig. <ref>(c)] and σ^- [K' valley, Fig. <ref>(b)] exhibit an almost vanishing spin polarization. These observations indicate a balanced spin-up and spin-down mixture of photoelectrons emerging from the bright exciton population measured in the minority valley, which can originate from different microscopic scattering pathways leading to the minority valley population, e.g. intervalley scattering driven by electron-hole exchange <cit.> or by phonons <cit.>, or from an imperfect light polarization state due to the non-normal incidence angle on the optics and the sample. We now turn our attention to the ultrafast (femtosecond) dynamics of the excitons' spin polarization initially prepared by a chiroptical transition. Valley polarization created upon circularly polarized excitation is known to decay on a sub-100 fs timescale, due to K-K' intervalley scattering [τ_KK'=(60±30) fs] <cit.>. When changing the pump-probe delay from Δ t=0 fs to Δ t=33 fs, the bright excitons' hidden spin polarization is found to decay from -79±15% to -53±10% [Fig. <ref>]. This spin- and time-resolved measurement allows us to gain additional insight into K-K' intervalley scattering. Indeed, spin-polarization decay can only happen if both intervalley electron-hole exchange <cit.> and intervalley phonon-assisted population transfer contribute to scattering and backscattering between the nonequivalent K and K' valleys. In a scenario where only intervalley electron-hole exchange were active, each scattering event between K and K' (or vice versa) would involve a spin flip, which could quench the valley polarization but would leave the spin polarization in each valley time-independent. The same situation occurs if only spin-preserving phonon-assisted scattering between K and K' were allowed. While it is not possible to extract the relative contributions of these two channels, owing to the time-consuming nature of these measurements (only two pump-probe delays), we can safely conclude that this decay of the bright excitons' hidden spin polarization results from combined and reversible intervalley electron-hole exchange and phonon-assisted scattering between the nonequivalent K and K' valleys. One open question concerns the bright excitons' spin-polarization dynamics over longer timescales: does it completely vanish, or does it saturate? Future STARPES investigations using high-repetition-rate beamlines would allow measuring additional pump-probe delays and resolving the complete temporal evolution of the bright excitons' spin polarization. In bulk 2H-WSe_2, the global conduction band minima are located at Σ, roughly halfway between Γ and K. The single-particle band structure predicts a spin-orbit splitting of almost 1 eV for the conduction band at Σ <cit.>. 
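The two-channel argument for the K-K' spin-polarization decay made above can be made more tangible with a toy rate-equation sketch. The Python example below propagates four hypothetical exciton populations (K↑, K↓, K'↑, K'↓) with an exchange channel (valley flip plus spin flip) and a phonon channel (valley flip, spin conserving); the rates are illustrative placeholders rather than fitted values. With either channel alone the per-valley spin polarization keeps unit magnitude, while combining both channels lets it decay, in line with the reasoning above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Populations n = [K_up, K_down, Kp_up, Kp_down]; start fully polarized in K' (spin down).
def rates(t, n, g_x, g_p):
    K_up, K_dn, Kp_up, Kp_dn = n
    dn = np.zeros(4)
    # Electron-hole exchange: valley flip + spin flip, (K', s) <-> (K, -s)
    dn[0] += g_x * (Kp_dn - K_up); dn[3] += g_x * (K_up - Kp_dn)
    dn[1] += g_x * (Kp_up - K_dn); dn[2] += g_x * (K_dn - Kp_up)
    # Phonon-assisted transfer: valley flip, spin conserving, (K', s) <-> (K, s)
    dn[0] += g_p * (Kp_up - K_up); dn[2] += g_p * (K_up - Kp_up)
    dn[1] += g_p * (Kp_dn - K_dn); dn[3] += g_p * (K_dn - Kp_dn)
    return dn

def valley_spin_polarization(n):
    K_up, K_dn, Kp_up, Kp_dn = n
    PK = (K_up - K_dn) / max(K_up + K_dn, 1e-12)
    PKp = (Kp_up - Kp_dn) / max(Kp_up + Kp_dn, 1e-12)
    return PK, PKp

n0 = [0.0, 0.0, 0.0, 1.0]
for g_x, g_p, label in [(1/60, 0.0, "exchange only"), (0.0, 1/60, "phonon only"), (1/120, 1/120, "both")]:
    sol = solve_ivp(rates, (0.0, 200.0), n0, args=(g_x, g_p))
    print(label, [f"{p:+.2f}" for p in valley_spin_polarization(sol.y[:, -1])])
```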
One of the dominant relaxation pathways for bright excitons is the formation of long-lived momentum-forbidden dark excitons, where electrons and holes reside in the Σ and K valleys, respectively <cit.>. K-Σ intervalley scattering has been reported to lead to a loss of valley and layer polarization. This observation was rationalized by the three-dimensional character of the states at Σ. It is still an open question whether or not the loss of valley and layer polarization is accompanied by a loss of spin polarization. It is thus of central importance to track the dynamical evolution of the hidden spin polarization upon the formation of such dark excitons, in order to reveal whether initially spin-polarized, optically induced bright excitons can be harvested into long-lived dark excitonic states for spintronics applications. In Fig. <ref>, we measured the hidden spin polarization of momentum-forbidden dark excitons at Σ' for three pump-probe delays (50 fs, 200 fs, and 1000 fs) and both pump pulse helicities. The spin polarization is negative for both pump helicities and all investigated pump-probe delays. The fact that the spin polarization has the same sign for both helicities gives strong insight into the scattering mechanisms involved in the creation of dark excitons. Indeed, it implies that the strong spin-orbit splitting at Σ imposes a given final spin state for each scattering event leading to its population. Thus, despite the vanishing Σ valley polarization following K-Σ intervalley scattering <cit.>, momentum-forbidden dark excitons are locally (in reciprocal space, i.e. within each valley) spin-polarized. A picosecond after photoexcitation, the hidden spin polarization has almost the same amplitude as at the earliest time delay (50 fs), despite slightly smaller values measured at the intermediate pump-probe delay (200 fs). Resolving the complete temporal evolution (tens of pump-probe delays, for both pump helicities) of the (hidden) spin-polarization dynamics would be highly desirable for elucidating more subtle spin relaxation mechanisms, but is not achievable with the current setup. Our results demonstrate efficient chiroptical control of excitons' hidden spin polarization in bulk 2H-WSe_2 and its ultrafast dynamics upon intervalley scattering. Our measurements reveal quasi-fully spin-polarized excitons in the majority valley upon photoexcitation with circularly polarized light. We find that subsequent K-K' intervalley scattering proceeds through two microscopic channels, intervalley electron-hole exchange (spin-flip process) and intervalley phonon-assisted population transfer (spin-preserving process), leading to an ultrafast decay of the spin polarization of bright excitons in the K and K' valleys. In contrast, the formation of momentum-forbidden dark excitons through K-Σ intervalley scattering acts as a local momentum-space spin reservoir. Indeed, despite the ultrafast decay of valley and layer polarization <cit.> following photoexcitation, the strong spin-orbit splitting at Σ allows the survival of the helicity-independent local hidden spin polarization for dark excitons. This long spin-polarization lifetime is desirable for spintronic applications. Our approach can be directly extended to a wide range of out-of-equilibrium spin dynamics of many-body quasiparticles in solids. In addition, combining this STARPES methodology with a polarization-tunable (circularly polarized) XUV probe pulse would allow accessing the orbital angular momentum and chirality of these optically prepared spin-polarized excited states <cit.>. 
Acknowledgments. The laser system and the experimental setup are supported by the French “Investments for the Future” of the Agence Nationale pour la Recherche, Contracts No. 11-EQPX0005-ATTOLAB and No. 11-EQPX0034-PATRIMEX. The laser system is also supported by the Scientific Cooperation Foundation of Paris-Saclay University through the funding of the OPT2X research project (Lidex 2014), by the Île de France region through the Pulse-X project, and by the European Union’s Horizon.

§ REFERENCES

K. F. Mak, K. He, J. Shan, and T. F. Heinz, Nature Nanotechnology 7, 494 (2012).
H. Zeng, J. Dai, W. Yao, D. Xiao, and X. Cui, Nature Nanotechnology 7, 490 (2012).
D. Xiao, G.-B. Liu, W. Feng, X. Xu, and W. Yao, Phys. Rev. Lett. 108, 196802 (2012).
J. F. Sierra, J. Fabian, R. K. Kawakami, S. Roche, and S. O. Valenzuela, Nature Nanotechnology 16, 856 (2021).
J. R. Schaibley et al., Nature Reviews Materials 1, 16055 (2016).
X. Zhang, Q. Liu, J.-W. Luo, A. J. Freeman, and A. Zunger, Nature Physics 10, 387 (2014).
S. Cho et al., Phys. Rev. Lett. 121, 186401 (2018).
Q. Liu, X. Zhang, and A. Zunger, Phys. Rev. Lett. 114, 087402 (2015).
M. H. D. Guimarães and B. Koopmans, Phys. Rev. Lett. 120, 266801 (2018).
C.-X. Liu, Phys. Rev. Lett. 118, 087001 (2017).
J. M. Riley et al., Nature Physics 10, 835 (2014).
E. Razzoli et al., Phys. Rev. Lett. 118, 086402 (2017).
J. Tu et al., Phys. Rev. B 101, 035102 (2020).
R. Bertoni et al., Phys. Rev. Lett. 117, 277201 (2016).
R.-Y. Liu et al., Scientific Reports 7, 15981 (2017).
E. J. Sie, T. Rohwer, C. Lee, and N. Gedik, Nature Communications 10, 3535 (2019).
J. Maklar et al., Review of Scientific Instruments 91, 123112 (2020).
S. Dong et al., Natural Sciences 1, e10010 (2021).
R. Roldán et al., Annalen der Physik 526, 347 (2014).
M. Z. Maialle, E. A. de Andrada e Silva, and L. J. Sham, Phys. Rev. B 47, 15776 (1993).
R. Schmidt et al., Nano Letters 16, 2945 (2016).
M. Selig et al., Nature Communications 7, 13279 (2016).
J. Lindlau et al., Nature Communications 9, 2586 (2018).
J. Madéo et al., Science 370, 1199 (2020).
A. Scholl, L. Baumgarten, R. Jacquemin, and W. Eberhardt, Phys. Rev. Lett. 79, 5146 (1997).
M. Cinchetti et al., Phys. Rev. Lett. 97, 177201 (2006).
A. Weber et al., Phys. Rev. B 84, 132412 (2011).
C. Cacho et al., Phys. Rev. Lett. 114, 097401 (2015).
C. Jozwiak et al., Nature Communications 7, 13143 (2016).
J. Sánchez-Barriga et al., Phys. Rev. B 93, 155426 (2016).
M. Battiato et al., Phys. Rev. Lett. 121, 077205 (2018).
R. Mori et al., Nature 614, 249 (2023).
M. Plötzing et al., Review of Scientific Instruments 87, 043903 (2016).
S. Eich et al., Science Advances 3, e1602094 (2017).
Z. Nie et al., Applied Sciences 9 (2019).
M. Fanciulli et al., Phys. Rev. Res. 2, 013261 (2020).
D. Bresteau et al., The European Physical Journal Special Topics (2023).
A. McPherson et al., J. Opt. Soc. Am. B 4, 595 (1987).
M. Ferray et al., Journal of Physics B: Atomic, Molecular and Optical Physics 21, L31 (1988).
L. Poletto et al., Opt. Lett. 32, 2897 (2007).
M. Escher, N. B. Weber, M. Merkel, L. Plucinski, and C. M. Schneider, e-Journal of Surface Science and Nanotechnology 9, 340 (2011).
S. Beaulieu et al., Phys. Rev. Lett. 125, 216404 (2020).
H. Rostami et al., Phys. Rev. B 100, 235423 (2019).
F. Caruso, M. Schebek, Y. Pan, C. Vona, and C. Draxl, The Journal of Physical Chemistry Letters 13, 5894 (2022).
G. Saathoff, L. Miaja-Avila, M. Aeschlimann, M. M. Murnane, and H. C. Kapteyn, Phys. Rev. A 77, 022903 (2008).
See Supplemental Material.
J. H. Dil, Electronic Structure 1, 023001 (2019).
M. Schüler et al., Science Advances 6 (2020).
http://arxiv.org/abs/2306.02121v2
20230603142515
Identifying Subgroups of ICU Patients Using End-to-End Multivariate Time-Series Clustering Algorithm Based on Real-World Vital Signs Data
[ "Tongyue Shi", "Zhilong Zhang", "Wentie Liu", "Junhua Fang", "Jianguo Hao", "Shuai Jin", "Huiying Zhao", "Guilan Kong" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CY", "math.OC" ]
This study employed the MIMIC-IV database as the data source to investigate the use of dynamic, high-frequency, multivariate time-series vital-sign data, including temperature, heart rate, mean blood pressure, respiratory rate, and SpO2, monitored during the first 8 hours of the ICU stay. Various clustering algorithms were compared, and an end-to-end multivariate time-series clustering system called Time2Feat, combined with K-Means, was chosen as the most effective method for clustering ICU patients. In the clustering analysis, data from 8,080 patients admitted between 2008 and 2016 were used for model development and from 2,038 patients admitted between 2017 and 2019 for model validation. Analysis of the differences in clinical mortality prognosis among the identified categories revealed varying risks of ICU mortality and hospital mortality across subgroups. Furthermore, the study visualized the trajectories of vital-sign changes. The findings provide valuable insights into the potential use of multivariate time-series clustering systems for patient management and monitoring in the ICU setting. § BACKGROUND The Intensive Care Unit (ICU) is a specialized medical facility that provides intensive monitoring and treatment for critically ill patients. ICU patients are characterized by severe illness and life-threatening conditions, requiring close monitoring and treatment. Changes in vital signs have multifaceted implications for patients. Existing research on patient subgroup analysis often focuses on single diseases and depends on cross-sectional analysis [1], and the value of dynamic multivariate time-series vital-sign data has not been fully utilized [2]. There is therefore a gap in the literature regarding the full use of time-series vital-sign data to explore subgroups of ICU patients for precision ICU care. § OBJECTIVES This study aimed to use an end-to-end multivariate time-series clustering algorithm to identify subgroups of ICU patients based on dynamic vital-sign data recorded during the first 8 hours after ICU admission, and then to explore the differences in prognosis among the patient subgroups. Overall, this study contributes to precision ICU care by classifying patients into subgroups that can support more personalized clinical interventions. § METHODS In this study, the Medical Information Mart for Intensive Care (MIMIC)-IV database [3] was used as the data source. The dynamic, high-frequency vital-sign data monitored in the ICU during the first 8 hours were used for analysis. We first used multivariate time-series clustering algorithms to group critically ill ICU patients, and then analyzed patient prognosis in the different subgroups to help clinicians identify patients at high mortality risk. All adult ICU patients in MIMIC-IV were included. In the clustering analysis, data from patients admitted between 2008 and 2016 were used for model development and from patients admitted between 2017 and 2019 for model validation. Variables including gender, age, race, height, weight, and date of death (DoD), together with the hourly monitored vital signs (temperature, heart rate, mean blood pressure, respiratory rate, and SpO2), were extracted. For patients with multiple ICU stays in one hospital admission, only the first ICU admission record was extracted. 
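As a rough illustration of the clustering workflow described in the Methods, the Python sketch below replaces Time2Feat with simple hand-crafted per-channel summary features (an assumption made only to keep the example self-contained) and then applies k-Means together with the elbow, CHI, and DBI criteria used for model selection; the array shapes and random data are hypothetical.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score, davies_bouldin_score

def summarize(X):
    """X: (n_patients, n_hours, n_channels) first-8-hour vital signs.
    Per-channel mean, std and linear slope as a stand-in for Time2Feat features."""
    t = np.arange(X.shape[1])
    mean, std = X.mean(axis=1), X.std(axis=1)
    slope = np.array([[np.polyfit(t, X[i, :, c], 1)[0] for c in range(X.shape[2])]
                      for i in range(X.shape[0])])
    return np.hstack([mean, std, slope])

def choose_k(features, k_range=range(2, 8), random_state=0):
    """Fit k-Means for each k and report inertia (elbow curve), CHI and DBI."""
    scores = {}
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(features)
        scores[k] = dict(inertia=km.inertia_,
                         chi=calinski_harabasz_score(features, km.labels_),
                         dbi=davies_bouldin_score(features, km.labels_))
    return scores

# Hypothetical example: 500 patients, 8 hourly samples, 5 vital-sign channels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8, 5))
features = StandardScaler().fit_transform(summarize(X))
for k, s in choose_k(features).items():
    print(k, {m: round(v, 2) for m, v in s.items()})
```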
Patients were excluded if they had missing values for the extracted variables. Patient prognoses, including ICU mortality and hospital mortality, were analyzed for each patient subgroup. Finally, the elbow method and metrics including the Davies-Bouldin Index (DBI) and the Calinski-Harabasz Index (CHI) were used to determine the optimal number of clusters (k) and the optimal clustering algorithm [4]. § RESULTS In this study, a total of 10,118 patients were included: 8,080 admitted between 2008 and 2016 and 2,038 admitted between 2017 and 2019. Several clustering models, including Time2Feat [5] combined with k-Means, k-Shape, k-Medoids, and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), were employed for analysis; the Time2Feat combined with k-Means model was finally selected as it had the best performance, with a CHI of 341.59 and a DBI of 5.92. According to the elbow method, the optimal number of clusters was determined to be 3 (k=3). In the model development process, the 8,080 patients admitted from 2008 to 2016 were divided into three subgroups; in the model validation process, the 2,038 patients admitted from 2017 to 2019 were likewise divided into three subgroups. As depicted in Figure 1, the vital-sign trajectories of the three identified subgroups are similar in both the model development and validation datasets. There are noticeable differences in the trajectories of heart rate, SpO2, temperature, and respiratory rate among the three subgroups, while the mean blood pressure trajectories show less apparent distinctions. Regarding hospital mortality, on the model development dataset the risks ranked from highest to lowest were Subgroup2 (0.1092±0.005), Subgroup1 (0.0875±0.0104), and Subgroup3 (0.0867±0.0048); on the validation dataset, the risks showed a consistent order: Subgroup2 (0.1245±0.0117), Subgroup1 (0.1218±0.0145), and Subgroup3 (0.1033±0.0113). Regarding ICU mortality, the risks ranked from highest to lowest were Subgroup1 (0.0485±0.0079), Subgroup2 (0.0468±0.0034), and Subgroup3 (0.0242±0.0026) on the model development dataset, and Subgroup2 (0.0436±0.007), Subgroup1 (0.0393±0.0086), and Subgroup3 (0.0234±0.0056) on the validation dataset. The ICU mortality difference between Subgroup1 and Subgroup2 was slight, and the smaller sample size of the validation dataset implies a certain margin of error. However, both Subgroup1 and Subgroup2 had higher ICU mortality rates than the overall rate (0.0353±0.0041). § CONCLUSION The multivariate time-series vital-sign data monitored during the first 8 hours after ICU admission can reflect the real conditions of patients and help to predict prognoses to some extent. Employing a proper multivariate time-series clustering algorithm to make secondary use of real-world vital-sign data recorded in the ICU can help clinicians identify distinct patient subgroups with different mortality risks. The Time2Feat method combined with k-Means used in this study showed satisfactory clustering performance. As a next step, we will generalize the time-series clustering approach to other diseases and refine the model in practical applications. This study was supported by grants from the Zhejiang Provincial Natural Science Foundation of China (Grant No. LZ22F020014), the National Key Research and Development Program of China (Grant No. 2018AAA0102100), the Beijing Municipal Science & Technology Commission (Grant No. 
7212201), and the Humanities and Social Science Project of the Chinese Ministry of Education (Grant No. 22YJA630036).

§ REFERENCES

[1] Liu K, Zhang X, Chen W, et al. Development and validation of a personalized model with transfer learning for acute kidney injury risk estimation using electronic health records. JAMA Network Open, 2022, 5(7): e2219776. doi:10.1001/jamanetworkopen.2022.19776.
[2] Tharakan S, Nomoto K, Miyashita S, et al. Body temperature correlates with mortality in COVID-19 patients. Critical Care, 2020, 24: 1-3. doi:10.1186/s13054-020-03045-8.
[3] Johnson A E W, Bulgarelli L, Shen L, et al. MIMIC-IV, a freely accessible electronic health record dataset. Scientific Data, 2023, 10(1): 1. doi:10.1038/s41597-022-01899-x.
[4] Kodinariya T M, Makwana P R. Review on determining number of cluster in K-Means clustering. International Journal, 2013, 1(6): 90-95.
[5] Bonifati A, Buono F D, Guerra F, et al. Time2Feat: learning interpretable representations for multivariate time series clustering. Proceedings of the VLDB Endowment, 2022, 16(2): 193-201. doi:10.14778/3565816.3565822.
http://arxiv.org/abs/2306.08995v1
20230615094345
Instability of the optimal edge trajectory in the Blasius boundary layer
[ "Miguel Beneitez", "Yohann Duguet", "Philipp Schlatter", "Dan S. Henningson" ]
physics.flu-dyn
[ "physics.flu-dyn", "math.DS" ]
In the context of linear stability analysis, considering unsteady base flows is notoriously difficult. A generalisation of modal linear stability analysis, allowing for arbitrarily unsteady base flows over a finite time, is therefore required. The recently developed optimally time-dependent (OTD) modes form a projection basis for the tangent space. They capture the leading amplification directions in state space under the constraint that they form an orthonormal basis at all times. The present numerical study illustrates the possibility of describing a complex flow case using the leading OTD modes. The flow under investigation is an unsteady case of the Blasius boundary layer, featuring streamwise streaks of finite length and relevant to bypass transition. It corresponds to the state space trajectory initiated by the minimal seed; such a trajectory is unsteady, free from any spatial symmetry, and shadows the laminar-turbulent separatrix for a finite time only. The finite-time instability of this unsteady base flow is investigated using the 8 leading OTD modes. The analysis includes the computation of finite-time Lyapunov exponents and instantaneous eigenvalues, as well as of the associated flow structures. The reconstructed instantaneous eigenmodes are all of outer type. They map unambiguously the spatial regions of largest instantaneous growth. Other flow structures, previously reported as secondary, are identified with this method as relevant to streak switching and to streamwise vortical ejections. The dynamics inside the tangent space features both modal and non-modal amplification. Non-normality within the reduced tangent subspace, quantified by the instantaneous numerical abscissa, emerges only as the unsteadiness of the base flow is reduced. § INTRODUCTION Hydrodynamic stability theory aims at characterising the stability of a given base flow to infinitesimal or finite-amplitude disturbances. In most academic cases, the base flow of interest is known analytically and is generally independent of time <cit.>. There are however physical contexts in which the choice of a physically relevant base flow is not obvious. 
Bypass transition to turbulence in shear flows falls into this category: there is ample experimental and numerical evidence that turbulent fluctuations emerge from the breakdown of laminar streamwise streaks of sufficiently strong amplitude <cit.> rather than from the destabilisation of the steady laminar base flow. Streamwise streaks, originally called Klebanoff modes, are loosely defined as spanwise modulations of the streamwise velocity field <cit.>. They are predominantly streamwise-independent structures supporting three-dimensional wiggles convected at different velocities <cit.>. Streaks are not associated mathematically to unstable eigenmodes of the purely laminar base flow, instead they emerge because of the non-normality of the associated linear operator <cit.> via a mechanism called lift-up. This mechanism transfers streamwise vorticity upstream into streaks further downstream <cit.>. Careful early experiments have suggested that their breakdown follows an instability mechanism <cit.>. The exact temporal dynamics of finite-amplitude streaks is however not trivial. In several numerical studies, a frozen (two-dimensional) finite-amplitude streak pattern was considered as a base flow, and its linear stability analysis was carried out by assuming that the perturbations are inviscid <cit.>. The unstable eigenfunctions identified break the translational invariance of the initial streaks. The main outcome of the stability analysis of streamwise-invariant streaks is the possibility for two different ways of breaking this streamwise invariance, either by symmetric (varicose) or anti-symmetric (sinuous) eigenmodes. Around that time, <cit.> made use of the concept of subcritical streak instability to justify the three-dimensionality of the self-sustaining process in all shear flows <cit.>. <cit.>, following <cit.>, showed that streamwise modulations of the streaks observed during transition, although possible as a linear instability of the frozen streaks, can also arise for lower streak amplitudes via non-normal amplification of streak disturbances over a finite-time. In a related study, the secondary instability of time-dependent streaks in channel flow was addressed by adopting a finite-time formalism by <cit.>. <cit.> studied the secondary instability of streaks via nonlinear impulse response. Linear stability features were later extracted directly from numerical data <cit.> by considering an instantaneous streamwise-independent base flow. More recently, the stability of streaks in turbulent flows was also considered by focusing on the associated mean flow rather than on instantaneous flow fields <cit.>. It remains hence an open question whether there are additional insights for stability analysis by considering fully unsteady three-dimensional base flows. This paper is devoted to a computational exploration of the possibilities offered by this approach. In the context of initial value problems, an initial condition at time t=t_0 is represented by a point in the associated state space. The knowledge of a given initial condition defines uniquely the base flow, i.e. the unsteady state space trajectory initiated by that particular initial condition. In principle, the arbitrary unsteadiness of the base flow is not an obstacle to modal linear stability analysis (LSA), at least when the base flow corresponds to an attractor defined over unbounded times. 
The generalisation of eigenvalues is given by (time-independent) Lyapunov exponents (LEs), defined as ergodic averages of the instantaneous divergence rate between trajectories <cit.>. The generalisation of the eigenvectors is given by the (time-dependent) covariant Lyapunov vectors (CLVs) <cit.>. Eventually, in the present study, an additional theoretical limitation is the requirement that the method be applicable to a base flow defined only over a finite-time interval. This requirement is made necessary by the convective nature of the boundary layer and the fact that any spatially localised perturbation to the Blasius flow has to exit a bounded computational domain in a finite time. In this context, most infinite-time concepts such as eigenvalues need to be formally redefined over the finite time interval of interest. While this does not pose any strong mathematical difficulty, it crucially determines the mathematical toolbox relevant for that problem. We are interested here in a base flow featuring streamwise streaks of finite length and width, with an unsteady dynamics. Since we wish to define the base flow in an unambiguous way, it is initialised at t=0 from a well-defined finite-amplitude perturbation to the original laminar Blasius flow. In the present context of identifying the mechanisms allowing for transition from a minimal level of disturbance, the selected initial condition is the laminar base flow, perturbed at t=0 by the so-called minimal seed <cit.>. The minimal seed is defined rigorously as the disturbance of lowest energy capable of triggering turbulence, or equivalently the point on the edge manifold closest to the laminar attractor in energy norm <cit.>. Its computation is based on a nonlinear optimization method <cit.> and in practice requires an optimization time interval (0,T_opt). The trajectory initiated by this flow field is called optimal edge trajectory. By construction it is an edge trajectory i.e. it belongs to the invariant set called the laminar-turbulent boundary: some infinitesimal perturbations to such trajectories lead to relaminarisation while others trigger turbulent flow. The concept of edge trajectory was originally introduced in bistable parallel shear flows <cit.>: the asymptotic fate of such edge trajectories form the edge state, a relative attractor in state space, whose stable manifold divides the state space in two disjoint and complementary basins. Its extension to boundary layer flows is trivial for parallel boundary layer flows <cit.> but less straightforward in spatially developing boundary layer flows like the Blasius boundary layer <cit.>. In such cases the concept of a turbulent attractor is not clearly defined, yet edge trajectories can still be identified, at least over finite times. In boundary layers, the edge concept becomes fragile on very long timescales because the laminar Blasius flow can develop instabilities to Tollmien–Schlichting waves over long time horizons <cit.>. In the absence of an asymptotic state, the stability of finite-time edge trajectories cannot be investigated using Lyapunov exponents and CLVs, all based on ergodic infinite-time averages. The generalisations of eigenvalues/LEs on finite times are, trivially, the finite-time Lyapunov exponents (FTLEs). Their large-time limits, when they are defined, coincide indeed with LEs <cit.>. The eigenvectors do not, however, admit any simple finite-time generalisation. We chose for this task the optimally-time dependent (OTD) modes introduced recently by <cit.>. 
The associated formalism has two advantages: it computes physically meaningful directions in the tangent space, and yields accurate numerical estimates of the FTLEs. OTD modes approximate the linearised dynamics <cit.> around the base flow trajectory in an optimal way, yet under the constraint that the modes remain orthogonal at all times. Orthogonality is not a property shared by CLVs. Handling an orthogonal basis is in practice a strong technical advantage over ill-conditioned bases. The trade-off is that the OTD modes do not fulfill the covariance property. Note that, when both are defined, the leading OTD mode still coincides with the leading CLV for sufficiently long times. The reduced linearised operator, obtained by projecting the original operator on the r first OTD modes, can be used to estimate the stability characteristics of the high-dimensional problem, otherwise prohibitively expensive to compute. In particular, the eigenvalues of this reduced-order operator yield an accurate approximation of the FTLEs of the full system <cit.>. Besides, whereas the OTD modes themselves are not interpretable physically, instantaneous eigenmodes can be reconstructed in physical space from the diagonalisation of the reduced order operator. As shown by <cit.> from specific examples, over shorter time horizons well-initialised OTD modes can capture the non-normality of the underlying dynamics. These properties make OTD modes an interesting tool specifically for transient phenomena. On a technical level, their implementation requires neither solutions of the adjoint system, nor data to be input, and no iterative scheme: the OTD modes are computed in real time together with the time-evolving base flow. They however need to be initialised at t=0. There is currently no accepted general way of choosing initial conditions for these modes, although it is expected that past some finite transient time the OTD directions naturally align with the most important directions of the system. OTD modes have been used recently in several hydrodynamic applications, including the identification of bursting phenomena <cit.>, the control of linear instabilities <cit.> and the stability of pulsating Poiseuille flow <cit.> as well as for faster edge tracking in high dimension <cit.>. The current investigation, motivated by these promising properties, is an opportunity to test a new computational framework for stability calculations considered until now as challenging. The present study revisits the optimal edge trajectory in the Blasius boundary layer by considering it as the new finite-time base flow, and by determining its stability characteristics using the new finite-time framework offered by OTD modes. In particular, the physical structure of the leading modes will be analysed at different times, with a focus on the influence of the time dependence of the base flow on the results. The structure of this paper is as follows. The OTD modes are introduced mathematically in a general context in Section 2. The computational set-up, the implementation, and the details of the reference edge trajectory are described in Section 3. Section 4 contains the stability analysis using the proposed methodology. Finally, the conclusions are given and discussed in Section 5. § THEORETICAL FRAMEWORK §.§ Linearisation around an arbitrary base flow The context of the current study is very general. Assuming that a spatially discretised flow field can be represented by n independent real-valued degrees of freedom with n≫ 1 (see e.g. 
<cit.>), we consider ℝ^n as the original high-dimensional space of reference. We suppose a non-autonomous dynamical system defined over a time interval [t_0,t_1): d Q/dt= f( Q,t), Q(t_0)= Q_0, where Q_0, Q∈ℝ^n, and f:ℝ^n→ℝ^n is a diffeomorphism. We suppose both t_0 and t_1 finite although t_1→+∞ is also possible. For a given choice of Q_0, we define the solution to Eq. (<ref>), namely Q̅:(t_0:t_1)→ℝ^n as the base flow whose stability we will now determine. Let q(t) represent a small perturbation to Q̅(t), small enough so that the dynamics can be linearised around Q̅(t) (mathematically q evolves in the tangent space associated with the dynamics). Then q is governed by the linearised equation d q/dt= L(Q̅,t) q, L(Q̅,t)=∇_ Q f(Q̅,t), where ∇_ Q f(Q̅,t) is the n × n (time-dependent) Jacobian matrix, evaluated along the base flow at time t. The OTD modes, to be introduced in Section <ref>, form a basis of time-dependent real-valued vectors (a complex-valued definition is also possible, but is not discussed here). They approximate in an optimal way the leading directions of the Jacobian operator. They are better understood after the notion of covariant vectors is discussed in Section <ref>. §.§ Covariance property An ideal basis for the linearised dynamics should allow one to split the whole n-dimensional tangent space into a direct sum of subspaces evolving along the flow, each one with its own specific dynamics <cit.>. The associated time-dependent directions spanning these subspaces are referred to as dynamically covariant. If the base flow Q̅ does not depend on time, the covariance property classically defines expanding and contracting eigenspaces, the covariant vectors are the associated eigenvectors, and that the temporal rate-of-change of their norm defines the eigenvalues. In the general case, these vectors are called covariant Lyapunov vectors (CLVs) or sometimes simply Lyapunov vectors. By definition, CLVs can be re-interpreted as zeros of the functional 𝒥=lim_δ t→ 01/(δ t)^2∑_i=1^n|| w_i(t+δ t)-(∇ F_t^t+δ t) w_i(t)||^2, where F_t^t+δ t:ℝ^n→ℝ^n is the infinitesimal forward propagator associated with Eq. (<ref>). The Jacobian matrix ∇ F_t^t+δ t maps a vector of the tangent space w_i(t) at time t to its image at the later time t+δ t in the corresponding tangent space. The issues associated with CLVs are two-fold. First, they are not necessarily mutually orthogonal at a given time, making the associated basis possibly ill-conditioned. Second, the only algorithms known to compute them are proven to be valid only on attractors on which the dynamics is ergodic <cit.>. CLVs are thus essentially an inappropriate computational tool for the study of transients. §.§ Optimally time-dependent modes <cit.> defined the OTD modes u_i, i=1,...,r as a computational compromise. These are minimisers of the functional 𝒥 in Eq. (<ref>), under the additional constraint that they form an orthonormal basis at all times. The orthonormality constraint simply reads ⟨ u_i(t), u_j(t)⟩=δ_ij, i,j=1,..,r, r≪ n, where ⟨·,·⟩ denotes the inner product associated with the L^2 norm and δ_ij is the classical Kronecker symbol. The time evolutions of the OTD modes and of a set of initially random unit vectors are depicted schematically in Fig. <ref> for illustration purposes. Random unit vectors would follow the tangent dynamics and all align rapidly with the most expanding direction, making them poor candidates to describe and analyse the dynamics of the tangent space. 
OTD modes follow the tangent dynamics but stay orthonormal at all times, avoiding any alignment issues which might occur using e.g. CLVs. This constraint destroys the interpretability of each OTD direction in terms of covariant dynamics. However, the orthonormality of the OTD modes is particularly appealing for reduced order modelling, for instance in the context of control <cit.>. As derived in <cit.>, the maximisation of 𝒥 in Eq. (<ref>) under the orthogonality constraint (<ref>) yields a system of coupled nonlinear evolution equations: d u_i/dt = L(t) u_i-∑_j=1^r [⟨ L(t) u_i, u_j⟩- A_ij(t)] u_j, i=1,..,r The nonlinearity is a direct consequence from the orthonormality constraint. The matrix 𝐀∈ℝ^r× r refers a priori to any skew-symmetric matrix. The discretised equations eq. (<ref>) for i=1,...,r form, together with eq. (<ref>), a closed (r+1)-dimensional system of real-valued ODEs. The u_i's depend on the instantaneous vector 𝐐̅(t), however 𝐐̅ itself is unaffected by the evolution of the u_i's. In <cit.>, a particular choice of 𝐀 was made: A_ij = {[ -⟨L u_j, u_i ⟩, j<i; 0, i=j; ⟨ L u_i, u_j ⟩, j>i, ] which was also considered here. An arbitrary choice of 𝐀 would lead to a fully coupled system so that each u_i appears in every equation in eq. (<ref>). However, under the current choice the set of r equations in eq. (<ref>) has a lower triangular form: the evolution of the i^th mode depends only on the modes from 1 to i, making the OTD formulation hierarchical. The resulting evolution equation for each OTD mode becomes d u_i/dt = L(t) u_i-⟨ L(t) u_i, u_i ⟩ u_i-∑_j=1^i-1 [⟨ L(t) u_i, u_j⟩ + ⟨ L(t) u_j, u_i⟩ ] u_j. The nonlinear system of r equations (<ref>) can be evolved forward in time together with Eq. (<ref>), from which the matrix L can be evaluated at all times. Eq. (<ref>) remains however independent of the evolution of each u_i. This results in an (r+1)× n-dimensional asymmetrically coupled dynamical system. The OTD modes retain a (short-time) memory of their initial conditions. They are in general not covariant, except for base flows such that all instantaneous eigenvectors remain normal to each other. §.§ The reduced linearised operator In order to analyse the linearised dynamics within the reduced subspace optimally spanned by the OTD modes, <cit.> introduced the reduced operator L_r defined by projecting the high-dimensional operator L onto the OTD directions: L_r_ij(t) = ⟨ u_i, L(t) u_j⟩ i,j=1,…,r. In particular all the instantaneous stability indicators defined in the next subsection will be derived from algebraic properties of the r × r matrix L_r evaluated at the relevant times. For a time-independent linearised operator L the space spanned by the modes {u_i}_i=1^r converges asymptotically to the most unstable eigenspace of L. Moreover, if L happens to be also symmetric, the OTD modes coincide with its eigenvectors at all times. §.§ Instantaneous stability indicators As emphasized in <cit.>, although they correspond to divergence-free vector fields the u_i's do not have a direct physical interpretation as flow fields. More meaningful vector sets can nevertheless be reconstructed instantaneously from the knowledge of the u_i's and the reduced operator. At every time, L_r(t) can be diagonalised as L_r= E^λΛ_r( E^λ)^-1 with E^λ and Λ_r=diag(λ_1(t),...,λ_r(t)). The new modes u_i^λ, i=1,...,r are defined (using the summation convention) by u_i^λ(t)= E^λ_ij(t) u_j(t). Unlike the u_i's, the u_i^λ's are not necessarily mutually orthogonal. 
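A minimal numerical sketch of the OTD evolution equations with the lower-triangular choice of A is given below, in Python. The time-dependent operator L_op is a hypothetical 3×3 toy stand-in for the discretised Jacobian, the integrator is a plain explicit Euler scheme, and an explicit QR re-orthonormalisation is added to control round-off drift (in exact arithmetic the equations preserve orthonormality); none of this reflects the SIMSON implementation used later in the paper.

```python
import numpy as np

def L_op(t):
    """Toy time-dependent linearised operator L(t) (hypothetical, n = 3)."""
    return np.array([[0.1 + 0.2 * np.sin(t), 1.0, 0.0],
                     [0.0, -0.3, 2.0 * np.cos(t)],
                     [0.0, 0.0, -0.5]])

def otd_rhs(U, L):
    """du_i/dt = L u_i - <L u_i, u_i> u_i - sum_{j<i} (<L u_i, u_j> + <L u_j, u_i>) u_j."""
    r = U.shape[1]
    LU = L @ U
    dU = np.empty_like(U)
    for i in range(r):
        dU[:, i] = LU[:, i] - (U[:, i] @ LU[:, i]) * U[:, i]
        for j in range(i):
            dU[:, i] -= (U[:, j] @ LU[:, i] + U[:, i] @ LU[:, j]) * U[:, j]
    return dU

def integrate_otd(r=2, t_end=20.0, dt=1e-3, seed=1):
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.normal(size=(3, r)))    # orthonormal initial OTD basis
    t = 0.0
    while t < t_end:
        U = U + dt * otd_rhs(U, L_op(t))            # explicit Euler, for brevity
        U, _ = np.linalg.qr(U)                      # re-orthonormalise against drift
        t += dt
    Lr = U.T @ L_op(t) @ U                          # reduced operator L_r(t) on the OTD basis
    return U, Lr

U, Lr = integrate_otd()
print("orthonormality error:", np.abs(U.T @ U - np.eye(U.shape[1])).max())
```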
They are interpreted as instantaneous eigenmodes. They are the only velocity fields used for visualisation in this paper. We emphasize that, although the u_i's are real-valued, the modes u_i^λ are complex-valued and come in pairs. Only the real parts, with arbitrary phase, of the associated velocity fields will be represented. The time-dependent numbers λ_1(t),...,λ_r(t), i=1,...,r are labelled instantaneous eigenvalues and are complex-valued. Another key scalar quantity is the instantaneous numerical abscissa σ(t), a positive number, defined as the largest eigenvalue of the symmetrised reduced operator ( L+ L^T)/2 <cit.>. σ corresponds to the largest possible growth rate at a given time, due to both normal and non-normal effects combined together. Whenever L is non-normal, σ is strictly larger than the real part of all λ_i's. For a given integer value r, it is a natural extension to define σ_r as the largest eigenvalue of the symmetrised reduced operator ( L_r+ L_r^T)/2. The gap g_r(t):=min_i|σ_r-Re(λ_i)|=σ_r-Re(λ_1) quantifies the non-normality of the reduced linearised operator at each instant. The values of g_r(t) bound from below the value of g_n(t) corresponding to the full high-dimensional system. In the remainder of the paper we will not make a difference between g_r and g_n and will simply use the notation g(t). Note that for arbitrary time-dependent operators and finite r, it is possible to have σ_r ≈maxλ_i even for a non-normal operator. §.§ Finite-time Lyapunov exponents For a general dynamical system in dimension n, characterised by a propagator F_t_0^t, the Cauchy-Green tensor 𝐂_t_0^t is defined as C_t_0^t( q_0) :=[∇ F_t_0^t( q_0)]^T [∇ F_t_0^t( q_0)]. The associated finite-time Lyapunov exponents (FTLEs) are defined directly from the eigenvalues γ_1>γ_2>...>γ_n of the Cauchy-Green tensor <cit.>. These eigenvalues are real and positive by virtue of the positive definiteness of the Cauchy-Green tensor. Each FTLE is defined, for an initial time t_0 and a horizon time T>0 <cit.>, as (Λ_t_0^t_0+T)_i=1/Tlog√(γ_i), i=1,...,n. <cit.> provided an analytical proof that for that for any integer r>0, the r-dimensional OTD-subspace aligns exponentially fast, i.e. for increasing T with the space spanned by the r most dominant left vectors of the Cauchy-Green tensor. The exact rate of convergence depends on the spectrum of the problem at hand. The OTD formulation is hence a robust direct method to estimate the r leading finite-time Lyapunov exponents (Λ_t_0^t_0+T)_i, i=1,...,r of the full system at any time t_0 <cit.>, provided the time horizon T is large enough. As a consequence the FTLEs (Λ_t_0^t_0+T)_i, i=1,...,r are evaluated simply by averaging over time the diagonal elements of L_r <cit.>. They are expressed as (Λ_t_0^t_0+T)_i ≈1/T∫_t_0^t_0+T⟨ u_i(τ), L_r(τ) u_i(τ)⟩ dτ, i=1,...,r, It can be useful to relate the OTD modes to other known vector sets from the literature beyond the CLVs. The Gram-Schmidt vectors are precisely involved in the classical algorithms used for computing finite-time Lyapunov exponents and hence LEs (see e.g. <cit.>). The OTD modes coincide with the so-called Gram-Schmidt vectors, at least in the limit where the Gram-Schmidt vectors are continuously re-orthogonalised <cit.>. The same modes have also been called sometimes backwards Lyapunov vectors <cit.>. Unlike the CLVs, the OTD modes depend on the choice of the inner product except for the leading mode. 
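The instantaneous diagnostics introduced in this section can be assembled directly from the reduced operator. The Python sketch below computes the instantaneous eigenvalues, the numerical abscissa σ_r, the non-normality gap g_r, the reconstructed eigenmodes u_i^λ, and FTLE estimates from the time average of the diagonal of L_r; the sample history of reduced operators and the basis dimensions are hypothetical and serve only to show the quantities involved.

```python
import numpy as np

def instantaneous_diagnostics(Lr):
    """Eigenvalues lambda_i(t), numerical abscissa sigma_r(t) and gap g_r(t) = sigma_r - max Re(lambda)."""
    lam, E = np.linalg.eig(Lr)                          # columns of E are eigenvectors of L_r
    sigma_r = np.linalg.eigvalsh(0.5 * (Lr + Lr.T)).max()
    return lam, E, sigma_r, sigma_r - lam.real.max()

def reconstruct_eigenmodes(U, E):
    """Instantaneous eigenmodes u_i^lambda as combinations of the OTD modes (complex in general)."""
    return U @ E

def ftle_estimates(Lr_history):
    """FTLEs from the time average of diag(L_r(t)); uniform sampling in time is assumed."""
    return np.mean([np.diag(Lr) for Lr in Lr_history], axis=0)

# Hypothetical data: an orthonormal basis of r = 3 modes in a 10-dimensional space and a sampled L_r history.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(10, 3)))
Lr_history = [np.diag([0.2, -0.1, -0.4]) + 0.05 * rng.normal(size=(3, 3)) for _ in range(101)]

lam, E, sigma_r, gap = instantaneous_diagnostics(Lr_history[-1])
modes = reconstruct_eigenmodes(U, E)
print("Re(lambda):", np.sort(lam.real)[::-1], " sigma_r:", round(float(sigma_r), 3), " gap:", round(float(gap), 3))
print("FTLE estimates:", ftle_estimates(Lr_history))
```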
§ COMPUTATIONAL SET-UP §.§ Direct numerical simulation The Blasius boundary layer is the incompressible flow over a semi-infinite flat plate. It develops at the leading edge of the plate in the absence of a streamwise pressure gradient. Let x,y,z denote the streamwise, wall-normal and spanwise directions, respectively. 𝐯 is the total velocity field, 𝐯_B=(u_B,v_B,0) that of the steady Blasius solution, then 𝐮:=𝐯-𝐯_B=(u,v,w) is the perturbation velocity field. All quantities are made non-dimensional using the free-stream velocity U_ ∞ and the boundary layer thickness δ^*(x):=∫_0^∞ (1-u_B(x)/U_∞)dy of the undisturbed (steady) Blasius flow. A local Reynolds number can be defined as Re_δ^*(x):=U_∞δ^*/ν with ν the kinematic viscosity of the fluid. The value of Re_δ^*_0=Re_δ^*(x=0) is imposed at the upstream end (x=0) of the computational domain, located at a finite distance downstream of the leading edge. The boundary conditions for the edge trajectory at the wall (y=0) are of no-slip and no-penetration type, u=v=w=0 , and at the upper domain boundary (y=L_y) of Neumann type to allow for a natural growth of the boundary layer, ∂ u/∂ y=∂ v/∂ y=∂ w/∂ y=0. A fringe region located at the downstream end of the domain damps outgoing velocity perturbations consistently with the streamwise periodic boundary conditions. The fringe is imposed as a volume force 𝐅(t,x,y,z) of the form 𝐅=γ (x)( 𝒰(x,y,z)-𝐯(t,x,y,z)), where γ(x) is a non-negative fringe function detailed in <cit.>. The streamwise component of 𝒰(x,y,z) is defined as 𝒰_x=U(x,y,z)+[U(x+x_L,y,z)-U(x,y,z)]S(x-x_blend/Δ_blend), where S(x_blend,Δ_blend) is a blending function connecting smoothly the outflow to the inflow, and U(x,y,z) solves the boundary layer equations. The wall-normal component of 𝒰 is obtained via the continuity equation. In the present work the fringe length is Δ_blend=600, x_L=2500 and γ_max=0.8. The present approach has been successfully applied in most works referenced in <cit.>, and in several later publications including <cit.>. The effect of the fringe on outgoing perturbations, allowing for the simulation of spatially developing flows in the presence of periodic boundary conditions was analysed in full mathematical detail in <cit.>. The temporal integration of the incompressible Navier–Stokes equations is performed using the pseudo-spectral solver SIMSON <cit.>. This direct numerical simulation (DNS) code solves the equations in the wall-normal velocity-vorticity formulation. The solution is advanced in time using a second-order Crank-Nicholson scheme for the linear terms and a fourth-order low-storage Runge-Kutta scheme for the nonlinear terms. The timestep is fixed to Δ t=0.2 in terms of U_∞ and δ_0^*. The velocity field is expanded along N_x Fourier modes in the streamwise direction x and N_z modes in the spanwise direction z, N_y Chebyshev modes are used in the wall-normal direction y using the Chebyshev-tau method. The evaluation of the nonlinear terms obeys the 3/2-rule for dealiasing. The additional equations (<ref>) ruling the evolution of the OTD modes are advanced in time using the same scheme, based on an explicit evaluation of the inner products at every collocation point at every timestep. The initial conditions for the modes i=1,..,r are spatially localised disturbance velocity fields, consistent with the localised nature of the perturbations to the streaks observed in bypass transition. The boundary conditions for equations (<ref>) are the same as for the original DNS. 
The choice of boundary conditions is particularly sensitive for the perturbation equations. Further details can be found in Appendix <ref>. The computational requirements for each individual OTD mode are the same of a full DNS. Although, as described in <cit.>, it would be possible to use a moving box technique to track localised disturbances over long time horizons using limited computational resources, this is not required here because of the limited tracking time. The reference frame is hence understood as the laboratory frame. The computational set-up for the edge tracking is similar to that in <cit.>. The computational domain Ω has dimensions [L_x,L_y,L_z]= [2500,60,100] and the velocity field is expanded on [N_x,N_y,N_z]=[2048,201,256] modes before dealiasing. This numerical resolution is comparable locally to that used in <cit.> and <cit.>. The computation of the OTD modes starts at initial time t=0 from the (spatially localised) minimal seed computed in <cit.>. It ends at t=800, at which time the localised perturbation has not yet left the computational domain. The computation of the OTD modes in (<ref>) depends on the definition of the inner product, chosen here as ⟨u,u'⟩ = ∫_Ω (uu'+vv'+ww')dΩ, where u=(u,v,w) and u'=(u',v',w') are any two flow fields with finite L^2 norm, and dΩ=dxdydz is the usual infinitesimal integration element over the numerical domain Ω. §.§ The optimal edge trajectory The minimal seed M refers to the perturbation closest in kinetic energy to the laminar Blasius boundary layer flow, and able to trigger subcritical transition. This particular optimal condition was selected because it gives rise to a fully nonlinear trajectory relevant for the method tested. Moreover, it is uniquely defined by the parameters for the optimisation algorithm, namely here the Reynolds number value Re_δ_0^*=240.458. That value of Re_δ(x=0) is chosen to match previous works <cit.>, in particular the original work by <cit.> where the non-dimensionalisation differs from the present one. Note that in parallel flows the Reynolds number entirely defines the dynamical system, however in spatially developing flows the Reynolds number is intrinsically linked to the streamwise coordinate. Consequently, the minimal seed is conditioned by the range of Reynolds numbers (streamwise distances) allowed for in the time evolution of the perturbations. This results in the minimal seed being dependent on the inlet Reynolds number, on the length of the computational domain and on the optimization time <cit.>. In <cit.> the chosen optimization time is T_opt=400 and the computational domain length L_x=500. M is computed iteratively using the nonlinear adjoint-based optimization framework of <cit.>, <cit.>. The maximised objective function is the energy gain at a given time T_opt, G(T_opt)=E(T_opt)/E(0) where E(t) is the perturbation kinetic energy at time t. The optimization framework follows <cit.> and is based on the implementation into the open-source solver Nek5000 originally implemented by <cit.>. The optimal state determined for a near-to-threshold initial energy E_0 was bisected using an edge tracking algorithm <cit.>, so that the computed trajectory approximates well an edge trajectory for t≤ 800, the bracketing trajectories differing by less than 2% in the observable used for edge tracking. This property is crucial for the stability study: initialising the base flow for the OTD analysis from outside the edge manifold would possibly result in a different transition scenario, as reported e.g. 
in <cit.>. Although the investigation in <cit.> warned against the possible interference between edge trajectories and unstable Tollmien-Schlichting waves over timescales 𝒪(10^4), no such phenomenon will be encountered with the present set-up, since the considered observation time is 𝒪(10^3). A state portrait is shown in Fig. <ref>, based on the three global quantities already used in previous studies <cit.>. Ω_x=(δ_0^*/δ)^1/2(1/V∫_V|ω_x|^2dv)^1/2, Ω_y=(δ_0^*/δ)^1/2(1/V∫_V|ω_y|^2dv)^1/2, W=(δ_0^*/δ)^3/2(1/V∫_V|w|^2dv)^1/2. The quantities ω_x and ω_y are the streamwise and wall-normal perturbation vorticity components, respectively, and the integration is carried out over the computational domain of volume V. The prefactors in powers of (δ_0^*/δ) make use of the value of the boundary layer thickness evaluated at the center of mass, see <cit.>. In Fig. <ref>, the edge trajectory is highlighted using a thicker (green) line, with equispaced dots every 50 time units marking the time interval t∈[0,800] considered in this study. The thinner lines in red and blue correspond to trajectories closely bracketing the edge trajectory. It is useful to recall the main features of the unsteady base flow reported by <cit.>. For early times t ≤ 60 the dynamics is dominated by a three-dimensional version of the Orr mechanism <cit.>, where vortical disturbances initially tilted against the mean shear progressively untilt as time increases. For 60 ≤ t ≤ 200 the lift-up mechanism takes over and a pair of streamwise streaks forms. Both mechanisms are known to be non-modal, the stronger energy amplification being associated with the lift-up <cit.>. For t ≥ 200 the energy growth slows down. Snapshots of the velocity field along the optimal edge trajectory are shown in Figs. <ref> at times t=100, 280 and 720. From t ≥ 100 onwards, the edge trajectory consists of a localised pair of high- and low-speed streaks <cit.> with an undulation linked to oblique waves. It experiences a couple of streak-switching events around t≈ 500 and t ≈ 700. The streaks elongate with time but always remain localised in the streamwise and spanwise directions. By construction, typical infinitesimal perturbations of this unstable flow field will make it evolve either towards an incipient turbulent spot or towards the laminar state. It is precisely their state space location on the verge of bypass transition that makes edge trajectories a relevant choice as a base flow <cit.>. Imposing an optimality condition has the advantage of making the current trajectory well-defined. § RESULTS This section is devoted to the analysis of the stability properties of the optimal trajectory described in Section 3 using r=8 OTD modes. The choice of 8 modes aims at producing the largest possible subspace while keeping the simulations computationally feasible. The cost of each OTD mode is comparable to that of an additional DNS to be run in parallel to the original base flow. Moreover, the number of modes is comparable with that used in previous simulations of similar scale <cit.>. We restrict our study to the time interval t∈ [0,800]. §.§ Finite-time stability analysis §.§.§ Instantaneous growth rates We begin by reporting the real part of the instantaneous eigenvalues λ_i(t), i=1,..,r, computed over the whole trajectory. They are shown together with the instantaneous numerical abscissa σ(t) versus time in Fig. <ref>. 
The gap g(t)=σ(t) - Re(λ_1), which quantifies the instantaneous non-normality of the reduced operator, is displayed as a black line in Fig. <ref>. These quantities have all been defined in Section <ref>. The time series of these instantaneous growth rates can be grossly divided into two phases. In the initial phase for t ≲ 100, the two leading growth rates vary rapidly in time while the others are all negative. In a second phase starting at t ≈ 100, Re(λ_1) dominates in the range 0.2–0.3, with a slight decaying trend as time increases. All other eigenvalues remain close to zero in real part, never exceeding 0.1. A quick glance at the state portrait in Fig. <ref> suggests that this second phase corresponds to a clear slowdown of the dynamics of the base flow itself. If the dynamics is quasi-steady, it is expected that the stability properties of the edge trajectory mimic qualitatively the stability properties of steady/travelling edge states reported in other shear flow studies: one large dominating unstable eigenvalue, representing a strong instability in a direction transverse to the edge manifold, associated with many other eigenvalues of lesser magnitude responsible for the slow chaotic fluctuations within the edge manifold <cit.>. This expectation is largely confirmed by Fig. <ref> for t ≥ 100. A finer analysis of the fluctuations of the growth rates is possible both in the initial and the quasi-steady phases. This is achieved by focusing on the gap g(t), interpreted as a measure of instantaneous non-normality within the OTD subspace. For the initial times t ≲ 50, σ=Re(λ_1)=Re(λ_2)>0. After t=50, the gap g rises from zero to a maximum of about 0.4. It later decreases to smaller values of ≈ 0.1. As for the other λ_i's, they are all negative at t=0 but grow at the same pace and cross zero at t ≈ 100. At later times, all instantaneous growth rates stabilise, while Re(λ_1) decreases gently in a non-monotonic manner, and g(t) oscillates around low values ≈ 0.05–0.1. The fact that the peak of g(t) occurs before t=100 is consistent with the reported occurrence of purely non-normal Orr and lift-up mechanisms along the edge trajectory for these times <cit.>. The sensitivity of the edge trajectory appears high where the edge trajectory also experiences strong non-normal amplification. However, the fact that g ≈ 0, i.e. σ=λ_1 at the earliest times t ≤50 may be wrongly attributed to a lack of non-normal potential of L(t). To start with, this is a property of the instantaneous reduced operator L_r(t) computed for a given value of r, not necessarily of the full operator L(t). The reverse is yet true: non-normal features of the reduced-order operator L_r(t) carry over to L(t). Moreover, after trying several different initialisations this result was found to depend crucially on the choice of the OTD basis for t=0, at least over early times t≤ 50. This makes it difficult to draw general conclusions for short enough times, consistently with the study of <cit.>. This is possibly confirmed by the very transient behaviour of the eigenvalues λ_2 to λ_8. From t≈ 50 on, the non-normal potential within the OTD subspace is high again as expected, judging from the large values of g(t), and transient effects due to the initialisation of the OTD modes can be neglected. A peak at t ≈ 60, and a smaller one at t≈ 100, are evident in the data for σ(t) in figure <ref>(a). These times are perfectly consistent with the occurrence of both the Orr and the lift-up mechanisms described in <cit.>. 
Two additional bumps for both g(t) and σ(t) can also be seen at t≈ 550 and t ≈ 720. According to <cit.>, these two times correspond to streak switching events. This suggests that streak-switching events, themselves an inherent part of the self-sustained mechanism <cit.>, are linked to stronger non-normality than the rest of the edge trajectory. Finally, Fig. <ref> also shows the norm of the time-derivatives of the three observables Ω_x, Ω_y and W used in Fig. <ref>. This quantity is defined as ξ via ξ(t) = √((dΩ_x/dt)^2+(dΩ_y/dt)^2+C(dW/dt)^2), where C is a unity-valued constant ensuring the correct dimensionality. It is plotted in Fig. <ref> in connection with the time evolution of g(t). We now analyse these quantities by considering consecutive sub-intervals of the edge trajectory starting from the minimal seed: (i) t∈[0,60] (Orr mechanism in the base flow) corresponds to a very rapid evolution of the observables, reflected in ξ(t). The OTD modes, however, take time to catch up with non-normality until t≈ 80, as shown by g(t). (ii) t∈[60,200] corresponds to the lift-up effect in the base flow, associated with non-normal growth. Here, g(t) appears largest for t≈ 80 and decreases rapidly until t≈ 130, where a change in the slope of g(t) can be noticed. ξ(t) mirrors this behaviour, suggesting that non-normality is decreasing as the lift-up of the base flow ends. (iii) The trajectory has reached the relative attractor past t≥ 200. In this stage we observe that the slow-down of the dynamics indicated by ξ(t) corresponds to higher values of g(t), and vice versa. This can be seen in the intervals t∈ [300,400], where the dip in g(t) corresponds to a peak in ξ(t), and in t∈ [500,600], where an increase in g(t) corresponds to a dip in ξ(t). §.§.§ Characterisation as an outer mode instability In the original study on streak breakdown by <cit.>, where the base flow consists of a quasi-steady localised streak rather than a time-dependent one, a distinction was made between two types of modes. The main criterion is the wall-normal position of the energy of each mode with respect to the location of the critical layer, the latter being known from inviscid analysis. The modes with a critical layer close to the wall (such as Orr-Sommerfeld modes) are denoted as inner modes, while those with a critical layer in the free-stream are denoted as outer modes. Another characterisation of the inner vs. outer mode distinction, also suggested by <cit.>, relies on the relation between the growth rate of the mode and the streak amplitude. Although the present context differs, notably because of the unsteady aspect of the streaks, such a characterisation can also be applied to the modes determined by our method. Fig. <ref> shows the real parts of the two largest instantaneous eigenvalues, Re(λ_1,2), plotted vs. Ω_x in Fig. <ref>(a), and vs. the volume-averaged energy of the spanwise velocity component ||w||^2. These quantities are used as a proxy for the instantaneous amplitude of the streaky edge state. This can be directly compared to Fig. 7 from <cit.>, where the growth rate is plotted versus the streak amplitude (called A_u). The corresponding figure was used to define a classification of the instability mechanisms: an inner mode refers to an instability mode present for arbitrarily small values of A_u, in contrast with outer modes which are not found for vanishing streak amplitude. In the present case, positive growth rates are only found for non-vanishing values of the observable Ω_x ≥ 0.05. 
Interpreting Ω_x as an alternative definition of streak amplitude unambiguously indicates that the dominant instability of the edge state should be classified as an outer mode instability. §.§.§ Finite-time Lyapunov exponents Fig. <ref>(a) shows distributions of FTLEs (Λ_t_0^t_0+T)_i (i=1,..,8) computed within the interval t_0 ∈ [0,800]. Fig. <ref>(b) is similar except that the values of t_0 are restricted to the sub-interval t_0 ∈ [100,800]. In both plots the time horizon T takes increasing values from 10 to 70. Comparing the different values of T essentially confirms the robustness of the FTLE distributions with respect to the time horizon. The reason why all FTLEs from i=1 to 8 are reported together is the frequent change in the ordering of the growth rates, occurring every time an eigenvalue crossing takes place <cit.>. The many negative occurrences in Fig. <ref>(a), as well as the largest occurrences (≥0.3), can be attributed to the choice of initial conditions for the OTD modes, including accidentally co-aligned disturbances. A potential improvement of the initial conditions could be the computation of the eigenvectors associated with the minimal seed, by assuming no time dependency. This would result in eigendirections already within the initial tangent space. Even though there is no guarantee that these directions will remain in the tangent space at later times, they can be expected to be physically relevant at least for the initial times. These occurrences indeed disappear entirely in Fig. <ref>(b), once the first 100 time units have been discarded, consistently with the results of Section <ref>. Since the original bisection algorithm is essentially a shooting method <cit.>, we expect one of the FTLEs to be the signature of the instability of the edge manifold. In other words, this FTLE is associated with an unstable direction pointing transversally to it. The other additional positive FTLEs have no choice but to be associated with the weak apparent unsteady dynamics taking place within the relative attractor, rather than transversally to it. This conclusion is consistent with the results of Section <ref>. The two peaks in Fig. <ref>(a), close to 0.15 and 0.3, correspond to a higher number of occurrences. They can be related respectively to the slow and fast separation of vortical disturbances, later to be shed from the main edge structure, see <cit.>. §.§.§ Local expansion rates When dealing with proper attractors defined over unbounded times, it is common to estimate numerically their dimension. Among the different possible definitions, the Kaplan-Yorke dimension D_KY is of interest, because it only requires the values of the leading Lyapunov exponents λ_i, once ranked in descending order λ_1>λ_2>...>λ_r. It is defined as D_KY=j+S_j/|λ_j+1|, where S_j is the cumulative sum S_j=∑_i=1^jλ_i, and j is the only integer such that S_j>0 but S_j+1<0. In the present case the long-time Lyapunov exponents λ_1,… cannot be computed since the dynamics takes place over finite times. The above definition can however be generalised to finite-time problems by considering either the instantaneous or the finite-time exponents <cit.>. The current analysis is based on the sum S_j, rather than on the effective dimension D_KY, which can be constructed from S_j in eq. <ref> only if r is large enough. Indeed with the present value of r=8, there are not enough negative exponents to define D_KY according to eq. (<ref>). 
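As a purely numerical illustration of the definitions above (the exponent values used are invented for the example and are not data from the present computation), the cumulative sums S_j and, when enough negative exponents are available, the Kaplan-Yorke dimension can be evaluated with a few lines of Python:

import numpy as np

def cumulative_sums(exponents):
    # S_j = sum of the j leading exponents, ranked in descending order
    lam = np.sort(np.asarray(exponents, dtype=float))[::-1]
    return np.cumsum(lam)

def kaplan_yorke(exponents):
    # D_KY = j + S_j/|lambda_{j+1}|, defined only if some partial sum becomes negative
    lam = np.sort(np.asarray(exponents, dtype=float))[::-1]
    S = np.cumsum(lam)
    negative = np.where(S < 0)[0]
    if negative.size == 0:
        return None        # not enough negative exponents (the situation met here with r = 8)
    jp1 = negative[0]      # 0-based index of the first negative partial sum
    return jp1 + S[jp1 - 1] / abs(lam[jp1]) if jp1 > 0 else 0.0

# hypothetical spectrum, for illustration only
lam_example = [0.30, 0.15, 0.05, 0.01, -0.02, -0.10, -0.25, -0.60]
print(cumulative_sums(lam_example))   # partial sums stay positive until the last entry
print(kaplan_yorke(lam_example))      # approx. 7.23 for this made-up spectrum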
Geometrically, S_j(t) is understood as the instantaneous rate-of-change of the volume of an infinitesimal state space element defined in the corresponding j-dimensional subspace <cit.>. In Fig. <ref> we show the cumulative sum S_j(t) as a function of time, computed in two different ways. Fig. <ref>(a) has S_j(t) based on the instantaneous growth rates Re(λ_i), i=1,..,j. Fig. <ref>(b) has S_t_0^t_0+T based on the FTLEs Λ_t_0^t_0+T, which are computed over an entire time interval. It is observed that, for j≤ 8 and t≤800, both cumulative sums never become negative. This confirms that the instantaneous and the finite-time Kaplan-Yorke dimension of the underlying relative attractor are both strictly larger than 8. Interestingly, S_j decreases with t_0, up to t=550, for all j's. For the last values of t_0 plotted, S_j even eventually decreases with j, which suggests that instantaneous eigenvalues with negative growth rate start to contribute to the instantaneous/FTLE spectrum at later times. From a geometric point of view, the fact that S_j stays always positive suggests that the volume of infinitesimal state space elements of the reduced r-dimensional space grows with time. This is in contrast with the full n-dimensional space where such a volume has to decrease, since the original dynamical system (<ref>) is dissipative. In other words, the present reduction, with the choice of r=8, does not incorporate enough dissipative modes, only active modes. Conducting a similar numerical experiment with much larger r is as of today too demanding in terms of memory requirements, at least for the Blasius flow. §.§.§ Summary The main learnings from the OTD stability analysis restrained to r=8 modes are the following: the dominant edge instability qualifies an outer mode mechanism linked with the wall-normal vorticity of the localised streak. Past the initial 50 time units where the analysis depends on the initialisation of the modes, several mechanisms can be identified from the three peaks in the FTLE spectrum. The dominant instability corresponds to an instability transverse to the edge manifold, while the others correspond to the slow variability of the edge trajectory itself: the dynamics of the perturbations mimic the dynamics inherent to the base flow itself, including the streak phenomenon. The local dimension of the tangent space exceeds the value of r=8. Finally, we observe that the non-normal amplification of disturbances increases when the change of the base flow in time becomes slower and vice versa. §.§ Modal structures Beyond global indicators characterising the tangent dynamics, a description of the modal structures in physical space is required. We recall (see Subsection <ref>) that the flow fields visualised correspond to the real part of the vectors u_i^λ defined in eq. <ref>. Two-dimensional visualisations are shown for two different times, namely t=280 and 720. The modes come in complex conjugate pairs for the considered times, therefore we only display here every other mode among the computed ones. The velocity field of the base flow at these two times, selected along the edge trajectory after the initial transient, is shown in Fig. <ref>. It consists of a wiggly finite-length streak flanked with shorter streamwise vortices. At these two times, both snapshots are comparable, the main differences being the longer streamwise extent together (of about 500δ_0^*) with a spanwise narrower structure of extent 40δ_0^* at the later time. 
Taking into account the dynamics of the base flow near these two times enriches the description. Near t=280 the formation of streaks by the lift-up mechanism is almost mature <cit.> and the dynamics relaxes towards quasi-steady motion. By contrast, in the time units following t=720, low and high-speed streak are on the verge of exchanging their spanwise position. The instantaneous eigenmodes for t=280 are first shown in Figs.  <ref>–<ref>. The representation, inspired by the experimental figures of <cit.>, is based on a pseudocolor plot of the streamwise velocity perturbation for the reference trajectory, overlapped with lines indicating 40%-100% of the maximum range of the vorticity normal to the planes at z=-4, y=2.5 and x=325. The planes are selected to intersect relevant regions of the main structure. We describe now the observed flow structures. The present method as well as the underlying modal decomposition are new in fluid mechanics apart from <cit.>. Therefore for pedagogic reasons we chose to display the flow fields of every computed instantaneous eigenmode, omitting the redundant conjugate modes. For t=280, the spatial structure of each of the 8 leading OTD modes superimposes well with the active part of the main structure, which consists of a sinuous streak of finite length. As a consequence the OTD modes inherit this sinuous structure. Importantly, no spatial symmetry has been imposed neither on the base flow nor on the disturbances modes. This differs from the classical study of <cit.> where the base flow has no streamwise dependence. The long-standing question about the symmetries of the leading eigenmodes, namely whether they are symmetric with respect to the plane z=0 (sinuous) or antisymmetric (varicose), becomes irrelevant here. In particular the varicose symmetry, which is consistent with the formation of hairpin vortices, is not characteristic of any of the modes investigated. The classical conclusion of <cit.>, namely that the sinuous instability of streaks is the most unstable mechanism of paramount importance for streak breakdown, remains valid. Further visualisation of the modes at t=280 highlights the shear layers in the flow, visible in the xy plane. The xz-plane shows that most of the activity of the mode is located within the active core of the streak and its upstream tail. The yz-plane confirms the localisation of the mode on the top shear layer. For all modes, energy is located mostly within the active core or upstream of it. This is in line with the former observation that secondary structures shed downstream of the edge state are not key ingredients of the self-sustained cycle <cit.>. Streamwise velocity profiles for the instantaneous eigenmodes are shown in Fig. <ref>. They suggest robust localisation close to the edge of the boundary layer. In all subfigures in Fig. <ref>, the y-location for the largest amplitude of the streaks is displaced towards larger values with increasing x: the head of the streaks characterising the edge trajectory appears tilted upwards. This is again consistent with the description of outer mode instability in <cit.>. The relevance of this region is furthermore consistent with the interpretation in <cit.>, where streak instability proceeds via outer modes localised near the edge of the boundary layer. As for the differences between the different modes u_1^λ,...,u_8^λ, at t=280 they are not very pronounced yet. Only u_1^λ stands out through a less pronounced tail of streamwise vorticity at the upstream edge. 
It was checked that perturbing the edge trajectory at t=280 by u_1^λ, with amplitude ± 10^-4, leads either to a turbulent flow or to relaminarisation. This confirms that this eigendirection is transverse to the edge manifold at the considered time. Most features discussed above are also observed at the later time t=720, just before streak switching takes place. There are however noticeable differences. At t=720, all the instantaneous eigenvalues still have positive real part, with λ_1 strictly larger than the other eigenvalues and λ_8 closest to zero. The leading OTD mode u_1^λ is still similar in shape to u_3^λ and u_5^λ, while u_8^λ clearly displays a different structure. u^λ_1 displays strong activity at the edge of the boundary layer, upstream of the active core, strictly above the corresponding shear layer of the base flow (it is most visible on the streamwise velocity component). More noticeable is the fact that the modal structures are lifted towards the edge of the boundary layer, see e.g. the xy plane of Fig. <ref>(a) for u^λ_1. The vortical structures associated with this mode form a larger angle with the wall than the base flow itself. It was again checked that the eigendirection u^λ_1 is transverse to the edge manifold at the considered time. The structures highlighted in u^λ_8 are of particular interest. They correspond to the region where a new high-speed streak is in the process of being spawned (see the supplementary material in <cit.> for further evidence). The corresponding OTD mode(s) should hence not only be understood as the manifestation of an instability of a simple, instability-free base flow; instead they can be interpreted as precursor(s) of events that will occur anyway along the edge trajectory. The positive FTLEs associated with the corresponding instantaneous eigenmode are a signature of short-term unpredictability; they quantify the temporal volatility of the streak switching phenomenon. Further strengthening the discussion above, Fig. <ref> shows the same snapshots as in Fig. <ref>, now superimposed with contours of λ_2 for the leading OTD mode. It can be seen in both Fig. <ref>(a) and (b) that the instability mode is mostly localised within the edge structure. The localisation within the active core is even clearer in Fig. <ref>(b). Furthermore, Fig. <ref> shows greater localisation on the side where a new high-speed streak is to be generated. Some elements of this analysis could have been anticipated. The OTD framework, in line with the whole concept of Lyapunov analysis, is a generalisation of modal stability analysis to arbitrarily unsteady base flows. Non-normal features can be captured provided an insightful initialisation of the OTD modes, yet these features are not expected to persist over longer time horizons, e.g. those involved in the evaluation of FTLEs. However, the Orr as well as the lift-up mechanism, which dominate the dynamics at early times, are intrinsically non-normal mechanisms of finite duration. In principle, a large number of eigenvectors is needed to capture transient growth accurately. This explains why so many modes possess a similar structure. This trend is aggravated by the fact that for small r, the captured non-normality is an estimate of the non-normality of the whole system. While the description in terms of a few OTD modes may seem inadequate at the earliest times when non-normality dominates, the situation becomes tractable again with small r as soon as the growth of the streaks slows down. 
The corresponding visualisations for t=280 and t=720 are displayed in Fig. <ref>–<ref> and Fig. <ref>–<ref>. At this stage the instantaneous eigenvalue distribution, as well as the FTLE distribution, is more comparable with the usual spectrum of edge state solutions, see Fig. <ref>: one dominant unstable eigenvalue marking a direction locally transversal to the edge manifold, several weakly positive eigenvalues expressing the chaotic nature of the edge state fluctuations, and (not appreciable here because of the small value of r) a large set of stable eigenvalues expressing the attraction of the edge state within the edge manifold. One clear feature from physical space visualisations, regardless of the quantity plotted, is how the localised support of all OTD modes, except here for u_8^λ, superimposes exactly onto the location of the edge state. This suggests that the present modes, if they contribute to an instability of the edge state, would not make the main coherent edge state spread spatially, at least at the level of the linearised dynamics. As far as the unsteady dynamics restricted to the edge manifold is concerned, this suggests that sideways shifts are excluded near t≈ 280 whereas they are likely to occur at t≈ 720. Such sideways shifts have been reported in most edge states of boundary layer flows <cit.>, ASBL <cit.> as well as channel flow <cit.>. As in the present case, the shift phases are usually short and alternate with long shift-free phases. Another robust feature of all localised edge states concerns the transition from the edge state to the turbulent state: the transition process consists of two consecutive steps, first a local intensification of the disturbances within the active core, followed by spatial spreading <cit.>. The consecutive nature of these two events would suggest that the spreading phase is nonlinear, while the intensification phase can be understood partially from the linear instability of the edge state. The fact that the spreading is reflected in the spatial structure of at least one instantaneous eigenmode u_8^λ at the later time t=720 suggests, however, that spanwise spreading can be partially predicted and described at this time by linear mechanisms. These new results suggest further study. § CONCLUSION AND OUTLOOKS We have used the recently developed framework of the Optimally Time-Dependent (OTD) modes to study the linearised dynamics about a segment of a well-defined unsteady base flow. The methodology was applied to a complex hydrodynamic case at the limit of our computational capabilities, and yielded results in line with the expected physics. It even performed beyond expectations by revealing new physical phenomena. The physical system under investigation is the Blasius boundary layer flow. The original trajectory under scrutiny belongs by construction to the edge manifold delimiting bypass from natural transition. However, the study is restricted to timespans short enough that Tollmien-Schlichting waves do not have time to affect the transition process. This unsteady trajectory is re-interpreted as an unsteady base flow, whose linear (modal) stability analysis is expected to contain information about the stability of localised streaks, as observed in instances of bypass transition. This choice of base flow, due to its three-dimensionality and its unsteady dynamics, represents an excellent test case for a new stability approach. 
Limiting ourselves, as a computational compromise, to a projection basis consisting of only 8 OTD modes, we have computed the instantaneous eigenvalues along the unsteady trajectory. The streaky base flow displays a couple of unstable complex conjugate eigenvalues which dominate the finite-time stability of the trajectory. The remaining eigenvalues investigated have a positive real part as well, yet with a smaller magnitude. This is consistent with the expectations for chaotic dynamics within the edge manifold, although the notion of chaos is usually kept for the infinite-time frameworks. Numerical evidence suggests that the leading instability mechanism(s) in this study correspond to an outer mode as described by <cit.>, even if the corresponding perturbations lack the long wavelength structure characteristic of streak eigenmodes reported so far <cit.>. We have also analysed the Finite-Time Lyapunov exponents (FTLEs) along the trajectory by considering several time horizons. The results confirm the presence of one fast unstable direction versus many slower state space directions. Moreover, we could confirm that the underlying invariant set has a finite-time fractal dimension strictly larger than 8. The leading modal structures obtained from the OTD modes are not trivial to describe, mainly due to the lack of spatial symmetry of the base flow. The main property exploited in this study regards the spatial localisation of the modes. Most of the modes computed for r=8 display, in an instantaneous fashion, the same localisation properties as the original base flow. The most unstable perturbations display a positive instantaneous growth rate, and their vortical activity is classically located in the region adjacent to the streaks, where the total shear is highest <cit.>. In particular, the perturbations in the xy-plane are tilted from the wall by an angle larger than the base flow, particularly at larger times. Some of the modal perturbations extracted also display vortical fluctuations upstream of the base flow, while one identified mode even displays localisation on the spanwise side of the base flow (at a later time only). It is suggested that the latter eigenmode plays an active role as precursor in streak-switching events, the same events that lead the localised edge state to propagate sideways. Downstream fluctuations are however absent from the leading instantaneous eigenmodes, suggesting that they are not fundamental to the temporal sustainment of the edge state <cit.>. Although the method originally targets a modal description of the relevant finite-time instabilities, it can also capture non-normal amplification mechanisms <cit.>. In practice the exact amount of non-normality predicted, as well as the associated energy amplification, are constantly underestimated for finite r compared to the full-dimensional problem, mainly because a larger number of instantaneous eigenmodes would be required to faithfully capture non-normal effects. Nevertheless these results confirm that non-normal effects also play a role in the streak breakdown phenomenon <cit.>. This study highlights the relatively large sensitivity, on short times, of the instantaneous eigenvalues to the initialisation of the OTD modes. It is expected from theoretical arguments <cit.> that FTLEs can be safely computed from the eigenvalues only past a transient time, which is a priori unknown and case-dependent. 
A detailed comparison between two different arbitrary initialisations suggests that, in the present case, only the early times prior to t ≈ 50 are highly dependent on the choice made for t=0 (cf. figure <ref>). Although this transient can be considered as short relative to the complete transition process, it still represents a clear limitation of the method as far as early times are concerned. At times larger than 50, the instantaneous eigenvalues λ_1, … evolve qualitatively similarly with time, although instantaneous values may differ between the two simulations. The corresponding trend is also valid for the numerical abscissa σ. If the dynamics belonged to an attractor, the time-averaged FTLEs would converge to the LEs, known to be independent of the initialisation <cit.>. Although the present case does not revolve around a genuine attractor in state space, the results in figure <ref> clearly suggest that the late-time dynamics can be considered as temporally converged. Note that for r large enough the discrepancy between different initialisations is expected to vanish even at finite times, for instantaneous eigenvalues as well as for the numerical abscissa. However, additional modes (and thus larger r) also imply a significant increase in computational time. On the technical level, several points require further discussion and study: * (i) the size of the OTD subspace cannot be determined a priori <cit.>. This is in particular relevant to capture the non-normality along the reference trajectory, where a large number of modes is required. Note that in cases with extensive systems, or “weak turbulence”, such as Kuramoto-Sivashinsky <cit.>, just a few modes are required to entirely describe the most unstable subspace, whereas in pulsating Poiseuille flow more than 70 modes are required to fully describe the non-normal behaviour <cit.>. However, it has been shown that a much lower number of modes, r≈ 6, can already bring relevant physical insight <cit.>. * (ii) eigenvalue crossing can make the OTD basis readapt multiple times. * (iii) the modal structures arising from the OTD framework are not associated with a single mode in the sense of classical linear stability analysis. The projected OTD modes contain information about several different mechanisms taking place at the same time, in particular for very complex reference trajectories. To further clarify the potential of the OTD modes in the present complex flow case, we gather our main results in the following list: * first demonstration that the stability analysis of unsteady trajectories is technically possible for a complex large-scale system, without resorting to average Lyapunov exponents or Covariant Lyapunov vectors, even in the case where those might not be available. * evidence that one unstable mode dominates over all the others at all times, a feature not at all obvious for an aperiodic flow. * quantification of finite-time Lyapunov exponents along the edge trajectory, including the early times. * quantification of the growth of state space volumes as time progresses, showing that at later times the required number of modes is reduced compared to earlier times. * first quantitative evidence for non-normal effects in an aperiodic flow. * occurrence of spanwise shifts detected in the higher-order modes at late times. * evidence that the sinuous symmetry prevails over the streamwise-independent structures throughout the study. 
In particular varicose perturbations, known as an alternative way to break streamwise independence and popularised by hairpin vortex studies, appear absent from our study. * evidence that the new modes found in the present analysis can also be described as outer modes. Looking ahead, although for intermediate times the OTD modes capture the non-normal features of the underlying linear dynamics, for large times the proposed methodology (edge tracking together with linear stability analysis using OTD modes) is essentially a generalisation of modal stability analysis to unsteady cases. Persistent consequences of the non-normality include for instance the finite-time instabilities likely to occur during the Orr mechanism (for t<60) and the lift-up at later times. Both require further extensions of this methodology for a quantitative prediction. The optimal framework proposed by <cit.> is intrinsically non-modal, and it is well suited to the identification of the disturbance most amplified in finite time over an unsteady base flow. The corresponding adjoint-looping algorithm was used successfully by <cit.> in channel flow, except that the reference trajectory chosen was not an edge trajectory but a linear transient. It would be interesting to apply the same methodology on an unsteady edge trajectory and compare the results with the present ones, to see whether one of the methods can predict the finite-time growth of coherent structures not captured by the other technique. Moreover, the possibility of combining these several techniques together is interesting for future developments in stability analysis. M.B. and Y.D. would like to thank Hessam Babaee and Simon Kern for discussions about the OTD modes. Financial support by the Swedish Research Council (VR) grant no. 2016-03541 is gratefully acknowledged. The computations were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) partially funded by the Swedish Research Council through grant agreement no. 2018-05973. The authors report no conflict of interest. § FURTHER COMPUTATIONAL DETAILS This appendix provides further details about the computation of the OTD modes using a pseudospectral approach, and in particular using the SIMSON code <cit.>. Consider the equations for the evolution of the OTD modes about a trajectory evolved with the Navier–Stokes equations: 𝐮̇_i = 𝐋_NS (𝐮_i)-⟨𝐋_NS (𝐮_i), 𝐮_i ⟩𝐮_i-∑_j=1^i-1 [⟨𝐋_NS (𝐮_i), 𝐮_j⟩ + ⟨𝐋_NS (𝐮_j), 𝐮_i⟩ ]𝐮_j, where 𝐋_NS denotes the linearised Navier–Stokes operator. Bold letters denote quantities in physical space, which are discretised into ℝ^n degrees of freedom. The linearised Navier–Stokes functional without external forcing applied to a field 𝐮_i reads, 𝐋_NS (𝐮_i) = - (U_b ·∇ ) u_i - (u_i ·∇) U_b - ∇ p_i + 1/∇^2 u_i. The additional constraint to the linearised Navier–Stokes equations is introduced in SIMSON in the form of an explicit forcing at each time step. Boundary conditions are implemented into the linear part of the solver, while the nonlinear terms are evaluated explicitly. In the present case, evaluating explicitly the inner products involving the linearised Navier–Stokes operator can produce erroneous results if the boundary conditions on the additional forcing term are not applied properly. The term 𝐋_NS(𝐮_i) needs to contain the boundary conditions corresponding to the linearised operator to provide a correct forcing term and L_r in the computations. 
In particular, it is necessary to apply Neumann boundary conditions on the free stream and Dirichlet boundary conditions at the wall u_i(y=0)=∂ u_i/∂ y(y=L_y) = 0 when recovering the wall-normal velocity from the 4th order equation arising from the velocity-vorticity formulation <cit.>. This differs from the main body of the implementation in SIMSON since the correct boundary conditions need to be applied in the explicit term involving 𝐋_NS(𝐮_i) as well as the implicit part of the solver. The OTD modes converge exponentially fast to the most unstable directions of the Cauchy-Green tensor <cit.>, and after a long time only depend on the point of the trajectory where they are computed <cit.>. However, the OTD modes depend on their initialization <cit.>. It has been observed that there is not an universal time for which the OTD subspace is converged to the most unstable directions. Nevertheless, relevant physical features may be observed from early times <cit.>. § INITIAL CONDITIONS FOR THE OTD MODES An additional point to consider is that, if one of the directions not part of the basis becomes unstable enough, the basis will need to re-adapt. This is due to eq. (<ref>) being evolved continuously, whereas the introduction of a different vector in the most unstable subspace occurs discontinuously <cit.>. The choice of the initial condition for the OTD modes plays therefore a crucial role for the OTD framework. Since our reference trajectory consists of several finite-time events of interest, our goal is to choose initial conditions which adapt as quickly as possible to the most unstable dynamics. We therefore chose initial conditions which are physically relevant to excite instability mechanisms on the edge trajectory. The initial condition for the first mode is exactly the perturbation to the Blasius boundary layer associated with the edge state. This represents infinitesimal perturbations of the same shape as the edge trajectory. The initial conditions for modes 2-8 correspond to pairs of counter-rotating vortices with different spatial extensions. The counter-rotating vortices are also rotated about the y axis to remove any symmetric constraint. This set of initial conditions is not orthogonal by construction and therefore a Gram-Schmidt algorithm is performed before initialising the OTD computations. The results reported in the body of the paper correspond to these initial conditions. To check the robustness of the results, alternative sets of initial conditions have been tested using r=4: (i) The 4 leading modes from the results in the main body of the paper and (ii) random noise. Using r=4 modes only appeared sufficient to illustrate the main aspects of the subsequent checks. A comparison for the wall-normal component of the leading projected OTD mode at t=120, obtained using the two different sets of initial conditions can be seen in Fig. <ref>. The figure shows an agreement about the general physical features of the perturbation, i.e. the high-speed streak flanked by two low-speed streaks is present in both cases. However, no exact match is observed. The convergence to an unique set of OTD modes is expected to be exponentially fast <cit.>, but the explicit times are strongly case dependent. The most unstable instantaneous eigenvalues are shown in Fig. <ref>. It can be observed that in the case of the random noise, the initial peak is lost. 
It is reasonable to assume that the unsteady base flow changes too fast during the initial times while the OTD subspace has not had enough time to adapt. On the other hand, the second peak at t∼ 80 is well captured with both sets of initial conditions. We should consider random noise as the worst choice of initial conditions, since it is entirely agnostic to the underlying reference trajectory. The results presented above further underline the importance of the choice of initial conditions. They indicate that, although the OTD approach is robust at large enough times, it remains dependent on the initialisation for times earlier than t ≈ 100.
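To make the initialisation procedure concrete, the following Python sketch shows how a set of raw, generally non-orthogonal initial disturbance fields could be orthonormalised before starting the OTD integration; the random fields and the Euclidean inner product are placeholders for the actual counter-rotating vortex pairs and the discrete L2 product of the solver.

import numpy as np

def orthonormalise(fields):
    # Gram-Schmidt step (implemented via a QR factorisation) applied to the
    # columns of an n x r array of flattened initial disturbance fields
    Q, R = np.linalg.qr(fields)
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0
    return Q * signs          # keep the orientation of the original fields

# hypothetical example: r = 8 random initial fields of dimension n
rng = np.random.default_rng(1)
n, r = 10000, 8
U0 = orthonormalise(rng.standard_normal((n, r)))
assert np.allclose(U0.T @ U0, np.eye(r), atol=1e-10)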
http://arxiv.org/abs/2306.07034v1
20230612113144
Enhanced Floating Isogeometric Analysis
[ "Helge C. Hille", "Siddhant Kumar", "Laura De Lorenzis" ]
cs.CE
[ "cs.CE" ]
http://arxiv.org/abs/2306.04467v1
20230607143928
High-order Compact Gas-kinetic Scheme for Two-layer Shallow Water Equations on Unstructured Mesh
[ "Fengxiang Zhao", "Jianping Gan", "Kun Xu" ]
math.NA
[ "math.NA", "cs.NA" ]
Fengxiang Zhao ([email protected])^1, Jianping Gan ([email protected])^1,2, Kun Xu ([email protected], corresponding author)^1,2,3. ^1 Department of Mathematics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. ^2 Center for Ocean Research in Hong Kong and Macau (CORE), Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. ^3 Shenzhen Research Institute, Hong Kong University of Science and Technology, Shenzhen, China. For the two-layer shallow water equations, a high-order compact gas-kinetic scheme (GKS) on triangular mesh is proposed. The two-layer shallow water equations have complex source terms in comparison with the single-layer equations. The main focus of this study is to construct a time-accurate evolution solution at a cell interface and to design a well-balanced scheme. The evolution model at a cell interface provides not only the numerical fluxes, but also the flow variables. The time-dependent flow variables at the closed cell interfaces can be used to update the cell-averaged gradients for the discretization of the source terms inside each control volume in the development of the well-balanced scheme. Based on the cell-averaged flow variables and their gradients, high-order initial data reconstruction can be achieved with compact stencils. The compact high-order GKS has advantages in simulating flow evolution in complex domains covered by unstructured mesh. Many test cases are used to validate the accuracy and robustness of the scheme for the two-layer shallow water equations. Two-layer shallow water equations; Gas-kinetic scheme; High-order compact reconstruction; Unstructured mesh High-order Compact Gas-kinetic Scheme for Two-layer Shallow Water Equations on Unstructured Mesh § INTRODUCTION The shallow water equations (SWE) are useful in studying both large-scale ocean circulations and small-scale coastal and channel flows, such as tsunamis, pollutant transport, tidal waves, and dam break problems. However, real flows often exhibit stratification, which cannot be captured accurately by a single-layer SWE. For instance, the injection of freshwater into seawater creates plumes that are important for the coastal marine environment, with salinity stratification being a possible feature. In addition, the flow velocity in coastal areas may vary significantly or exhibit stratification along the depth. To model stratified water flow, the multi-layer SWE are used, describing a superposition of coupled layers with force interactions between them. This paper will focus on the development of a high-order compact scheme for the two-layer SWE (TLSWE), which is the basis for the multi-layer SWE. In particular, the numerical scheme developed in this study for TLSWE can be naturally extended to multi-layer SWE with the inclusion of the interaction between layers as the source term and their dynamic effect in the calculation of numerical fluxes. Numerous numerical schemes have been developed for solving SWE with second-order accuracy <cit.>. High-order numerical methods have gained popularity in recent years due to their advantages in accuracy and computational efficiency <cit.>. As a result, several high-order numerical schemes have been proposed for solving SWE <cit.>. 
However, there are few works on numerical methods for the two-layer SWE. Most of them are still based on the 1-D model <cit.> or the 2-D model on structured mesh <cit.>. Second-order schemes for 2D TLSWE on unstructured mesh have been developed <cit.>, with great difficulty due to the loss of hyperbolicity under certain conditions and the stiff coupling between layers through products of flow variables and their derivatives <cit.>. Unstructured mesh is highly adaptable to complex geometries, making it a popular choice for numerical simulations of real flows <cit.>. This is particularly relevant for coastal hydrodynamics simulation, given the irregular and multiscale nature of coastal boundary geometries. However, constructing a high-order finite volume scheme on unstructured mesh presents a challenge due to the use of large stencils in the reconstruction <cit.>. Most high-order schemes for single-layer SWE on unstructured mesh are based on the discontinuous Galerkin (DG) formulation <cit.>, and there are few high-order finite volume schemes for solving TLSWE. The DG method updates the inner degrees of freedom (DOFs) from its weak formulation, and is widely used to solve compressible gas dynamics equations <cit.> due to its compact spatial discretization. However, for flows with discontinuities, additional numerical treatments, such as identifying troubled cells and limiting procedures, must be designed within the DG framework <cit.>. In this study, a high-order compact gas-kinetic scheme (GKS) will be constructed. The finite volume GKS updates both the cell-averaged flow variables and their gradients from the moments of the time-accurate gas distribution function at a cell interface, so that a compact initial reconstruction can be obtained. At the same time, the multistage and multiderivative method will be adopted for achieving high-order temporal accuracy with fewer stages <cit.>. The structure of this paper is as follows. Section 2 introduces the GKS for TLSWE. Section 3 discusses the high-order compact reconstruction on unstructured mesh and temporal discretization. In Section 4, the compact GKS is validated by studying shallow water flow in various cases. Finally, Section 5 is the conclusion. § TWO-LAYER SHALLOW WATER EQUATIONS AND GAS-KINETIC EVOLUTION MODEL This section will present the gas-kinetic evolution model for solving TLSWE. The corresponding GKS for TLSWE will be constructed based on the extension of the scheme for the single-layer SWE <cit.>, where the interaction between layers will be explicitly included in the scheme. §.§ Two-layer shallow water equations In <cit.>, three equivalent forms of TLSWE are presented. In this study, the conservative form of TLSWE will be adopted and the interaction between layers is included in the source term, ∂W/∂ t+ ∂F^x(W)/∂ x+ ∂F^y(W)/∂ y=S(W), where W = ( [ h_2; h_2U_2; h_2V_2; h_1; h_1U_1; h_1V_1; ]), F^x = ( [ h_2U_2; h_2U_2^2+1/2Gh^2_2; h_2U_2V_2; h_1 U_1; h_1U_1^2+1/2Gh^2_1; h_1U_1V_1; ]), F^y = ( [ h_2V_2; h_2U_2V_2; h_2V_2^2+1/2Gh^2_2; h_1V_1; h_1U_1V_1; h_1V_1^2+1/2Gh^2_1; ]), and S = ( [ 0; -Gh_2B_x-Gh_2h_1,x; -Gh_2B_y-Gh_2h_1,y; 0; -Gh_1B_x-χ Gh_1h_2,x; -Gh_1B_y-χ Gh_1h_2,y; ]). Here W denotes the flow variables, and F^x and F^y are the corresponding fluxes in the x and y directions. B is the bottom topography, G is the gravitational acceleration, and χ is the density ratio defined as χ=ρ_2/ρ_1, where ρ_1 and ρ_2 are the densities of the first and second fluid layer. 
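To make the above notation concrete, the following Python sketch (an illustration only, assuming the layer heights and the reconstructed gradients of B, h_1 and h_2 are available; it is not the gas-kinetic flux evaluation used by the scheme) evaluates the fluxes and the source term of Eq. (<ref>) from the conservative variables.

import numpy as np

G = 9.81   # gravitational acceleration

def tlswe_fluxes(W):
    # W = (h2, h2*U2, h2*V2, h1, h1*U1, h1*V1), layer 2 on top of layer 1
    h2, h2U2, h2V2, h1, h1U1, h1V1 = W
    U2, V2 = h2U2 / h2, h2V2 / h2
    U1, V1 = h1U1 / h1, h1V1 / h1
    Fx = np.array([h2U2, h2 * U2**2 + 0.5 * G * h2**2, h2 * U2 * V2,
                   h1U1, h1 * U1**2 + 0.5 * G * h1**2, h1 * U1 * V1])
    Fy = np.array([h2V2, h2 * U2 * V2, h2 * V2**2 + 0.5 * G * h2**2,
                   h1V1, h1 * U1 * V1, h1 * V1**2 + 0.5 * G * h1**2])
    return Fx, Fy

def tlswe_source(W, Bx, By, h1x, h1y, h2x, h2y, chi):
    # bottom-topography and interlayer coupling terms of S(W)
    h2, _, _, h1, _, _ = W
    return np.array([0.0,
                     -G * h2 * Bx - G * h2 * h1x,
                     -G * h2 * By - G * h2 * h1y,
                     0.0,
                     -G * h1 * Bx - chi * G * h1 * h2x,
                     -G * h1 * By - chi * G * h1 * h2y])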
The flow variables of the lower and upper layers are denoted as 𝐖_1 and 𝐖_2, respectively. The fluxes of the two layers are (𝐅_1^x,𝐅_1^y) and (𝐅_2^x,𝐅_2^y) with the corresponding source terms 𝐒_1 and 𝐒_2. Fig.<ref> presents a schematic of the two-layer shallow water flow. By adopting the form of the TLSWE in Eq. (<ref>), the equations for each layer are similar to the single-layer SWE except for the additional source term related to the interaction between layers. The source term makes the TLSWE conditionally hyperbolic <cit.>, which may cause difficulty in the construction of numerical schemes based on Riemann solvers and flux splitting methods. In addition, the source terms related to the interaction between layers are nonlinear, which poses challenges for their discretization in high-order schemes. In the gas-kinetic scheme, the dynamics of the TLSWE is recovered by the time evolution of the gas distribution function, and the effect of the source term is incorporated into the particle transport process. The numerical fluxes are directly evaluated from the time-dependent gas distribution function. Since the governing equations of the two layers in the TLSWE have similar forms, a general formulation for one of the layers is presented in the following.
§.§ Gas-kinetic evolution model The GKS is based on the time evolution solution of the gas distribution function for the flux evaluation <cit.>. The gas-kinetic BGK model can be written as <cit.> f_t +u·∇_x f +∇Φ·∇_u f=g-f/τ, where f is the distribution function f(x,t,u), u=(u,v) is the particle velocity, and g is the equilibrium state approached by f. τ is the relaxation time. ∇Φ is the acceleration of a particle due to the external force and is related to the source term in the TLSWE, such as the force from the bottom topography and the friction. The equilibrium state g is a Maxwellian distribution function <cit.>, g=h(λ/π)e^-λ(𝐮-𝐔)^2, where λ is defined by λ=1/(Gh). Due to conservation in the relaxation process from f to g, f and g satisfy the compatibility condition, ∫g-f/τψdΞ=0, where ψ=(ψ_1,ψ_2,ψ_3)^T=(1,u,v)^T and dΞ=dudv. Based on the moments of the gas distribution function, the flow variables and their fluxes can be obtained. Due to the similar equations for the two layers, the schemes for layer 1 and layer 2 can be formulated in the same way. In the general scheme, the macroscopic flow variables and the fluxes are obtained from the distribution function f as W =∫ f ψdΞ, and (F^x,F^y)^T =∫ f ψ𝐮dΞ. The source term S becomes S =-∫∇Φ·∇_u f ψdΞ, and ∇Φ is determined by ∇Φ=𝐒/h, where 𝐒 takes 𝐒_1=h_1(0,-GB_x-Gχ h_2,x,-GB_y-Gχ h_2,y)^T and 𝐒_2=h_2(0,-GB_x-Gh_1,x,-GB_y-Gh_1,y)^T for layer 1 and layer 2, respectively. The formal solution of the BGK model in Eq. (<ref>) with the external forcing term is f(x,t,u)=1/τ∫_0^t g(x^',t',u^')e^-(t-t')/τdt' +e^-t/τf_0(x_0,u_0), where x is the numerical quadrature point on the cell interface for the flux evaluation, and x can be set as (0,0) for simplicity in a local coordinate system whose x- and y-directions are the normal and tangential directions of the interface. The formal solution describes an evolution process for the distribution function. The trajectory of a fluid particle is given by x=x^'+u^'(t-t^')+1/2∇Φ(t-t^')^2, and the velocity of the particle is u=u^'+∇Φ(t-t^'). The acceleration has a second-order effect (∼ t^2) on the particle trajectory, but a first-order contribution (∼ t) to the particle velocity. A second-order in time and well-balanced explicit evolution solution f has been obtained for the SWE <cit.>. 
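The moment relations above can be checked numerically. The short sketch below is an illustration only; the quadrature resolution and the integration span are arbitrary choices of ours. It integrates the shallow-water Maxwellian g on a velocity grid and recovers the water height h, the mass flux hU, and the momentum flux hU^2 + Gh^2/2.

```python
import numpy as np

def maxwellian_moments(h, U, V, G=9.81, n=201, span=6.0):
    """Numerical moments of g = h (lam/pi) exp(-lam |u - U|^2), lam = 1/(G h)."""
    lam = 1.0 / (G * h)
    s = span / np.sqrt(lam)                      # integration half-width in velocity space
    u = np.linspace(U - s, U + s, n)
    v = np.linspace(V - s, V + s, n)
    uu, vv = np.meshgrid(u, v, indexing="ij")
    g = h * (lam / np.pi) * np.exp(-lam * ((uu - U)**2 + (vv - V)**2))
    du, dv = u[1] - u[0], v[1] - v[0]
    moment = lambda q: np.sum(q * g) * du * dv   # simple Riemann-sum quadrature
    return moment(np.ones_like(uu)), moment(uu), moment(uu**2)

# Recovers approximately (h, h*U, h*U^2 + G*h^2/2) = (1.0, 0.5, 5.155)
print(maxwellian_moments(1.0, 0.5, 0.0))
```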
In this paper, the same evolution solution of f is used for the individual layer. The solution of f is f(x,t,u) =g(𝐱,0,𝐮)[ C_1+ C_2 ( 𝐚^l ·𝐮H(u) +𝐚^r ·𝐮(1-H(u)) ) +C_3A] +C_2g(𝐱,0,𝐮) [-2 α_k,mλ( ∇Φ^l H(u)+∇Φ^r(1-H(u)) ) · (𝐮-𝐔) ] +C_4[g^l(𝐱,0,𝐮)H(u)+g^r(𝐱,0,𝐮)(1-H(u))] +C_5g^l(𝐱,0,𝐮)[𝐚^l·𝐮 -2 α_k,mλ^l ∇Φ^l ·(𝐮-𝐔^l) ]H(u) +C_5g^r(𝐱,0,𝐮)[𝐚^r·𝐮 -2 α_k,mλ^r ∇Φ^r ·(𝐮-𝐔^r) ](1-H(u)), where α_k,m (k=1,2, m=1,2,3) are constants for a well-balanced scheme, (α_1,1,α_1,2,α_1,3)=(1,3/4,1/4) and α_2,m=1, m and k is related to taking moment, and the details are given in the Appendix of <cit.>. The coefficients C_i (i=1,2,⋯,5) are C_1 =1-e^-t/τ,   C_2=-τ(1-e^-t/τ)+te^-t/τ,   C_3=-τ(1-e^-t/τ)+t, C_4 =e^-t/τ,   C_5=-t e^-t/τ. The fluxes at the cell interface are evaluated by taking moments of the above gas distribution function and the total transport of mass and momentum within a time step can be further integrated in time. More details in the formulation can be found in <cit.>. §.§ Acceleration force modeling at the interface between two water layers The interaction between layers is modeled as the acceleration term in the kinetic equation. The spatial derivatives of the water column height determine the acceleration, where the values of the height derivatives can be obtained by the compact reconstruction at the cell interface. However, the possible discontinuity of the interface can trigger a sudden “pull” or “push” between water layers. For cases with discontinuities, the spatial derivatives of the water height from the reconstruction will not be used to calculate the force, and the “step effect” due to the discontinuity needs to be considered. The acceleration from a discontinuous interface will be modeled. Without loss of generality, for the momentum equation of layer 2 as an example, the corresponding acceleration is given by ∇Φ_2=-GB_x -χ Gh_1,x. For the continuous bottom topography B and water height h_1, the acceleration can be directly evaluated based on the functions of B_x and the reconstructed h_1,x. However, when the water height is discontinuous at a cell interface, such as the reconstructed dash lines in Fig. <ref>, the corresponding forcing term between layers will be modeled from a re-constructed continuous profile at the cell interface. The construction of this continuous profile will take into account the forcing interaction between neighboring cells. Firstly, let's construct the continuous line at the cell interface. In each cell, the continuous line connects the respective unique values of the water height on the cell interfaces x_j±1/2, which are denoted by h_1(x_j±1/2) as the black dots in Fig. <ref> with the values given later. The continuous line in the cell is obtained as P_j^1(x)=1/2(h_1(x_j+1/2)+h_1(x_j-1/2))+h_1(x_j+1/2)-h_1(x_j-1/2)/x_j+1/2-x_j-1/2(x-x_j), where P_j^1(x) is a linear interpolation based on the values at the cell interfaces of the cell. h_1(x_j+1/2) are modeled based on the discontinuous left and right states h_1(x_j+1/2)=ξ h^l_1(x_j+1/2) +(1-ξ)h^r_1(x_j+1/2), where h^l_1(x_j+1/2) and h^r_1(x_j+1/2) are the reconstructed values at the cell interface. ξ is a coefficient for the convex combination, and it is defined as ξ=1/2erfc( (U^l_1(x_j+1/2)+U^r_1(x_j+1/2))/2 ), where the function erfc(⋯) is the complementary error function, and U^l,r_1(x_j+1/2) are the left and right values of the velocity at the interface. erfc(⋯) makes a smooth transition from 2 to 0 when the independent variable covers (-∞,+∞) with a value erfc(0)=1. 
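A minimal sketch of the single-valued interface height and the in-cell linear profile described above is given below; it is illustrative only, and the function and variable names are ours. The standard-library erfc is used, so ξ runs from 1 to 0 as the mean interface velocity increases through zero, and the two face values of a cell define the slope of the continuous profile P_j^1(x).

```python
from math import erfc

def interface_height(h_left, h_right, U_left, U_right):
    """Single-valued water height at a cell interface: a convex combination
    of the discontinuous left/right reconstructed values."""
    xi = 0.5 * erfc(0.5 * (U_left + U_right))   # xi in (0, 1); xi = 1/2 when the mean velocity is zero
    return xi * h_left + (1.0 - xi) * h_right

def continuous_profile_slope(h_face_minus, h_face_plus, dx):
    """Slope of the linear in-cell profile P_j^1(x) connecting the two
    single-valued face heights of a cell of size dx."""
    return (h_face_plus - h_face_minus) / dx
```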
The above linear distribution in the cell has dynamically upwind-biased slope. In the smooth case, the updated derivative of the water height can be used in the evaluation of acceleration inside each cell. In order cope with both discontinuous and smooth cases, the final derivative of the water height is determined by the following nonlinear convex combination method h^l_1,x(x_j+1/2) =w_j+1/2 h^l_1,x(x_j+1/2) +(1-w_j+1/2)P^1_j,x(x_j+1/2), h^r_1,x(x_j+1/2) =w_j+1/2 h^r_1,x(x_j+1/2) +(1-w_j+1/2)P^1_j+1,x(x_j+1/2), where w_j+1/2 is a nonlinear weighting function to identify the smoothness of the solution. w_j+1/2 tends to 1 in the smooth region and to 0 in the discontinuous region. The value of w_j+1/2 is the same nonlinear weight as that in the high-order time stepping reconstruction scheme of <cit.>. In two dimensions, similar modeling of the derivative of the water height can be done. Different from the one-dimensional one, the modeled continuous line in Fig. <ref> is extended to a 2-D continuous plane. A smooth linear interpolation in the cell is determined by the following constraints. P^1(x^c_k,y^c_k)=h_1(x^c_k,y^c_k),  k=1,2,3, where (x^c_k,y^c_k) is the center of the cell interface, h_1(x^c_k,y^c_k) can be obtained by taking the arithmetic average of the values h_1(x_m,y_m), where m=1,2, on the Gaussian quadrature points of the corresponding cell side. § COMPACT GKS BASED ON HIGH-ORDER COMPACT RECONSTRUCTION In this section, the compact GKS for the TLSWE will be constructed, where the high-order compact reconstruction to obtain the initial values of flow distributions is implemented and the two-stage fourth-order (S2O4) temporal discretization is used. Since the two layers in the shallow water equations can be numerically treated in the same way, the evolutions for 𝐖_1 and 𝐖_2 will be presented by the discretization of 𝐖 below. §.§ Finite volume discretization Taking moments ψ on Eq. (<ref>), the flow variables in a cell Ω_j are updated by ∂W_j/∂ t=-1/|Ω_j|∫_∂Ω_jF·ndl +1/|Ω_j|∫_Ω_jSdΩ_j, where W_j is the cell-averaged flow variable, F=(F^x,F^y) is the time-dependent flux at cell interface, which can be obtained from the moments of the gas distribution function in Eq. (<ref>). The W_j is defined as W_j ≡1/| Ω_j |∫_Ω_jW(x) dΩ. The line integral of the flux in Eq. (<ref>) can be discretized by a q-point Gaussian quadrature formula, -1/|Ω_j|∫_∂Ω_jF·nd l = -1/|Ω_j|∑_l=1^l_0( |Γ_l| ∑ _k=1^q ω_k F(x_k)·n_l ) ≡ℒ^F_j(W), where |Γ_l| is the side length of the cell, l_0 is the total number of cell sides, such as l_0=3 for a triangular mesh, n_l is the unit outer normal vector, and q and ω_k are the total number of integration points and weights of the Gaussian integration formula. In order to evaluate the above numerical flux, the initial data W(x_k) is reconstructed using the compact spatial stencil, which are presented in Section 3.3. The cell-averaged S becomes 1/|Ω_j|∫_Ω_jSdΩ_j ≡ℒ^S_j(W). §.§ Discretization for source term The source term in the momentum equations includes two parts, the first one depends on the bottom topography, and the second one is related to the variation of the interface between layers and the water height of the up layer. The first part of the source term depending on the bottom topography is defined as 1/|Ω_j|∬_Ω_jSdΩ_j = h_j(0,-GB_j,x,-GB_j,y)^T ≡ℒ^S_1_j(W), where h_j is the cell average of h in Ω_j. High-order spatial and temporal discretizations of the first part can be implemented directly, as in the single-layer SWE in <cit.>. 
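The boundary-flux operator ℒ^F_j above can be sketched for a single triangle as follows. This is a simplified illustration rather than the production code: the vertices are assumed to be ordered counter-clockwise, flux_normal is a user-supplied function returning F(x)·n for the six conservative variables, and two Gaussian points per edge are used, consistent with q=2.

```python
import numpy as np

def boundary_flux_operator(vertices, flux_normal):
    """Discrete form of -(1/|Omega_j|) sum_l |Gamma_l| sum_k w_k F(x_k).n_l
    on one triangle, using two-point Gauss quadrature on each edge."""
    v = np.asarray(vertices, dtype=float)            # shape (3, 2), counter-clockwise
    area = 0.5 * abs((v[1, 0] - v[0, 0]) * (v[2, 1] - v[0, 1])
                     - (v[2, 0] - v[0, 0]) * (v[1, 1] - v[0, 1]))
    gauss_xi = (-1.0 / np.sqrt(3.0), 1.0 / np.sqrt(3.0))  # Gauss points on [-1, 1]
    residual = np.zeros(6)
    for a in range(3):
        x0, x1 = v[a], v[(a + 1) % 3]
        edge = x1 - x0
        length = np.hypot(edge[0], edge[1])
        n_out = np.array([edge[1], -edge[0]]) / length     # outward unit normal of the edge
        for xi in gauss_xi:
            x_k = 0.5 * (x0 + x1) + 0.5 * xi * edge        # quadrature point on the edge
            residual -= 0.5 * length * flux_normal(x_k, n_out)
    return residual / area
```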
The second part of ℒ^S_j(W) is related to the variation of the water height. Taking the source term in the equation of h_1U_1 as an example, the spatial discretization becomes ℒ^S_2_j(W) ≡1/|Ω_j|∫_Ω_j -χ Gh_1h_2,xdx dy =-χ G 1/|Ω_j|∫_Ω_j h_1 dx dy ·1/|Ω_j|∫_Ω_j h_2,xdx dy +O(Δ X^2) =-χ G 1/|Ω_j|∫_Ω_j h_1 dx dy ·1/|Ω_j|∫_∂Ω_j h_2n_x dΓ +O(Δ X^2) =-χ G ∑_k=1^3∑_l=1^2w_k,lh_1(𝐱_k,l) ·1/|Ω_j|∑_k=1^3(∑_l=1^2 w_k,l h_2(𝐱_k,l) )n_k,x|Γ_k| +O(Δ X^2). where |Ω_j|, |Γ_k|, ω_l, n_x and 𝐱_k,l have the same definition as those in Eq. (<ref>), Δ X is the mesh cell size, w_k,l is the weight to obtain the numerical integration over Ω_j based on h_1(𝐱_k,l), and w_k,l=1/6. The second-order spatial discretizations is implemented in Eq. (<ref>). High-order discretization of ℒ^S_2_j(W) can be achieved by introducing more numerical integration points. However, considering the balance between accuracy and efficiency, the simple method given in Eq. (<ref>) is adopted for the spatial discretization of the second part of the source term in this paper. The compact GKS of the TLSWE is a well-balanced scheme. The well-balanced property is achieved through the balance of the time-accurate flux function at the cell interface and the spatial discretization of the source terms inside the control volume. In the previous study <cit.>, the well-balanced GKS for the single-layer SWE on triangular mesh has been developed, where a corresponding well-balanced evolution solution of the gas distribution function shown in Eq. (<ref>) is obtained. For the TLSWE, with the well-balanced initial conditions h_1+B=Const, h_2=Const, and (U_1,V_1)=(U_2,V_2)=(0,0), at the quadrature points on the cell interface, the initial conditions should be ∇ h_2=0 and ∇ (h_1+B)=0. With the adoption of water level reconstruction technique <cit.>, this initial condition can be preserved numerically. As a result, the compact GKS for the TLSWE can keep such a solution and the scheme is a well-balanced one. In the following, the solution update in the compact GKS on the triangular mesh will be presented. §.§ The time evolutions of flow variables and their derivatives By adopting the S2O4 time stepping method <cit.>, the fully discretized form of the TLSWE in Eq. (<ref>) over the cell Ω_j in a time step [t^n,t^n+1] is given by W^n+1/2_j= W^n_j+1/2Δ tℒ_j(W^n)+1/8Δ t^2∂/∂ tℒ_j(W^n), W^n+1_j= W^n_j+Δ tℒ_j(W^n)+1/6Δ t^2∂/∂ tℒ_j(W^n) +1/3Δ t^2∂/∂ tℒ_j(W^n+1/2), where ℒ_j=ℒ^F_j+ℒ^S_j includes the flux and source term contribution. In the current compact GKS, besides the update of cell-averaged flow variables in Eq. (<ref>), the cell-averaged derivatives can be updated as well by the Gauss's theorem as ∇𝐖_j(t^n+1) =1/|Ω_j|∫_∂Ω_j W(𝐱,t^n+1) nd S, with the discretized form ∇W_j^n+1 =1/| Ω_j |∑_l=1^l_0(|Γ_l| 𝐧_l∑ _k=1^q ω_kW^n+1(x_k) ), where |Ω_j|, |Γ_l|, l_0, ω_k and n_l have the same definition as those in Eq. (<ref>). The flow variables 𝐖(𝐱,t^n+1) should be provided at the inner sides of the cell boundary of the control volume at the time step t^n+1. Fig.<ref> shows the time-accurate flow variables and fluxes on the cell interface from the evolution solution of the gas distribution function in the compact GKS. In the discrete scheme, the discontinuous evolution solution 𝐖^l,r(𝐱,t^n+1) at the cell interface have been obtained in the GKS for the highly compressible Navier-Stokes solutions <cit.>. 
However, in the current study for the shallow water equations, a continuous evolution solution, namely 𝐖^l=𝐖^r, for the update of the cell-averaged derivatives within the cell by Eq. (<ref>) works very well. In order to obtain a high-order time-accurate flow variable at the quadrature point in Eq. (<ref>), the macroscopic flow variable is evolved by two stages W^n+1/2(x)=W^n(x)+1/2Δ t W_t^n(x), W^n+1(x) =W^n(x)+Δ t W_t^n+1/2(x).
§.§ High-order compact reconstruction In this section, the high-order compact spatial reconstruction for the flow variables is presented. Based on the cell averages and their derivatives, a high-order reconstruction can be obtained compactly with stencils involving only the closest neighboring cells, as shown in Fig. <ref>. The compact stencil provides consistent domains of dependence between the numerical and physical ones. Reconstructions with fourth- to sixth-order accuracy can be obtained on the compact stencils <cit.>. The fourth-order reconstruction is used in this study. For the fourth-order reconstruction, a polynomial P^3(x) is constructed as P^3(x)=∑_k=0^9 a_k φ_k(x), where the a_k are the degrees of freedom (DOFs) of P^3(x), the total number of a_k is 10 since the complete polynomial basis up to third order is included, and x=(x,y) is the coordinate. The basis functions φ_k(x) can be taken as the zero-averaged basis 1,  δ x-δ x^(0),  δ y-δ y^(0),  1/2δ x^2-1/2δ x^2^(0),  δ xδ y-δ xδ y^(0),  1/2δ y^2-1/2δ y^2^(0),  ⋯. To fully determine P^3(x), the DOFs on the cells of the compact stencil are selected to give the constraints on P^3(x). (1/|Ω_l |∫_Ω_lφ_k(x) dx dy ) a_k=Q_l, (1/|Ω_l |∫_Ω_lφ_k,x(x) dx dy ) a_k=Q_l,x, (1/|Ω_l |∫_Ω_lφ_k,y(x) dx dy ) a_k=Q_l,y, where the repeated subscript k of φ_k and a_k on the left-hand side of the equations follows the Einstein summation convention. Q_l, Q_l,x and Q_l,y are the DOFs in the cells for any component of 𝐖. Due to the arbitrary geometry of the triangular mesh, the number of equations M in Eq. (<ref>) should be greater than the number of DOFs a_k to avoid an ill-conditioned system. For the fourth-order reconstruction, the set of DOFs S_0 is given by S_0={Q_l_1,Q_l_2,x,Q_l_2,y},  l_1=0,i,j,k,i_1,i_2,⋯,k_2, l_2=0,i,j,k. Eq. (<ref>) determines a linear system for the a_k, which is written as a_0=Q_0, and ( [ A_1,1 A_1,2 ⋯ A_1,9; A_2,1 A_2,2 ⋯ A_2,9; ⋮ ⋮ ⋮ ⋮; A_9,1 A_9,2 ⋯ A_9,9; A^x_0,1 A^x_0,2 ⋯ A^x_0,9; A^y_0,1 A^y_0,2 ⋯ A^y_0,9; ⋮ ⋮ ⋮ ⋮; A^y_3,1 A^y_3,2 ⋯ A^y_3,9; ]) ( [ a_1; a_2; ⋮; a_9; ]) = ( [ Q_1-Q_0; Q_2-Q_0; ⋮; Q_9-Q_0; Q_0,xh; Q_0,yh; ⋮; Q_3,yh; ]) , where A_l,k and A^r_l,k are defined as A_l,k =1/|Ω_l |∫_Ω_lφ_k(x)  dx dy, A^r_l,k =h/|Ω_l |∫_Ω_l∂φ_k(x)/∂ r  dx dy,  r=x,y,  k=1,2,⋯,9. The system can be solved by the least squares (LS) method. The solution for a_k, (k=1,2,⋯,9) is given by 𝐚=[(𝐀^T𝐀)^-1𝐀^T] 𝐐, where 𝐚 is the vector of DOFs without a_0, 𝐀 is the coefficient matrix in Eq. (<ref>), and 𝐐 is the vector of the RHS in Eq. (<ref>). To deal with discontinuities in the solution, a nonlinear reconstruction is needed. The nonlinear compact reconstruction is obtained based on the WENO method by nonlinearly combining the high-order polynomial P^3 with several lower-order polynomials, where the lower-order polynomials are determined on sub-stencils using the LS method. The nonlinear reconstruction in the compact GKS has been developed in <cit.>, and the same techniques are used here. 
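The least-squares step above amounts to solving a small overdetermined linear system per cell. A schematic sketch is given below; it is for illustration only, and the rows of A and the right-hand side Q are assumed to have been assembled from the cell-averaged basis integrals and the cell-averaged DOFs of the compact stencil, as described above.

```python
import numpy as np

def reconstruction_pseudoinverse(A):
    """Geometry-only part of the reconstruction: equals (A^T A)^{-1} A^T for a
    full-rank constraint matrix A, so it can be precomputed once per cell."""
    return np.linalg.pinv(A)

def solve_reconstruction_dofs(A, Q):
    """Least-squares solution a = [(A^T A)^{-1} A^T] Q for the DOFs a_1..a_9;
    a_0 = Q_0 is fixed separately by the cell average of the target cell."""
    a, *_ = np.linalg.lstsq(A, Q, rcond=None)   # numerically safer than forming (A^T A)^{-1} explicitly
    return a
```

Since the constraint matrix depends only on the mesh geometry, the pseudo-inverse can be stored per cell and reapplied to each flow variable at every stage.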
§ NUMERICAL VALIDATIONS The compact GKS for the two-layer SWE is validated in this section on a series of two-layer shallow water flow cases. All the computations in this section are performed on 2-D triangular mesh. The time step used in the computation is determined by the CFL condition as Δ t=CFLΔ X/U_max, where Δ X is the size of the mesh cell, U_max=max{√(U_1^2+V_1^2)+√(Gh_1),√(U_2^2+V_2^2)+√(Gh_2)}, and the CFL number is taken as 0.5. The gravitational acceleration is taken as G=9.81 if not specified. The collision time τ in the BGK model for inviscid flow at a cell interface is defined by τ=εΔ t + ε_num|(h_l^2-h_r^2)/(h_l^2+h_r^2)|Δ t, where ε=0.05, ε_num=5, and h^2_l and h^2_r are the pressures at the left and right sides of a cell interface. The pressure jump term is included in the relaxation time to enhance the artificial dissipation in the case of bore waves.
§.§ Accuracy test The accuracy of the compact GKS with the high-order compact reconstruction is tested first. In order to calculate the error in the numerical solution, an initial condition with an analytical evolution solution is used, h_1=0.9+0.02e^-50((x-1)^2+(y-1)^2), h_2=1-h_1, with a uniform velocity (U_1,V_1)=(U_2,V_2)=(1,1). The density ratio is taken as χ=1.0. The gravitational acceleration is G=9.81. The free boundary condition is applied. The analytical solution of this problem is given by h_1(t)=0.9+0.02e^-50((x-1-t)^2+(y-1-t)^2), h_2(t)=1-h_1(t), (U_1(t),V_1(t))=(U_2(t),V_2(t))=(1,1). The computational domain is taken as [0,2]×[0,2]. The triangular mesh is used. The L^1 errors of h_1 and h_2 at t=0.1 and the convergence orders are presented in Table <ref>. The convergence order of the current compact GKS does not reach fourth order because a second-order approximation is used when discretizing the source term in Eq. (<ref>). Although the optimal fourth-order convergence is not realized, the advantage of the high resolution from the compact spatial reconstruction will be demonstrated in other complex flow problems.
§.§ Well-balanced property The well-balanced property of the compact GKS on unstructured mesh is validated in the following. The initial condition is a two-dimensional steady-state solution with non-flat bottom topography. The bottom topography is B(x,y)=0.5e^-50[(x-1)^2+(y-1)^2]. The steady state is h_1=0.8-B(x,y), h_2=0.2, and all the velocities are 0. The density ratio and the gravitational acceleration are taken as χ=1.0 and G=9.81, respectively. The computational domain is [0,2]×[0,2]. The triangular mesh with cell size Δ X=0.05 is used. The wall boundary condition is imposed on all the boundaries. The discretized bottom topography is shown in Fig. <ref>. The error history of the flow variables is plotted in Fig. <ref>. The error remains at the same level at different computational times. At very long computation times, the errors in the water surface level and momentum are less than 1.0×10^-8. The current compact GKS is able to maintain an initially balanced steady-state solution.
§.§ Riemann problems of TLSWE In this section, Riemann problems with a discontinuity at the interface between the two fluid layers are studied to validate the compact GKS for the TLSWE. Due to the unequal densities of the two fluid layers, the discontinuity at the interface will evolve and propagate. The first test was introduced to verify the stability of numerical schemes for unsteady two-layer exchange flows <cit.>. 
It can also be used to evaluate the accuracy of different numerical schemes in computing unsteady solutions over a flat bottom. The initial water level is set as (h_1,h_2) = (0.5,0.5),     0≤ x<0.3, (0.55,0.45), 0.3≤ x≤1, and the uniform velocity (U_1,V_1)=(U_2,V_2)=(2.5,0) is given in the whole domain. In the computation, the 2-D computational domain is taken as [0,1]×[0,0.5], and the triangular mesh is used. The computational time is t=0.1. The density ratio is χ=0.98. The gravitational acceleration is taken as G=10 in this case. The coarse mesh with Δ X=1/100 used in the computation and the 3-D water surface obtained by the compact GKS are shown in Fig. <ref>. The solution of the evolved free surface has a square-wave structure with small variation. The current compact GKS captures this solution with no obvious numerical oscillations. In Fig. <ref> and Fig. <ref>, the water levels along the horizontal centerline of the computational domain is plotted, where the results on a finer mesh with Δ X=1/400 are also given to verify the mesh convergence solution from the current compact GKS. To quantitatively verify the correctness of the results obtained by the current scheme, the reference solution obtained by the 1-D model with a cell size of Δ X=1/10000 in <cit.> is also plotted. The compact GKS gives consistent solutions on both coarse and fine meshes. The resolution of the local solution structure on the fine mesh by the compact GKS is comparable to the reference solution. The second case is the Riemann problem with a large discontinuity at the interface between the two layers <cit.>. The initial value of water levels is given by (h_1,h_2) = (0.2,1.8),  0 ≤ x<5, (1.8,0.2),  5 ≤ x<10. The initial velocity is 0. The water density ratio is χ=0.98. The gravitational acceleration is taken as G=9.81. The computational domain is set as [0,10]×[0,1]. The triangular mesh with a cell size of Δ X=1/40 is used in the computation. The evolved results at t=1.0 obtained by the compact GKS is presented in Fig. <ref> and Fig. <ref>. In Fig. <ref> the result of h_1 on the 2-D triangular mesh is compared with the reference solution presented in <cit.>. Good agreement has been obtained. The water levels of the first layer together with the water surface and discharge are plotted in Fig. <ref>. §.§ Dam-break problems at different density ratios The two-layer dam-break flows are used to validate the compact GKS. The initial state is given as (h_1,h_2) = (0.357,1), 0≤ x<0.5, (0.357,0), 0.5≤ x≤1. The velocity is set as (U_1,V_1)=(U_2,V_2)=(0,0) in the whole domain, and the computational domain is [0,1]×[0,0.5]. The gravitational acceleration is G=9.81. Dam-break flows at two density ratios are studied. In the computation, a coarse triangular mesh with Δ X=1/100 and a fine triangular mesh with Δ X=1/400 are used. The first case is the dam-break flow at same density of the two layers, i.e., the density ratio with χ=1. The 3-D water level distributions of h_1+h_2 and h_1 at t=0.08 obtained by the compact GKS on the coarse mesh are shown in Fig. <ref>. The water levels and discharge distributions along the horizontal centerline are given in Fig. <ref>. The results on the coarse mesh are consistent with those on the fine mesh, and the water levels obtained by the current compact scheme are consistent with those in <cit.>. The second case is the dam-break flow of a light fluid over a dense one. The density ratio is χ=0.2. 
The 3-D water level distributions of h_1+h_2 and h_1 at t=0.08 obtained by the compact GKS are shown in Fig.<ref>. The 1-D water levels and discharge distributions along the horizontal centerline are given in Fig.<ref>. Due to the complexity of the solution, the fine mesh result has a better spatial resolution and gives the solution close to the reference ones in <cit.>. §.§ Channel flow with non-flat bottom This case is about the two-layer flow through a channel with non-flat bottom topography. The bottom topography is defined by B(x,y)=0.5e^-100(x-0.5)^2. The initial condition is given as h_1 =0.8-B(x,y),  h_2=0.4, U_1 =-0.2,            U_2=0.15. The channel covers a domain [0,1]×[0,0.25]. The reflecting boundary condition is applied at the channel walls. The free boundary condition is used on the left and right boundaries. The triangular mesh with a cell size of Δ X=1/200 is used in the computation. Fig. <ref> shows the results of water levels at t=0.1 and t=1.0, respectively. Due to the non-flat bottom topography, the interface between two layer fluids evolves from an initial smooth interface to a discontinuous one. The reference solution comes from solving the 1-D TLSWE on a uniform mesh with 1000 cells in <cit.>. At the early time, a smooth interface evolves, such as the left figures in Fig. <ref>, and the solution has good agreement with the reference solution. At a later time, a discontinuous interface emerges, such as the right figures in Fig. <ref>, and the position of the discontinuity obtained by the compact GKS has a good match with the reference solution. §.§ 2-D interface propagation The 2-D circular interface propagation Riemann problem is studied. The initial condition of the test case is given by (h_1,h_2) = (1.8,0.2),    (x-5)^2+(y-5)^2 <4.0, (0.2,1.8),   otherwise. The initial velocity is (U_1,V_1)=(U_2,V_2)=(0,0) in the computational domain [0,10]×[0,10]. The gravitational acceleration is G=9.81. The density ratio between layers is χ=0.98. The free boundary condition is adopted on all boundaries. The triangular mesh with a cell size of Δ X=1/10 is used in the computation. The 3-D water level distributions of h_1 and its distributions along the horizontal centerline at t=0, t=2.0 and t=4.0 are presented in Fig. <ref> and Fig. <ref>, respectively. The results show the circular propagation of the water column. §.§ 2-D dam-break in an irregular domain The 2-D dam-break problem in <cit.> is used in the current study to validate the compact GKS. Fig. <ref> shows the computational domain and the mesh. The length of the dam breach is 75 and it starts at y=95. The dam itself has a width of 10 and its left side is located at x = 95. At t=0 the stationary water surface has a discontinuity with h_l=10 and h_r=ϵ across the breach, and two values of ϵ=5 and ϵ=1× 10^-3 are used to simulate the wet and dry bed cases, respectively. For the wet case, the individual water levels of layer 1 and layer 2 are set as (h_1,h_2) = (9,1),  0≤ x<95, (5,0),  95≤ x. For the dry case, the individual water levels of layer1 and layer 2 are set as (h_1,h_2) = (9,1),  0≤ x<95, (ϵ,ϵ),  95≤ x. The boundary condition on the far right is the free boundary, and the other boundary conditions are the non-penetration slip wall boundaries. The mesh size far from the breach is h_mesh=2.5, and is locally refined by 3.3 times around the dam breach. The 3-D water surface heights at t=7.2 are shown in Fig.<ref> and Fig.<ref>. The discontinuous bore waves are captured without spurious oscillation. 
The results clearly show that the wave propagation speed is higher in the dry-bed case.
§ CONCLUSION In this study, we have developed a compact high-order gas-kinetic scheme (GKS) on triangular mesh to solve the two-layer shallow water equations (TLSWE). The compact scheme is highly accurate and robust in capturing discontinuous solutions. The gas evolution model at the cell interface in the kinetic scheme explicitly captures the dynamics of particle free transport, collisions, and the acceleration from the external forcing term along the particle trajectory. The time-accurate evolution solution provides not only the flow variable update inside each cell, but also the gradients of the flow variables. As a result, based on the updated flow variables and their gradients, a compact stencil can be used in the reconstruction and in the design of the compact scheme. The compact GKS has several key features in solving the TLSWE. The high-order compact reconstruction on triangular mesh is naturally obtained. The availability of the time derivative of the flux function allows fewer stages to achieve high-order accuracy in time, such as two stages for fourth-order temporal accuracy. This compact GKS provides accurate numerical solutions for the TLSWE and is ready for engineering applications to coastal ocean flows.
§ ACKNOWLEDGMENTS The current research is supported by CORE as a joint research centre for ocean research between QNLM and HKUST through the project QNLM20SC01-A and QNLM20SC01-E, the National Natural Science Foundation of China (No. 12172316), and Hong Kong research grant council 16208021 and 16301222.
§ REFERENCES
http://arxiv.org/abs/2306.03743v1
20230606150100
A Chondritic Solar Neighborhood
[ "Isabella L. Trierweiler", "Alexandra E. Doyle", "Edward D. Young" ]
astro-ph.EP
[ "astro-ph.EP", "astro-ph.SR" ]
Department of Earth, Planetary, and Space Sciences, University of California, Los Angeles, Los Angeles, CA 90095, USA Isabella L. Trierweiler [email protected] Department of Earth, Planetary, and Space Sciences, University of California, Los Angeles, Los Angeles, CA 90095, USA [email protected] Department of Earth, Planetary, and Space Sciences, University of California, Los Angeles, Los Angeles, CA 90095, USA A persistent question in exoplanet demographics is whether exoplanetary systems form from similar compositional building blocks to our own. Polluted white dwarf stars offer a unique way to address this question as they provide measurements of the bulk compositions of exoplanetary material. We present a statistical analysis of the rocks polluting oxygen-bearing white dwarfs and compare their compositions to rocks in the Solar System. We find that the majority of the extrasolar rocks are consistent with the composition of typical chondrites. Measurement uncertainties prevent distinguishing between chondrites and bulk Earth, but do permit detecting the differences between chondritic compositions and basaltic or continental crust. We find no evidence of crust amongst the polluted white dwarfs. We show that the chondritic nature of extrasolar rocks is also supported by the compositions of local stars. While galactic chemical evolution results in variations in the relative abundances of rock-forming elements spatially and temporally on galaxy-wide scales, the current sample of polluted white dwarfs are sufficiently young and close to Earth that they are not affected by this process. We conclude that exotic compositions are not required to explain the majority of observed rock types around polluted white dwarfs, and that variations between exoplanetary compositions in the stellar neighborhood are generally not due to significant differences in the initial composition of protoplanetary disks. Nonetheless, there is evidence from stellar observations that planets formed in the first several billion years in the Galaxy have lower metal core fractions compared with Earth on average. § INTRODUCTION The growing sample of exoplanets has inspired many studies detailing their compositions and interiors. Analyses of exoplanet compositions using mass and radius relationships or through extrapolating stellar abundances have led to a wide range of possible exoplanet compositions <cit.>, including Earth-like compositions, but also carbon-rich planets <cit.>, coreless super-Earths <cit.>, and mineralogies with no Earth-rock counterparts <cit.>. This hypothesized diversity of exoplanet compositions motivates us to benchmark the variety of putative non-Earth like planets against the compositions of exoplanetary rocks accreted by polluted white dwarfs (WDs). The metal pollution on WDs is caused by accretion of exoplanetary debris and provides direct measurements of bulk compositions of extrasolar rocks that are not susceptible to the same degeneracies as the mass/radius approach <cit.>. The vast majority of WD pollutants are rocky, with some fragments identified as specifically core or crust-like <cit.>. Some water-rich objects have also been identified, with possible parent bodies including Kuiper Belt analogs or exomoons <cit.>. We analyze the abundances from 31 oxygen-bearing polluted WDs. The presence of O, along with other major rock-forming elements such as Si, Mg, and Fe indicate that these WDs are accreting rocky material. 
We compare the abundances of the WD pollution to rocks throughout the Solar System, an approach motivated by previous WD studies <cit.>. We also carry out the same analysis for local stars, as a proxy for protostellar disk environments and as a broad representation of the system's rocky planet compositions <cit.>. For this purpose, we use the Hypatia catalog of stars, which includes elemental abundances for thousands of stars within ∼500 pc of the Sun <cit.>. Throughout, we compare WD and stellar compositions to solar system rocks using a reduced chi-squared goodness-of-fit test. While individual stars may show unusual amounts of particular elements, we find in this work that the majority of WD pollution is indistinguishable from chondrites in composition, when accounting for uncertainties in the measured abundances. The whole-rock compositions of CI chondrites are considered a proxy for the relative abundances of rock-forming elements of the Solar System, as they are the best compositional match to the Sun <cit.>, and we use them here as representative of chondrites in general. This paper is organized as follows. In Section <ref> we outline the χ^2 calculation used to test the goodness of fit of each set of abundances to CI chondrite. To demonstrate the method, we apply the χ^2 test to Solar System rocks in Section <ref>. We then carry out fits for the WD polluters in Section <ref> and for the Hypatia catalog stars in Section <ref>. We discuss the impact of galactic chemical evolution on polluted WD and Hypatia compositions in Section <ref> and present our conclusions in Section <ref>. § METHODS Throughout this work we compare observed abundances to the CI chondritic composition <cit.> by computing reduced χ^2 values (χ^2_ν). Measurement uncertainties for the WDs are propagated using a Monte Carlo approach. Uncertainties for the Hypatia catalog stars are gathered from the catalog <cit.>. For each star, we use the relative concentrations of Si, Fe, Al, Ca, Ni, and Cr, where available, all normalized to Mg. We do not include more volatile elements such as C, N, or O in the comparisons as we are primarily concerned with rock compositions in this work. Because a very diverse range of physical processes can vary volatile abundances during planet formation <cit.>, volatile abundances are not necessarily related to rock compositions. Excluding these elements therefore allows for more direct comparison of the underlying rock to Solar System samples. Additionally, while O is a major element in rocks, its abundance is correlated with the other included rock-forming elements in oxides, providing further motivation to exclude it from the χ^2_ν calculations. Starting with log abundances for each star and WD, we construct a random sample of abundances for each element assuming a normal distribution based on the reported logarithmic abundance ratios and their uncertainties. We then transform the distribution of logarithmic relative abundances to a distribution of number ratios for each element relative to Mg. The reported symmetric errors in the logs lead to asymmetric distributions in number ratios, so we select our assumed abundance ratios and uncertainties as the median, 16.5, and 83.5 percentiles from the distributions. Errors in ratios of elements are obtained by propagation of uncertainties in the individual elements using Monte Carlo sampling. 
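The uncertainty propagation described above can be sketched in a few lines; this is our own illustration, and the number of draws and the random seed are arbitrary. Logarithmic abundances of the element and of Mg are drawn from normal distributions, the ratio is formed draw by draw, and the median with 16.5/83.5 percentile bounds gives the asymmetric uncertainty on n_Z/n_Mg.

```python
import numpy as np

def ratio_with_asymmetric_errors(logZ, sigZ, logMg, sigMg, n_draws=100_000, seed=0):
    """Monte Carlo propagation of symmetric log-abundance errors into an
    asymmetric uncertainty on the number ratio n_Z/n_Mg."""
    rng = np.random.default_rng(seed)
    ratio = 10.0 ** rng.normal(logZ, sigZ, n_draws) / 10.0 ** rng.normal(logMg, sigMg, n_draws)
    lo, med, hi = np.percentile(ratio, [16.5, 50.0, 83.5])
    return med, med - lo, hi - med   # value, sigma_minus, sigma_plus
```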
To address the asymmetric uncertainties in the ratios of elements arising from reported symmetric errors in logs of the ratios for both the WDs and Hypatia catalog stars, we use the following equation to calculate the χ^2 goodness of fit for each element i relative to Mg: χ^2_i = (δ_i/σ_i)^2 (1 - 2A δ_i/σ_i + 5A^2 (δ_i/σ_i)^2 ), where δ_i is the difference between the observed and expected element ratio, σ_i is the average of the upper and lower errors, and A describes the asymmetry in the errors as A = (σ_+ - σ_-)/(σ_+ + σ_-), where σ_+ and σ_- are the asymmetrical measurement uncertainties for element i <cit.>. To find the reduced χ^2, we sum over all elements and divide by the degrees of freedom, taken to be the number of elements (excluding Mg) measured for the given star. We define the passing conditions (accepting the alternative hypothesis H_ a that the rocks are chondritic) for the χ^2_ν tests using the parameter α, the probability of randomly obtaining a χ^2_ν value greater than the one calculated for the observed abundances (e.g. the probability of incorrectly rejecting the null hypothesis, H_0, that the rocks are not chondritic). Following convention, we place the α limit at 0.05, so that any stars identified as chondritic compositions must have a χ^2_ν with an α < 0.05, implying a H_ a=1-α probability that the correspondence with chondrite is not due to random chance. Because our sample sizes are very small, we must account for errors in the χ^2_ν values. The error in χ^2_ν can be approximated as σ = √(2/n) <cit.>, where n is the number of data points for a given star's composition. We therefore define the critical reduced chi-square values as χ^2_ν, crit = χ^2_ν (α = 0.05) + 2 √(2/n), allowing for a 2σ error in χ^2_ν. These constraints give critical χ^2_ν values of ∼ 3 to 4, for n from 3-6 (excluding Mg), varying inversely with the number of elements observed for each star. For a given star, if the elements available define χ^2_ν ≲ 3 to 4, the data are taken as evidence for chondritic rocky parent bodies or planets. In order to identify outliers in the elemental abundances for each WD and Hypatia star, we apply a Dixon's Q test <cit.> with a confidence level of 95% (p=0.05). We choose this test as it is best suited for small sample sizes. For this test we convert abundances to (n_ Z/n_ Mg) / (n_ Z/n_ Mg)_ CI such that 1 represents a perfect fit to chondrite. Outlier elements are therefore the elements with the worst fits to chondrite (other than Mg), and we identify an outlier in six of the WDs. Stars which pass as chondritic when an outlier is ignored are considered “soft passes." § SOLAR SYSTEM ROCKS To test our ability to differentiate between different rock types using the methods of Section <ref>, we first apply our test for chondritic compositions to rocks in the Solar System, including bulk Earth (BE) and bulk silicate Earth (BSE, <cit.>), mid-ocean ridge basalt (MORB), continental crust (CC, <cit.>), bulk silicate Mars (BSM, <cit.>), and E chondrites (EH, <cit.>). For each element, we apply the mean uncertainty calculated from our sample of WDs for that element, with the resulting uncertainties generally ranging from 0.15-0.30 dex. We find that BE, BSE, BSM, and the E chondrites are indistinguishable from CI chondrites, while MORB and CC are very clearly not good matches to CI chondrite in these tests (Figure <ref>). 
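The goodness-of-fit machinery used for these and all subsequent comparisons reduces to a few lines of code. The sketch below uses our own function names and relies on scipy only for the χ^2 quantile; it is an illustration of the statistic as defined above, not the analysis pipeline itself.

```python
import numpy as np
from scipy.stats import chi2

def reduced_chi2_asymmetric(observed, expected, sigma_plus, sigma_minus):
    """Reduced chi-square with the asymmetric-error correction defined above."""
    observed, expected = np.asarray(observed), np.asarray(expected)
    sp, sm = np.asarray(sigma_plus), np.asarray(sigma_minus)
    delta = observed - expected
    sigma = 0.5 * (sp + sm)                 # mean of upper and lower errors
    A = (sp - sm) / (sp + sm)               # asymmetry parameter
    r = delta / sigma
    terms = r**2 * (1.0 - 2.0 * A * r + 5.0 * A**2 * r**2)
    n = len(observed)                       # number of element ratios (Mg excluded)
    return terms.sum() / n

def critical_chi2(n, alpha=0.05):
    """Passing threshold: the alpha = 0.05 value of chi^2_nu plus a 2-sigma
    allowance for the uncertainty of chi^2_nu itself."""
    return chi2.ppf(1.0 - alpha, n) / n + 2.0 * np.sqrt(2.0 / n)
```

With n=3 the threshold evaluates to about 4.2 and with n=6 to about 3.3, reproducing the values quoted above.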
Bulk Earth being indistinguishable from chondritic is in contrast with the distinction typically drawn between the two rock types in previous studies <cit.>, and is the result of propagating the large uncertainties associated with the WD element ratios, which dwarf the comparatively small differences in composition among these rock types. Throughout this work, we report compositions as consistent or inconsistent with chondrites, while recognizing that with current measurement uncertainties, chondrites, Earth, and Mars are all indistinguishable. However, this test is able to definitively differentiate between chondrite-like compositions and crust, the latter representing products of igneous differentiation of chondrites.
§ WHITE DWARFS Our WD sample includes 31 WDs with detections of oxygen together with other rock-forming elements (Table <ref>). The vast majority of the WDs in our sample have atmospheres that are helium-dominated, and a handful are hydrogen-dominated. The WDs are all within about 200 pc of the Sun. For each WD, we draw stellar properties, elemental abundances, and uncertainties in abundances from the references listed in Table <ref>, supplemented by the Montreal White Dwarf Database (MWDD). We use the elements Si, Fe, Al, Ca, Ni, and Cr where available. We ratio all abundances to Mg and propagate uncertainties using the Monte Carlo approach outlined in Section <ref>. We analyze both the raw and steady-state adjusted abundances for the WD pollution. The steady-state adjustment accounts for the differential settling rates of different elements in the atmosphere of a WD. Settling rates also depend on the dominant element in the atmosphere of the WD, and range from days to millions of years <cit.>. The steady-state settling factor we use is (n_Z/n_Mg)_SS = (n_Z/n_Mg) × (τ_Mg/τ_Z), where τ_ Mg and τ_ Z are the settling timescales for Mg and a given element Z, respectively. Settling timescales for the WDs in our sample are collected from the MWDD, using the WD parameters listed in Table <ref>. These adjustments are clearly necessary for the H-dominated WDs, where settling is generally much more rapid. The suitability of the adjustment for abundances in He-dominated atmospheres is less clear. We note that the stated steady-state factor is a simplistic approach to account for settling, which does not capture potential effects such as mixing in the WD atmosphere <cit.>. For each WD, we compare the abundances normalized to Mg to the abundances measured in CI chondrites following the method outlined in Section <ref>. Figure <ref> shows this comparison for all WDs in our sample and for both the raw (top) and steady-state adjusted values (bottom). Hydrogen-dominated WDs are marked with “H". From left to right on each plot, the element ratios are Cr/Mg, Ni/Mg, Ca/Mg, Al/Mg, Fe/Mg, and Si/Mg. The dark grey panels in Figure <ref> indicate WDs which do not pass the χ^2_ν test for chondritic composition. The lighter shaded panels show the “soft pass" WDs, where ignoring an identified outlier allows the WD to pass as chondritic (see Section <ref>). Solar System rocks are shown for comparison (see Section <ref> for discussion of Solar System fits). Figure <ref> shows the χ^2_ν parameters for the WDs for the raw data versus steady-state abundances, separated by the dominant element in the WD atmosphere. We also group the WDs by the number of observed elements considered in the statistical comparison (n), to illustrate the dependence of χ^2_ν on n. 
Increasing n generally lowers both the calculated and critical χ^2_ν values. The condition for passing as chondritic at n=3 is χ^2_ν∼ 4.2 and at n=6 is χ^2_ν∼ 3.3. We find that 15 of the 31 WDs pass the χ^2_ν test as good matches to chondritic composition when using the raw abundances. One additional WD passes as chondritic when its outlier element is ignored. A larger fraction of pollution passes as chondritic with the steady-state adjustment (21/31 pass). Because the steady-state adjustment does not improve the fits for every WD (Figure <ref>), some WDs that pass as chondritic using the raw data do not pass in the steady-state case. We note that a larger proportion of WDs passing as chondritic in the steady-state case does not a priori mean the WDs are most likely to be in the steady state phase of accretion. In any case, over half of the WDs in the sample are consistent with chondritic compositions using either the raw or steady-state compositions. We find no compelling evidence for basaltic crust (MORB) or continental crust rocks among the polluted WDs. When carrying out the same χ^2_ν calculation for each WD relative to the other Solar System rock types considered here (Section <ref>), no WDs are better fit by MORB or continental crust relative to CI chondrite, even those with χ^2_ν values relative to chondrite of 100 and greater. §.§ White Dwarf Mineralogy Classification In addition to the χ^2_ν test, we also follow the common practice of representing rock chemistries as “normative mineralogies" in which elemental concentrations are converted to volumetric fractions of fictive minerals (, see also Supplement for details). We recast the WD pollution by projecting the observed abundances to a normative mineralogy composed of the relative abundances of Mg-endmember Olivine (OLV), Orthopyroxene (OPX) and Clinopyroxene (CPX). These minerals comprise a reasonable normative mineralogy used to classify ultramafic (e.g., peridotite) rocks, and chondrites are broadly similar to ultramafic rocks. The fractions of these minerals in terms of moles depend on the relative numbers of Mg, Si and Ca atoms comprising the rocks. By inverting the mineral formulae for these reference minerals where OLV = Mg_2SiO_4, OPX = Mg_2Si_2O_6, and CPX = CaMgSi_2O_6, one obtains the function that transforms relative atomic abundances of Mg, Si, and Ca to the relative molar abundances of the minerals, which in matrix form is [ n_ OLV; n_ OPX; n_ CPX ] = [ 1 -1 1; -1 2 3; 0 0 1 ]×[ n_ Mg; n_ Si; n_ Ca ]. The molar abundances of the normative minerals are converted to approximate volume fractions (as is common for reporting rock mineralogies) using nominal molar volumes for OLV, OPX, and CPX, or 4.37, 6.26, and 6.60 J/bar (J/bar = 0.1 cm^3/mole). Fe and other less abundant elements are not included in this projection. Including Fe in this projection shifts the positions of the data somewhat, but does not substantially change the results. Figure <ref> shows the WD pollution represented as the relative volume fractions of OLV, OPX, and CPX implied by each composition. For each polluted WD, we take Monte Carlo draws of Mg, Si, and Ca using the reported values and corresponding uncertainties as the parent populations, and calculate the resulting normative mineral abundances. CPX is constrained only by the relative amount of Ca in the pollution, and exhibits comparatively little scatter as Ca uncertainties are generally small. We note that the volumetric fractions resulting from this method are not necessarily physical. 
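The projection onto the normative minerals can be written compactly by inverting the stoichiometric matrix implied by the mineral formulae quoted above. The sketch below is illustrative only, with our own variable names; it returns volume fractions using the nominal molar volumes, and negative values flag compositions that fall outside the positive ternary.

```python
import numpy as np

# Cations (rows: Mg, Si, Ca) per formula unit of OLV = Mg2SiO4, OPX = Mg2Si2O6,
# CPX = CaMgSi2O6 (columns), and nominal molar volumes in J/bar.
STOICH = np.array([[2.0, 2.0, 1.0],
                   [1.0, 2.0, 2.0],
                   [0.0, 0.0, 1.0]])
MOLAR_VOLUME = np.array([4.37, 6.26, 6.60])

def normative_volume_fractions(n_Mg, n_Si, n_Ca):
    """Project relative Mg, Si, Ca abundances onto OLV/OPX/CPX volume fractions."""
    moles = np.linalg.solve(STOICH, np.array([n_Mg, n_Si, n_Ca]))
    volumes = moles * MOLAR_VOLUME
    return volumes / volumes.sum()
```

In practice this function is applied to each Monte Carlo draw of (Mg, Si, Ca), which is what produces the spread of points in the ternary diagrams.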
Because this is a projection, some of the WD abundances result in negative amounts of OLV, OPX, or CPX, leading to scatter beyond the bounds of the positive ternary coordinate system. <cit.> previously used this method to report exotic mineralogies for WD pollution, however we find that the uncertainties in Si and Mg are sufficiently large as to produce hopelessly large spreads in OLV and OPX abundances, so that it is impossible to constrain the mineralogy of the implied rocks (Figure <ref>). A similar spread in mineral abundances is derived from the steady-state data. We therefore conclude that categorizing rock pollution in WDs into rock types based on normative abundances of OLV, OPX, and CPX abundances, or similar normative mineralogies, is not possible. § HYPATIA CATALOG STARS Our Solar System exhibits a diversity of rock types originating from the same protoplanetary material, underscoring that samplings of rock can end up with very different compositions relative to the average starting material (e.g., crust vs. chondrites in Section <ref>). To benchmark the “final" exoplanetary rocks sampled by polluted WDs against protoplanetary material, we analyze the abundances of rock-forming elements in nearby stars by applying our compositional fitting method to stars in the Hypatia catalog <cit.>. These stars should reflect protoplanetary material, to the extent that stellar abundances have been shown to broadly reflect compositions of planets around their stars <cit.>. The stellar sample therefore represents a potential average of planet building materials, rather than the final rock compositions of individual rocky parent bodies sampled by the WDs. We select Hypatia catalog stars with Mg and at least two other elements among Si, Fe, Al, Ca, Ni, or Cr. All uncertainties are obtained directly from the catalog, where they are listed as either the uncertainty reported in the original study or the mean uncertainty of multiple studies, where stars are observed by multiple methods. Given the range of stars included in the Hypatia catalog, we explore how stellar type and distance may impact overall abundances. About 6500 stars in the catalog are classified as F, G, K, or M stars. In Figure <ref>, we show the range of distances from the Sun in each classification. M stars in the sample tend to be much closer to the Sun (<∼ 50 pc) compared to the rest of the Hypatia stars. For the purposes of this work, we do not attempt to fully account for potential biases in the Hypatia catalog stars arising from the number of separate stellar surveys included in the catalog, but instead point out a few factors that are relevant to our compositional tests. First, in Figure <ref> we plot the distributions of elemental abundances relative to solar abundances, colored by stellar type. In general we find the distributions are centered around solar abundances, however we note a peak in Ca in M stars at lower abundances relative to other stellar types as well as a larger fraction of F stars with low Al than other stellar types. <cit.> point out potential biases for both of these elements, including a lack of Al abundance measurements at higher metallicities, which may be altering the distribution. Additionally, most of the low [Ca/H] stars were drawn from the same single survey which may be inducing a spurious, non-physical bias in the [Ca/H] abundances. We also note that abundance uncertainties in the Hypatia catalog are strongly peaked at about 0.05 dex. 
Distance appears to have a strong influence on the uncertainties, with a larger range of uncertainties for stars closer to the Sun, though it is unclear if this is a physical effect or due to the stellar samples included. Stars within about 500 pc have a large range of uncertainties, up to 1.75 dex, while stars that are farther away have a nearly flat distribution of uncertainties at around 0.05 dex. From the Hypatia catalog we obtain abundances relative to solar abundances for each element, in the form [Z/H]= log_10(Z/H)_* - log_10(Z/H)_⊙. We convert these relative abundances to molar ratios using the following equation: n_Z/n_Mg = 10^[Z/H] + log_10(Z/H)_⊙/10^[Mg/H] + log_10(Mg/H)_⊙ = 10^[Z/H] + A(Z)/10^[Mg/H] + A(Mg), where A(Z) = log_10(Z/H)_⊙ + 12 is the solar abundance of the element Z, as defined in <cit.>. Uncertainties in the stellar abundances are propagated through this conversion using a Monte Carlo approach. We calculate the χ^2_ν goodness of fit parameter for Si, Fe, Al, Ca, Ni, and Cr, where available in each of the Hypatia catalog stars. For the elements considered in this work, we find median uncertainties of ∼ 0.05 dex for the raw abundances relative to solar. To avoid invalid values for χ^2_ν, we replace any uncertainties of 0 with the median uncertainty for the corresponding element. Figure <ref> shows the abundances for 35 randomly selected Hypatia stars. As with the WDs, white panels indicate stars that pass as chondritic, light grey panels show stars that pass when an outlier is ignored, and dark grey panels do not pass as chondritic even if outliers are ignored. We find that outliers do not make a big difference, and that about 75% of stars pass as chondritic whether or not outliers are ignored. Similar to the WDs, we find that many of the stars that do not pass as chondritic are high in Mg, so that the abundances fall systematically below chondritic values. Because the uncertainties of the abundances vary strongly with the distance of each Hypatia star, we also compute fractions of chondritic stars considering only stars within 150 pc. We find that the results for the truncated sample are very similar to those for the full sample, with about 74% of stars providing good matches to chondrites. §.§ Hypatia Mineralogy Classification Projecting the Hypatia catalog stellar abundances into normative mineralogy ternary space, we find, as with the WDs, the uncertainties are too large to constrain the volumetric proportions of minerals in a meaningful way. To illustrate this, Figure <ref> shows the abundances relative to chondrite for one of the Hypatia catalog samples, HIP 26834, yielding an excellent fit to a chondritic bulk composition. Figure <ref> shows that the uncertainties in abundances are relatively low for this star, but they nonetheless create a very large spread in OLV and OPX fractions (Figure <ref>). Similar to the WDs, calculating the normative mineralogy for all of the Hypatia catalog stars results in a large spread in OLV and OPX values that reflect only uncertainties. This is consistent with <cit.>, who find that much smaller measurement uncertainties than those of current observations are required to differentiate between unique planetary structures using stellar data. §.§ Abundance Ratio Trends in Hypatia Catalog Stars The Hypatia catalog stars exhibit some systematic trends in element abundances due to galactic chemical evolution (GCE) <cit.>. 
In particular, we note decreasing abundances of α elements relative to iron with increasing [Fe/H], where the latter is a non-linear proxy for time. This trend is well studied in the Milky Way and other galaxies in the local universe, and is broadly due to increased injection of Fe into the interstellar medium (ISM) at later times due to the delayed effects of Type Ia supernovae. The late injection alters the α element-to-Fe ratios established by core-collapse supernovae that dominated the ISM at earlier times <cit.>. Of the elements considered in this study, Fe, Cr, and Ni abundances accelerated with time in the Galaxy as a result of late-forming Type Ia supernovae accounting for about half of their overall production. The α elements Mg, Si, and Ca, on the other hand, are produced in Type II core-collapse supernovae, and increase more steadily with time in the Galaxy. Aluminum is somewhat separate from these two groups; it is also produced by Type II supernovae like the α elements, but the yield depends more strongly on the metallicity of progenitor stars <cit.>, and therefore exhibits a relatively small acceleration in abundance with time. The α elements and Al are lithophile elements while Fe, Cr, and Ni are siderophile. We note that the Hypatia catalog contains a few thousand stars in relatively close proximity to the Earth, and that trends in stellar composition therefore don't include the the wide ranges in ages or environmental affects that are observed in larger surveys <cit.>. We find that the fits to chondrite are influenced by the evolving lithophile/siderophile ratios. In Figure <ref>, we show the fractional difference between the observed abundances and chondrite for the Hypatia stars that do not pass as chondritic. For the chondritic stars, all of these distributions are centered at zero. However, Figure <ref> shows that Fe, Cr, and Ni abundances relative to chondrite are lower than those of the lithophile elements by about a factor of two. This suggests that Type Ia products are inflating the χ^2_νs of non-chondritic stars relative to the α elements. Quantitatively, we find that of the ∼ 2000 Hypatia catalog stars that do not pass as chondritic, 71% have a siderophile element as their worst fitting abundance ratio. Of this subset of stars, 71% pass as chondritic if Fe, Cr, and Ni abundances are ignored, meaning that when stars have anomalous siderophile abundances relative to the chondrite, they typically fail as chondritic because of the siderophiles. Meanwhile, 4% of stars with lithophiles as the worst fitting element pass as chondritic when lithophiles are ignored. In other words, the majority of stars that fail with anomalous lithophile elements are not failing solely because of the lithophile elements. We do not see these same patterns in the WD data. Amongst the WDs, 7/15 of the failures in the raw data and 2/10 of the failures in the steady-state are due to siderophiles. We do not find higher recovery rates amongst the siderophile failures when removing siderophiles elements. In Figure <ref> we show four plots of abundance ratios of the Hypatia stars, illustrating the effect of GCE on the goodness of fit to chondrite. The overall trends in relative abundances of lithophile and siderophile elements are plotted as [Mg/H] (lithophile, α nuclide) against [Fe/H] (siderophile, and a proxy for time) in panel A and the corresponding [Mg/Fe] ratios against [Fe/H] in panel B. 
As a zero-order approximation of chemical evolution in the local neighborhood, we categorize the trends in the data into two stages of pre- and post-injection of Fe, Cr, and Ni by Type Ia supernovae. The break between trends is around [Fe/H] ∼ -0.5, corresponding to ∼ 8 billion years before present <cit.>. The pre-Type Ia arrow in Figure <ref> shows the general trends in α nuclides (lithophiles), represented here by Mg, relative to siderophile abundances at low metallicities prior to the influence of Type Ia supernovae on the ISM. The post-Type Ia arrow shows the trend for higher metallicity stars formed after Type Ia supernovae began to influence the ISM. The line in panel B shows the induced correlation between [Mg/Fe] and [Fe/H] that would be expected if Mg abundances were completely independent of Fe. At lower metallicity, we find that Mg and Fe abundances increase at very nearly the same rate, resulting in nearly constant [Mg/Fe] with metallicity. The increase in ISM Fe at later times flattens the growth of Mg vs Fe, resulting in a negative slope in [Mg/Fe] with metallicity. In panel A, we fit the low and high metallicity ranges and find a slope of 0.98 for the low end and 0.88 for the high end, with uncertainties in the slopes of less than 0.005. The decrease in slope is a reflection of the influence of the Type Ia supernovae at later times. For panel B, we find slopes of -0.03 and -0.19 for the low and high metallicity ranges, respectively. We again show the [Mg/Fe] ratio as a function of metallicity in panel C, with points colored by whether the star passes as chondritic in the χ^2_ν tests, as well as the occurrence levels for chondritic and non-chondritic stars. The contours illustrate the somewhat different distributions of the chondritic and non-chondritic stars. We find that stars with very low metallicity, i.e. low Fe abundance, are those that are often classified as non-chondritic. Finally, in lithophile-lithophile space (panel D of Figure <ref>), we find that the Hypatia catalog stars have a range of ratios centered on the Sun (the white star in Figure <ref>D). Consistent with GCE models, no overarching trends in ratios are seen in this case, and we find very little separation between the ratios of the Hypatia catalog stars that are considered chondritic and those of non-chondritic stars. We conclude that older, lower metallicity stars are less likely to be consistent with a chondritic composition. In the χ^2_ν tests, stars that are statistically distinct from chondritic more often have low Fe, Cr, and Ni compared with solar, indicating that deviations from chondritic compositions are in part attributable to the delayed effects of Type Ia supernovae. The Hypatia catalog stars are all in relatively close proximity to the Sun, so while we find that the rock-forming element ratios in most of the stars are consistent with chondrites, it is possible that this conclusion would not apply to older populations of stars or stars located outside of the local disk of the Milky Way due to the effects of GCE on lithophile/siderophile ratios. § DISCUSSION A summary of the fractions of bodies that are consistent with chondritic compositions is shown in Table <ref>. The leave-out-outliers (“LOO”) column includes samples that pass as chondritic using the χ^2_ν test when an element identified as an outlier is ignored (Section <ref>). We find that outliers do not significantly affect the fractions of stars that are consistent with chondritic composition.
Ignoring outliers changes the classification from non-chondritic to chondritic for one WD, and shifts the fraction of Hypatia catalog stars consistent with chondritic composition by less than 1%. In Figure <ref> we show the distribution of χ^2_ν values calculated for the Hypatia catalog sample and the raw and steady-state adjusted WD data. The χ^2_ν distribution for all of the populations is most strongly peaked at low values, consistent with chondritic compositions. This suggests that the majority of extrasolar rocks in the solar neighborhood are built from material similar in composition to that which formed the Solar System. The overwhelming fraction of Hypatia stars with chondritic rock-forming element ratios suggests that any deviations from chondrite-like compositions observed in exoplanets are more likely to be a result of the specific processing during planet formation rather than the result of large differences between the initial protoplanetary source material and chondritic compositions. M stars in the Hypatia data set exhibit a tail to higher χ^2_ν values, though the majority of M stars still pass as chondritic. The difference in the M dwarf distribution relative to the others is evidently a result of different treatments of errors at near and far distances (M dwarfs are nearer) and potential systematic offsets in Ca. Many of the WDs and Hypatia catalog stars that did not pass as chondritic have high relative Mg concentrations; their abundance ratios in Figures <ref> and <ref> (lower panel) all fall below the 1-1 line for chondritic composition due to an excess in the Mg concentration used as the denominator in all ratios. Because we normalize all abundances to Mg, high Mg can systematically draw the abundance ratios away from chondrite, inflating the χ^2_ν values. Examples amongst the WDs (raw data) include WD1415+234, SDSSJ2339-0424, SDSSJ1242+5526, WD1232+563, SDSSJ0738+1835, WD1350-162, and WD1929+012. If outliers are ignored, then G241-6 and SDSSJ1043+0855 also fall into this list. For the WDs, applying the steady-state adjustment brings some, though not all, of the elements from the apparently Mg-rich WDs back to or above chondritic abundances. We find, therefore, that for WDs with excess Mg, the deviations from chondritic composition are due in no small measure to the effects of settling. We now explore how this study of local stars and WDs fits into both the overarching metallicity gradients in the Galaxy and the current landscape of inferred exoplanet compositions. §.§ Galactic Chemical Evolution The Milky Way experiences spatial and temporal variations in stellar compositions, raising the question of how representative the pervasive chondritic compositions we see in the solar neighborhood are with respect to time and place in the Galaxy. In Section <ref> we showed that galactic chemical evolution (GCE) has implications for the relative lithophile-to-siderophile ratios in stars over time, though the bulk of the Hypatia catalog stars are still consistent with chondrites. Here we evaluate the significance of chondritic rock-forming element ratios in the context of large-scale variability in the Galaxy, outside of the solar neighborhood. Spatial metallicity gradients in the Milky Way exist both radially and vertically (Galactic latitude) as a result of GCE. The disk midplane tends to have more metal-rich stars than above or below the plane and the disk itself exhibits a negative gradient, with generally higher metallicities towards the galactic center <cit.>.
Radial compositional changes may arise as annuli of the Milky Way are differentially enriched by supernovae and stellar feedback. For example, <cit.> find from cosmological simulations that the older, inner disk receives more material from Type Ia supernovae, leading to lower [Mg/Fe] compared to the outer disk. They also find that some azimuthal scatter in abundances is to be expected, though the scatter is relatively low (<∼ 0.05 dex). While variations in [Fe/H] of > 1 dex are found across the entire galactic disk, much smaller variations in [Fe/H] of about ± 0.2 dex are found for stars within 2-3 kpc of the Sun <cit.>. Radial variations in metallicity may be further damped by radial migration and mixing of stars throughout the disk. Overall metallicity is expected to rise with time in the Galaxy. For example, <cit.> showed that in the solar neighborhood at galactocentric radii of about 8 kpc, changes in [Fe/H] of about 1.5 dex are to be expected over 14 Gyr. However, the majority of this increase in metallicity occurs within the first few Gyr of galactic evolution, with changes of less than 0.5 dex [Fe/H] from about 2 Gyr onwards. Therefore, while significant compositional changes occurred very early in the Milky Way's evolution, or very close to the galactic center, we do not expect to find demonstrable effects of GCE in rock-forming element ratios among stars in the stellar disk at galactocentric radii between ∼ 4 kpc and ∼ 10 kpc as seen today. Given these trends in GCE, we now assess the impact on polluted WDs by estimating their formation times. The polluted WDs in our sample have cooling ages of about 50-600 Myr and masses between ∼ 0.5-0.75 M_⊙ (Table <ref>). These WD masses translate to initial stellar masses of ∼ 1-3 M_⊙ <cit.> and stellar lifetimes of ∼ 300 Myr - 10 Gyr <cit.>. Meanwhile, radioactive dating has constrained the age of the Milky Way to ∼ 13.8 Gyr <cit.>. Given that overall metallicity changes are most significant in the first few Gyr of the Galaxy, this suggests that GCE trends could only manifest in the lowest-mass WD in our data set (GaiaJ0218+3625), with a progenitor mass approximately that of the Sun and a corresponding lifetime exceeding 10 Gyr. Within our sample, we do not find evidence that lower mass WDs are worse fits to chondrite; however, it is possible that such trends may be evident in future, larger samples of WDs. We conclude that GCE could significantly alter the relative abundances of rock-forming material available for planet formation at early times and outside of the disk of the Milky Way. We explore a possible implication of this in the next section. We find that the majority of the current population of polluted WDs are derived from progenitors that are sufficiently young and sufficiently close to the Sun that we do not expect GCE to have a strong effect on their compositions. §.§ Iron core mass fractions One of the possible consequences of GCE for exoplanets is a change in the resulting core-mass fractions due to variations in lithophile/siderophile ratios. The mass fractions of iron-rich cores of rocky planets have been shown to be related to the iron mass fractions deduced from their host stars <cit.>. Consistent with these studies, we calculate the iron mass fractions of planets that might have formed from the material polluting WDs or around the Hypatia catalog stars as f_Fe = m_Fe / (m_Fe + m_Mg_2SiO_4 + m_MgSiO_3 + m_SiO_2), where m_i is the abundance of the species i relative to H or He multiplied by the formula weight <cit.>.
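As a concrete illustration of this calculation, the sketch below computes f_Fe from molar Mg, Si, and Fe abundances using the normative silicate transformation described immediately below, and propagates abundance uncertainties by Monte Carlo as done for the WDs. The formula weights, function names, and the handling of Mg/Si ratios outside the range 1-2 are illustrative assumptions rather than the exact treatment used in this work.

```python
import numpy as np

# Approximate formula weights in g/mol (assumed values, for illustration only).
W_FE, W_OLV, W_OPX, W_QTZ = 55.85, 140.69, 100.39, 60.08

def iron_mass_fraction(n_mg, n_si, n_fe):
    """f_Fe from molar Mg, Si and Fe abundances using the normative transformation
    quoted in the text (MgSiO3 = 2Si - Mg, Mg2SiO4 = Mg - Si, SiO2 = remaining Si).
    The handling of the Mg/Si extremes below is our own simplification."""
    if n_mg >= 2.0 * n_si:     # very Mg-rich: all Si in olivine, excess Mg (as MgO) ignored
        n_opx, n_olv, n_qtz = 0.0, n_si, 0.0
    elif n_mg >= n_si:         # 1 <= Mg/Si <= 2: the linear transformation quoted in the text
        n_opx, n_olv, n_qtz = 2.0 * n_si - n_mg, n_mg - n_si, 0.0
    else:                      # Si-rich: pyroxene limited by Mg, leftover Si as SiO2
        n_opx, n_olv, n_qtz = n_mg, 0.0, n_si - n_mg
    m_fe = n_fe * W_FE
    m_rock = n_opx * W_OPX + n_olv * W_OLV + n_qtz * W_QTZ
    return m_fe / (m_fe + m_rock)

def f_fe_spread(log_n, log_err, n_draws=100, rng=None):
    """Monte Carlo spread of f_Fe for one star: draw Si, Mg and Fe abundances (dex)
    from Gaussians set by the reported medians and uncertainties, as done for the WDs."""
    rng = np.random.default_rng() if rng is None else rng
    draws = {el: 10.0 ** rng.normal(log_n[el], log_err[el], n_draws)
             for el in ("Mg", "Si", "Fe")}
    return np.array([iron_mass_fraction(mg, si, fe)
                     for mg, si, fe in zip(draws["Mg"], draws["Si"], draws["Fe"])])
```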
The relative abundances by number of the silicate species are obtained from a linear transformation such that MgSiO_3 = 2Si - Mg, Mg_2SiO_4 = Mg - Si, and SiO_2 is any remaining Si. The mass of O in the rock is therefore derived from the Si and Mg abundances, which corrects for any O excesses due to water (for the WD sample) or O production in the Hypatia catalog stars. The mass fractions of Fe can be equated with the metal core fractions of planets given the expectation of small concentrations of Fe in the silicate <cit.>. The top panel of Figure <ref> shows f_Fe calculated from the abundances of the raw and steady-state adjusted WD data, and the Hypatia stellar abundances. The bottom panel illustrates the distribution of f_Fe resulting for four example WDs when uncertainties in observation are propagated through the transformation. The Hypatia catalog stars with higher metallicities define a slightly skewed distribution of iron mass fractions, with a well-defined mode of about 32%, indistinguishable from the core mass fraction of the Earth, a tail towards lower values, and an approximate 1σ spread of about ± 5%. The lower metallicity Hypatia catalog stars define a peak in the distribution of f_Fe ∼ 20 ± 5% (Figure <ref>). A similar variation in iron mass fractions was calculated by <cit.> for stars in the thin and thick disks and halo of the galaxy. The difference in most probable iron mass fractions obtained from the higher and lower metallicity Hypatia catalog stars suggests the possibility that planets formed in the first several billion years in the Milky Way may have tended to have smaller metal cores compared with Earth, while planets formed later are generally similar to Earth in their metal core fractions. This is broadly due to the increase in siderophile elements at later times relative to lithophiles. The WD sample size is much smaller, and plagued by larger uncertainties. The bottom panel of Figure <ref> shows the spread in f_Fe obtained for each WD after taking 100 random draws of Si, Mg, and Fe abundances from a parent population defined by the WD medians and uncertainties in each element. The 1σ uncertainty in iron mass fraction ranges from about 4-15%, with a median of ∼7%. In any case, the raw data define iron mass fractions peaking at 20-30% while the steady-state adjusted data yield a peak at 30-40%. While we do calculate low (f_Fe<10%) iron mass fractions for some WDs, the lack of further information about the ages or initial metallicities of these stars prevents us from identifying whether the low iron is due to the systematic effects of GCE. As a whole, these data suggest the majority of planets that might have formed from these polluting materials have metal core mass fractions that are not significantly different from that of an Earth-like planet. Following the discussion in Section <ref>, abundance measurements for older WDs (likely lower-mass WDs), or WDs outside of the thin disk of the galaxy, could help identify whether the lower core mass fractions are influenced by GCE. § CONCLUSIONS In this work, we show that about half of polluted WDs, and well over half of the Hypatia catalog stars, have compositions that are consistent with chondrites. We use the χ^2_ν goodness-of-fit statistic to test the composition of each star, with a threshold of α = 0.05 to select matches to chondritic composition, and allow for a 2σ error in the χ^2_ν to account for our small sample size of observed elements.
We use Monte Carlo methods to propagate the uncertainties in the observed abundances of the WDs and Hypatia catalog stars. We find that many Solar System rocks, including bulk Earth and bulk silicate Earth, bulk silicate Mars, and E chondrites, are indistinguishable from CI chondrite given current uncertainties in WD pollution measurements. Additionally, we find that we are not able to characterize either Hypatia catalog stellar abundances or WD pollution by normative mineralogies due to the impossibly large uncertainties obtained by propagating measurement uncertainties. The polluted WD data indicate that the bulk of exo-rocks are consistent with chondritic compositions. This is supported by the compositions of rocks implied by the Hypatia catalog stars, which suggest most material in the solar neighborhood formed in protoplanetary disks with rock-forming element ratios similar to our Sun. The Hypatia catalog stars do suggest, however, that galactic chemical evolution can lead to exoplanet compositions statistically different from the Solar System in the first few billion years of the Galaxy or in galactic substructures with considerably different metallicities. One implication of this is that earlier in the evolution of the Milky Way, rocky planets may have formed with substantially less massive metal cores than Earth. Our methods do not suggest any of the WD polluters are composed of crust, either MORB or continental crust. No stars in our sample are better fits to MORB or continental crust than chondrite, even WDs with the largest deviations from chondritic composition (χ^2_ν≫ 10). We conclude that the relative abundances of rock-forming elements in polluted WDs and local stars are relatively homogeneous, which suggests that the majority of extrasolar rocks in the solar neighborhood originate from chondrite-like compositions. The authors thank Pratik Gandhi (University of California, Davis) for helpful discussions on galactic chemical evolution. We also thank the referees for their comments, which improved the manuscript. This work was supported by NASA 2XRP grant No. 80NSSC20K0270 to EDY. The research shown here acknowledges use of the Hypatia catalog Database, an online compilation of stellar abundance data as described in <cit.>, which was supported by NASA's Nexus for Exoplanet System Science (NExSS) research coordination network and the Vanderbilt Initiative in Data-Intensive Astrophysics (VIDA). aasjournal
http://arxiv.org/abs/2306.08901v1
20230615070923
A cosmic-ray database update: CRDB v4.1
[ "D. Maurin", "M. Ahlers", "H. Dembinski", "A. Haungs", "P. -S. Mangeard", "F. Melot", "P. Mertsch", "D. Wochele", "J. Wochele" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.IM", "hep-ex", "hep-ph" ]
psf D. Maurin, [email protected] LPSC, Université Grenoble-Alpes, CNRS/IN2P3, 53 avenue des Martyrs, 38026 Grenoble, France Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark Department of Physics, TU Dortmund, Otto-Hahn-Straße 4a, 44227, Dortmund, Germany Institute for Astroparticle Physics (IAP), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany Bartol Research Institute, University of Delaware, Newark, DE 19716, USA Institute for Theoretical Particle Physics and Cosmology (TTK), RWTH Aachen University, 52056 Aachen, Germany The cosmic-ray database, , has been gathering cosmic-ray data for the community since 2013. We present a new release, 4.1, providing many new quantities and data sets, with several improvements made on the code and web interface, and with new visualisation tools. relies on the database management system, and libraries for queries and sorting, and web pages and protocol for displays. A interface enables user queries from command line or scripts. A new (pip-installable) CRDB python library is developed and extensive jupyter notebook examples are provided. This release contains cosmic-ray dipole anisotropy data, high-energy p̅/p upper limits, some unpublished LEE and AESOP lepton time series, many more ultra-high energy data, and a few missing old data sets. It also includes high-precision data from the last three years, in particular the hundreds of thousands AMS-02 and PAMELA data time series (time-dependent plots are now enabled). All these data are shown in a gallery of plots, which can be easily reproduced from the public notebook examples. contains 314902 data points from 487 publications, in 4092 sub-experiments from 126 experiments. 4.1 D. Maurin, M. Ahlers, H. Dembinski et al. A cosmic-ray database updatemailto:[email protected]@lpsc.in2p3.fr: CRDB v4.1 D. Maurin1 M. Ahlers2 H. Dembinski3 A. Haungs4 P.-S. Mangeard5 F. Melot1 P. Mertsch6 D. Wochele4 J. Wochele4 Received / Accepted ============================================================================================================================================== § INTRODUCTION Owing to the quantity and variety of data gathered in cosmic-ray (CR) physics, a central shared database (DB) assuring data quality, completeness, and traceability is an asset for the community. Although the oldest datasets have a historical value mostly, the low-energy data still trace and give a unique perspective on the 11-year Solar cycle <cit.>, and may also be of unforeseen use in the future. The Cosmic-Ray DataBase[<https://lpsc.in2p3.fr/crdb>] () team has been distributing a growing body of CR data since its first public release in 2013 <cit.>. In a recent update, 4.0 <cit.>, existing data on (groups of) ultra-heavy elements (Z>30), upper limits on anti-nuclei (Z≤-2), and a selected sample of ultra-high-energy (UHE) CRs from ground-experiments were included. In 4.0, the DB structure and the submission data format were also revised, and users were provided with a interface to extract both CR data and solar modulation levels (in their own codes and scripts), with overall more flexibility and more keywords to select the data queried. In this release, 4.1, beside uploading data from the last three years (from AMS-02, CALET, DAMPE, PAMELA, etc.), we take advantage of an agreement with our colleagues from the [<https://kcdc.iap.kit.edu/>] DB <cit.> to complete our sample of UHECR data. 
We also add energy-dependent anisotropy data, including and extending those presented in <cit.>. We also correct the meta-data and provide a few unpublished low-energy lepton and positron-fraction data from the LEE, AESOP and AESOP-LITE balloon flights (operated over a 50-year time period). Because an incredibly large body of time-dependent data has been released by the AMS-02 experiment, we provide a new interface to ease the visualisation of these time series; these data are now the most numerous by far in . One of the main novelties of this release is a new standalone python library for the plotting of data, which should further ease their distribution and use by the community at large. We also took the opportunity of this release to fix some mistakes in the data and meta-data, and to improve the code (behind the scenes) and the web interface; the most important changes are documented and available on 's webpage, and briefly described later on. The paper is organised as follows: Sect. <ref> recalls the DB structure and the few changes made in this release; Sect. <ref> presents the web interface and its novelties, and also introduces the new public python library to query and display data (outside of the website); Sect. <ref> highlights the new data added in this version; we conclude in Sect. <ref>. § DATABASE STRUCTURE In , data are separated into two broad categories, namely the data (CR data points and data uncertainties) and the meta-data (data about the data): the latter include the data taking periods, the description of the experiment, links to the associated publications, etc. The DB structure, shown in Fig. <ref>, has only slightly changed since our last release. Its most important features are recalled below, and we use MONOSPACE font to easily identify the DB table names and keys. §.§ Data points and energy axis (DATA table) Data points are described in the DATA table (see Fig. <ref>). Each entry has a unique ID and corresponds to a measured VALUE or upper limit (if the boolean IS_UPPER_LIMIT is set to 1) within an energy bin [E_BIN_L, E_BIN_U] or at the mean energy bin value E_MEAN[If only E_MEAN is provided in the publication, we set E_BIN_L = E_BIN_U = E_MEAN. If both E_BIN_L and E_BIN_U are provided but not E_MEAN, we set E_MEAN=(E_BIN_L×E_BIN_U)^1/2. Finally, some experiments define their last energy bin as all events above a given energy: in that case, we manually set an upper bin value at least 100 times the lower bin value.]. The data point is also associated with a sub-experiment and publication via its SUBEXP_PUBLI_ID key (whose value points at a SUBEXP_PUBLI table entry, see Sect. <ref>). To cover the different energy types provided in the original publications, the energy axis (E-AXIS) of each data point must be set to ETOT, EK, R, EKN, or ETOTN. These types correspond to and are given in units of, respectively, total energy E_tot in GeV, kinetic energy E_k=E_tot-m in GeV, rigidity R=pc/(Ze) in GV, kinetic energy per nucleon E_k/n=E_k/A in GeV/n, and total energy per nucleon E_tot/n=E_tot/A in GeV/A. For the data, enables asymmetric statistical (VALUE_ERRSTAT_L and VALUE_ERRSTAT_U) and systematic (VALUE_ERRSYST_L and VALUE_ERRSYST_U) uncertainties[For old data, the distinction was usually not made between the two, and because old measurements were mostly limited by their statistics, the quoted uncertainties in the publications are ascribed to VALUE_ERRSTAT_L and VALUE_ERRSTAT_U.].
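To make these definitions concrete, the short sketch below converts kinetic energy per nucleon to rigidity for an isotope and applies the geometric-mean convention for E_MEAN. The function names and the approximate nucleon-mass constant are our own illustrative choices, not part of the CRDB code base.

```python
import numpy as np

AMU_GEV = 0.9315  # approximate atomic mass unit in GeV (assumption for illustration)

def ekn_to_rigidity(ekn, A, Z, m=None):
    """Kinetic energy per nucleon E_k/n [GeV/n] -> rigidity R [GV] for an isotope (A, Z),
    using the definitions above: E_k = A * E_k/n, E_tot = E_k + m, R = pc/(Ze)."""
    m = A * AMU_GEV if m is None else m          # isotope mass in GeV (approximation)
    e_tot = A * np.asarray(ekn, dtype=float) + m
    pc = np.sqrt(e_tot**2 - m**2)
    return pc / Z

def e_mean(e_bin_l, e_bin_u):
    """Geometric-mean convention used when a publication provides only the bin edges."""
    return np.sqrt(e_bin_l * e_bin_u)
```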
§.§ Quantities and conversions (CR_QUANTITY table) The measured quantity is either a single CR quantity NUM_ID or a ratio of two CR quantities NUM_ID/DEN_ID, where NUM_ID and DEN_ID point to entries in the CR_QUANTITY table. These entries are identified by an ID (set manually), a SYMBOL, and a NAME. The keys A, Z, and M_AMU (for the atomic mass number, charge, and mass in a.m.u) are non-null for isotopes, only the key Z can be filled for elements, and all keys are set to zero for groups of elements (or compound quantities) and dipole anisotropy data. In queries, the data conversion from one energy axis to another is enabled (see Table A.1 in ). The conversion is exact for individual fluxes of CR isotopes or leptons and for ratios of leptons, and also for p̅/p (this last conversion was not implemented in the previous release), but it is impossible for generic ratios, compound quantities, or anisotropy data. Nevertheless, an approximate conversion can still be enforced for fluxes of elements (or group of elements) if these quantities have a CR isotope proxy; this proxy is enabled via the PROXY_ID key in the CR_QUANTITY table (this key was previously in a separate and redundant table that we removed in this release). §.§ Meta-data for experiments and modulation level (EXP, SUBEXP, and SUBEXP_IMAGE tables) Definition and description. CR data are taken from experiments described in the EXP table (see Fig. <ref>). Each experiment has a TYPE (balloon, ground, or space), a unique ID (set internally in the DB), a name (EXPNAME), a starting year (DATE), and optionally a website (HTML); we stress that the experiment name is mainly used to better regroup and sort sub-experiments in the Experiments/Data website tab. Sub-experiments (SUBEXP table) have an ID and are attached to a single experiment (EXP_ID). They enable to tag and distinguish, for a same experiment: (i) data obtained from different data taking periods; (ii) data taken from distinct sub-detectors or reconstructed from different analysis types; (iii) data obtained using external third-party models or different assumptions. Sub-experiments have a NAME[In , we decided that the format of this name should be a concatenation of: (i) the experiment name EXPNAME (e.g. PAMELA, NUCLEON); (ii) if necessary, a hyphen-separated sub-detector characteristic (e.g., PAMELA-CALO) or specific technique used (e.g. NUCLEON-KLEM); (iii) data taking periods in parenthesis; (iv) if relevant, the Monte Carlo generator used to analyse the data (e.g. IceTop SIBYLL-2.1 for UHECR data). The two exceptions to form the sub-experiment names are for the case of a combined analysis, i.e. names based on the concatenation of the two experiment names, e.g. IceCube+IceTop (2010/06-2013/05), and unnamed balloons (concatenation of Balloon and their flight dates). For the dates, the chosen format is YYYY/MM (shortened to YYYY if the month is unknown), with a single date for a shorter-than-a-month data taking period, e.g. Balloon (1966/05), or two dates otherwise, e.g. IMP7 (1973/05-1973/08); if the month is unknown, we only quote the year (or range of years).], a short DESCRIPTION (detector or detection technique), additional INFO (e.g. location for balloon flights, GPS coordinates for ground-based detectors, etc.), and an IMAGE_ID (see next). For each sub-experiment, we also provide a single value (set to zero by default) for a possible energy-scale relative uncertainty (ESCALE_RELERR). In this release, we also added the new SUBEXP_IMAGE table (see Fig. <ref>). 
Previously, the detector images were kept in a separate directory with file names based on the EXPNAME and or sub-experiment NAME keys. In the new table, we have the image itself (DATA key) with its unique ID key, along with a brief description if needed (DESCRIPTION key). This allows to avoid storing duplicate images and makes checks on the completeness of the presence of images for all sub-experiments easier. Solar modulation level. Especially important for the interpretation of low-energy data (below a few hundreds of GeV), we must provide (i) the DISTANCE to the Sun of the sub-experiment—almost all experiments are at 1 a.u., but a few satellites (Ulysses and Voyager) have also taken data at different position inside and outside the Solar cavity— and (ii) the exact list of start-stop DATES of the data taking periods[The format is YYYY/MM/DD:HHMMSS-YYY/MM/DD:HHMMSS, or a semi-column separated list of similarly formatted time periods if necessary. If the exact time is unavailable, we enforce HHMMSS=000000 for the start date and HHMMSS=235959 for the stop date. If the day is unknown, we enforce DD=01 (start) or last day of the month (stop), and if the month is unknown, we enforce MM=01 (start) and MM=12 (stop).]. These two pieces of information allow to calculate and fill SMALL_PHI, the average modulation level over the corresponding data taking periods, in the force-field approximation <cit.>. Actually, SMALL_PHI contains different estimates of ⟨ϕ(t)⟩, all calculated from the same neutron monitor data[<http://www01.nmdb.eu>], but based on slightly different modellings: the values tagged [Uso05] and [Uso17] are based on monthly average public values[<http://cosmicrays.oulu.fi/phi>] from <cit.>, while those tagged [Ghe17] are based on daily average values from <cit.>. In , all queried data are returned with their calculated SMALL_PHI value, but users are obviously free to discard or re-calculate it—by default, the returned values are [Ghe17], which can be also calculated for any time period from the Solar modulation tab (see Sect. <ref>). §.§ Meta-data for publications (PUBLI table) Almost all data in are taken from peer-reviewed publications. The main exceptions are data from balloon flights before the 1990's, which were published in the proceedings of the biennial International Cosmic-Ray Conference only. Each publication is stored in the PUBLI table (see Fig. <ref>) with a unique ID (set internally) and an HTML key, taken to be the publication ADS (Astrophysics Data System) identifier (e.g. https://ui.adsabs.harvard.edu/abs/2014A A...569A..32M2014A&A...569A..32M). This identifier allows to retrieve and fill in a standardised manner the REF and BIBTEX keys via the ADS API[<https://github.com/adsabs/adsabs-dev-api>]. The original publications are stored in (for the administrators) but cannot be made publicly available because of publication rights. Because some data sets are sometimes re-analysed and reported in a new publication, the obsolete one has its SUPERSEDED_BY key set to ID of the new one (it is left empty if it is not superseded). This allows us to enforce that queries to always return the most recent data, discarding the deprecated ones. We nevertheless keep track of these superseded data in the `Experiments/Data' tab (see Sect. <ref>), where old and new publications are shown. §.§ Tying data and meta-data (SUBEXP_PUBLI table) The full description of the data requires the data themselves, the sub-experiment that measured them, and the publication where they appeared. 
The SUBEXP_PUBLI bridge table (see Fig. <ref>) makes it possible to handle situations where several sub-experiments are reported in the same publication. Each data set, with a unique ID, is tied to a sub-experiment (SUBEXP_ID) and a publication (PUBLI_ID). In addition, in this table, we keep track of the date at which each dataset was uploaded in (DATE_UPLOAD), and also of all CR QUANTITIES whose data were provided in this publication. While both these keys are unused in data queries, they are useful for maintenance and cross-checks of the DB. § WEB INTERFACE AND QUERIES runs on free open-source software with a classical solution: operating system, HTTP server, database, and scripting language. The server is hosted at the LPSC laboratory, and was recently changed to have a more recent version of the operating system, the DB, and the version. The DB RAM was extended from 512 MB to 2048 MB to handle the larger requests from the newly added time-series data (see Sect. <ref>). The website is organised in tabs providing different entry points to explore the DB data and meta-data. The webpages use (asynchronous and ) web development techniques for efficiency and speed. In addition to the few improvements made on the existing website tabs, we added two new ones in this release (see Sect. <ref>). To query, sort, and show the DB content, the web interface relies on , -ui, .cluetip, and . There are two ways for users to query data: either from the Data extraction tab (see below) or from a direct command-line call (bypassing the website) via the interface (also see below). The latter functionality has been fully exploited in this release, with the development of a new dedicated python library. This library is described and used to generate a gallery of plots in Sect. <ref>. §.§ Web pages: content and novelties We briefly describe below the content and noteworthy improvements made on the tabs. For this release, we also added a new tab to list a few caveats and tips related to the data preparation and transformations. * Welcome tab: entry point of the website, where the DB content, tools, people involved, code status, etc. are highlighted. In this release, we also added a gallery of plots to advertise the variety of data in . * Caveats/Tips tab: there are a few subtleties in the way the data (and meta-data) are handled in . Indeed, at the collection stage, the information on the data is sometimes partial, and somewhat subjective choices need to be made to be able to implement them nonetheless. Then, at the query stage, combinations and conversions are enabled, with some degree of approximation as well. Users probably do not pay a lot of attention to these details, and this is probably fine most of the time. Whereas the details and caveats about these procedures are made explicit in the publications <cit.>, the most relevant ones are gathered here in one place. This should help users identify data for which going back to the original publication is necessary. * Data extraction tab: queries of user-selected CR quantities with various options (sub-experiment names, dates, energy unit, etc.). The retrieved data include the ones matching exactly the query but also, if selected, extra sets based on energy conversions (Table A.1 of ) and data combinations (App. A of ); we added in this release the trivial but forgotten transformation rule to get Y/X from data published as X/Y.
The data retrieved are then plotted and listed in a pop-up window and can be downloaded in various formats: in this release we added an extra option, `csv (as import)', enabling to retrieve the data and all their meta-data (format similar to the one described in the Submit data tab, see below). We also added a tick box for the `Refine search criteria' box in the Data extraction tab, to display the data versus time instead of energy. * Experiments/Data tab: sorted list of experiments with their associated sub-experiments, including in particular a picture of the detector, their associated publications and quantities measured. In this release, to improve the sorting and readability of the numerous unnamed balloon flight series (i.e. balloon launched multiple times over years by the same team and analysed in several publications), we regrouped them into fewer and more informative names, e.g. Nuclear emulsions 1950-1968, Muon Telescope 1957-1995, etc. * REST/CRDB.py tab: details how to query from a stand-alone script, with the same options as the ones provided in the Data extraction tab (datasets retrieved from the website or from the interface with the same selection and options are the same). We also provide a simple command-line example (to run in a terminal) using curl. This capability is taken advantage of and extended in this release thanks to a new standalone python library to retrieve and display data, for instance from a python notebook, see Sect. <ref>). * Solar modulation tab: gives access, for any time interval, to the force-field modulation level (see Sect. <ref>). Behind the scene, a cron scheduler downloads NM data daily from NMDB[<http://www01.nmdb.eu>]. It also calculates the associated ϕ_ FF, whose values can be retrieved for a selected time period and resolution (from 10 minute up to a month), either directly from this tab, or from a interface. In this release, we fixed several minor bugs (as listed on the website), and more importantly, we fixed the broken REST interface and the daily update[All missing ϕ_ FF values were completed, and we also recalculated modulation levels, starting from 2015, for the THULE station (because of updated NM values in NMDB) and ROME station (using the correct number of NM tubes, which changed in 2017).]. * Submit data tab: how to format and send a csv file to . * Useful links tab: online resources related to CR data. * Admin tab: maintenance tools to check broken or inconsistent entries and missing meta-data, detailed procedure to upload data in the DB. This tab is restricted to authenticated users (i.e. maintainers). §.§ Python access to CRDB (and notebook) The CRDB provides a REST interface, which can be used from any programming language to automate downloading and processing data in scripts and programs. A tutorial on how to do this is available[<https://github.com/crdb-project/tutorial>]. Since Python is the dominant scripting language for data processing, we further provide a ready-made solution for Python users that simplifies and standardises queries from scripts. Users of this library do not need to learn the REST API, this is done internally by the library. The corresponding Python package called crdb[<https://github.com/crdb-project/crdb>] can be downloaded with the standard tool pip from the Python Package Index[<https://pypi.org/project/crdb>]. 
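As a quick illustration, a query from a script might look like the sketch below. The argument and field names shown are assumptions for illustration; `help(crdb.query)` and the tutorial notebook are the authoritative references for the actual signature.

```python
import crdb

# Request boron-to-carbon data as a function of kinetic energy per nucleon.
# The keyword name below mirrors the web-interface options but is an assumption --
# check help(crdb.query) before relying on it.
tab = crdb.query("B/C", energy_type="EKN")

# The result is a structured NumPy array; fields can be listed and accessed by name.
print(tab.dtype.names)
```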
The main function is crdb.query, which performs a query to the database through keyword arguments, which are internally validated so that user errors are caught early and clear error messages are returned. The tabular output of a query is transformed by this function into a structured Numpy array <cit.>, which allows for efficient fast processing in Python. Each query is automatically cached to disk for 30 days, which accelerates repeated calls to crdb.query and reduces the load on the server; this often occurs during the development of a script or program. Further utility functions allow users to easily generate lists of citations for the data sets they queried from the DB. All functions are well documented, the documentation can be accessed with Python's internal help() command. The Python package also provides a command-line interface, which allows users to perform queries and store the results in one of the ASCII formats supported by the CRDB data extraction system. In this case, the query is specified using command-line arguments, that mirror those of crdb.query. Example code on how to make standard plots in Python can be found in the gallery, and we show in Figs. <ref> and <ref> a few plots illustrating the variety, coverage, and completeness of 's data. More plots are shown in the next section, and all of them are available from 's public gallery notebook[<https://github.com/crdb-project/tutorial/blob/main/gallery.ipynb>]. § NEW DATASETS IN 4.1 In addition to regular data updated since the last release (Sect. <ref>), the content of has evolved in several directions. In this release, we (i) add dipolar anisotropy data (Sect. <ref>); (ii) take advantage of a partnership with to gradually move from limited sample to completeness of UHECR data (Sect. <ref>); (iii) include high-energy upper limits on antiproton fluxes from ground experiments (Sect. <ref>); (iv) correct and complete low-energy lepton data from the LEE, AESOP, and AESOP-Lite balloons flown over 50 years (Sect. <ref>); (v) expand time series data thanks to the recently released AMS-02 daily and PAMELA monthly data (Sect. <ref>). §.§ Data uploaded since 4.0 Many data from AMS-02, CALET, DAMPE, etc. have been published since our last release. These data sets should have ideally been uploaded in shortly after their publication, but were only prepared for this release. We also took the opportunity of this release to upload a few old datasets that were not yet in . Rather than a detailed and cumbersome description of all these new data sets, which are listed in Table <ref>, we prefer to highlight below some of their most salient features. To start with, the first 7 years of AMS-02 data <cit.>, along with other publications by the AMS collaboration <cit.>, all uploaded in this release, now provide the most comprehensive set of data from a single experiment. These data are in the GV to TV rigidity range, and correspond to fluxes and ratios of leptons, antiprotons, and nuclei from H to Si, plus Fe. Moreover, in addition to the above AMS-02 data, we have uploaded the recent CALET <cit.>, DAMPE <cit.>, ISS-CREAM <cit.>, and NUCLEON <cit.> data, which provide the most precise set of direct measurement data in the TeV domain and above; these data are key to investigate possible breaks and features in the spectra, and the consistency between direct and indirect measurement data. Some of the new data sets uploaded also explore in a unique way the composition of ultra-heavy CRs (UHCR). 
Indeed, recent ACE-CRIS data <cit.> unveil the isotopic content of CR elements Z=30-38, complementing the elemental fractions measured by Tiger and SuperTiger (already in ); a further extension to the range 41≤ Z≤56 should be available soon by SuperTiger <cit.>. For even heavier (and rarer) elements, very few experiments have provided data so far. In addition to Ariel6, HEAO3-HNE, UHCRE-LDEF, and Trek data (already in ), we added the skylab data <cit.>. The last piece of UHCR data that we decided to add in this release are those from the OLIMPIYA experiment. The latter uses olivine crystals contained in stony-iron meteorites (pallasites) as CR detectors. At variance with satellite experiments that provide measurements of UHCR GCRs accumulated over an exposure time of a few years, the OLIMPIYA experiment provides measurements of GCRs accumulated over up to hundreds of Myr—these two complementary techniques allow to have a glimpse on the GCR time evolution. The OLIMPIYA data uploaded in this release[We stress that, owing to constraints from the DB structure and display, we have to define a data taking period, a position in the Solar system, and an energy for these data, although it is inadequate: the former is set to the publication date, the position to 1 au, and the energy to 1.5 GeV/n (as set for the other UHCR experiments, see ), i.e. a value at which GCR fluxes are maximal and are likely to be responsible for most of the tracks.] are taken from <cit.> that supersedes a previous analysis presented in <cit.>[Between these two publications, several effects that could affect the relative yield of nuclei registered have been investigated and accounted for: anomalies near the meteorite edge related to the the annealing of the measured tracks <cit.>; fragmentation in the meteorite which explains all events in the 84≤ Z≤ 89 range, but has no impact for the other charges <cit.>.]. §.§ Anisotropy data Ground-based detectors with high event statistics allow the study of anisotropies in the arrival directions of CRs. Of particular interest is here the dipole anisotropy predicted by diffusion theory, that allows us to study the nearby CR source distribution and diffuse CR transport in our local magnetic environment <cit.>. While the true dipole anisotropy is represented by an amplitude and two phases, the data-driven reconstruction method of ground-based observatories allows only the reconstruction of the projection of the dipole vector onto the equatorial plane. Conventionally, this projection is characterised by the (projected component of the) amplitude and the phase in right ascension. These new dipole anisotropy data are indicated in the DB by the entries DipoleAmplitude and DipolePhase; we have chosen a convention where DipolePhase∈ [-180^∘, 180^∘]. The dipole data in terms of total energy ETOT is shown in Fig. <ref>. Note that the limited statistics of CR experiments in the PeV–EeV energy region has so far only yielded upper limits on the dipole anisotropy. In the DB, we indicate this by providing both the best amplitude and its upper limit as separate entries. As visible in Fig. <ref>, the dipole amplitude and phase data from different observatories can show strong deviations beyond statistical uncertainties. This is related to hidden (and often unquantified) systematic effects, corresponding to the partial sky coverage of experiments and reconstruction method. 
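For reference, the relation between the true dipole and the reconstructable quantity can be stated explicitly; this is the standard geometric statement and the notation may differ from the individual analyses cited: a dipole of amplitude d pointing towards equatorial coordinates (α_d, δ_d) yields a reconstructed (projected) amplitude A_1 = d cos δ_d and a phase φ_1 = α_d, so the quoted amplitudes are lower limits on the true dipole amplitude whenever δ_d ≠ 0.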
Furthermore, experimental collaborations oftentimes provide a number of updates of their anisotropy studies as the event statistics accumulate. We have chosen to include all the data publicly available, but note that the later data sets are usually meant to supersede the earlier ones. Finally, note that some of the (especially older) data have been extracted from publications, which give rather limited information on the methodology used. We have chosen to include these at face value, but recommend exercising caution when using these data for quantitative studies. The experiments and associated references for all these data are gathered in Table <ref>. §.§ UHECR data from Considering the vast number of academic databases and search engines for locating and accessing published scientific data, unified access to published datasets and spectra is still in the early stages. This is due to the large variety of experiments and thus the large variety of measured data. In cooperation with , the `KASCADE Cosmic-ray Data Centre' () is taking a step towards simplification, by embedding the UHECR data from , i.e. data from extensive air shower experiments, into . The advantage of such an extensive collection of UHECR data is that data from other experiments can be obtained relatively quickly. is already a demonstrator and partner of PUNCH4NFDI[<https://www.punch4nfdi.de/>], the consortium of particle, astroparticle, astro-, hadron and nuclear physics within the German National Research Data Infrastructure, NFDI, which aims to unify the methodical approach of open data in this field. The is a web-based interface where initially the scientific data from the completed air-shower experiment KASCADE-Grande were made available for the astroparticle community as well as for the interested public. Besides a DataShop to download the reconstructed data of KASCADE-Grande and the meta-data, offers more than 100 cosmic ray spectra from about 25 different ground-based high-energy CR experiments published between 1984 and 2021 for download. The data sets available cover an energy range from about 10^12 eV to more than 10^20 eV for all-particle spectra (keyword AllParticles in ) as well as for mass groups like p and He up to Fe, or heavy and light, respectively, derived from the unfolding procedure for different high-energy interaction models like QGSJet, EPOS and SIBYLL, mostly embedded in the CORSIKA simulation package. CORSIKA[< https://www.iap.kit.edu/corsika/>] (COsmic Ray event SImulation for KAscade) was written especially for KASCADE and has since been extended to become the world’s standard simulation package in the field of cosmic ray air shower simulations. While the KASCADE-Grande experimental data in are also accessible via an API, the spectra points and metadata, stored in a postgres database, can only be selected and displayed on the website after registration. Thus, a partnership with CRDB was set up with the aim of creating a basis for this data exchange and of providing the community with a common interface to this merged spectra data. The data sets are now being reformatted to meet the requirements of , to supplement its very extensive content with data from ground-based air shower experiments. The spectra uploaded on at the time of this release are listed in Table <ref>; they represent ∼ 25% of the full data being prepared, and a sample of these data can be seen in Fig. <ref>.
To match the requirements of UHECR measurements, the data quantity list DATA_QTY had to be extended by two more groups, the He-C-group and the Si-Fe-group. To find out more about the real meaning of the particle spectra like helium, oxygen and so on, their mixtures as well as the mixtures of different high-energy interaction models, users should refer to the original papers. §.§ Upper limit on high-energy p̅/p With the angular resolution of ground cosmic-ray detectors reaching below the degree level in the 90's, it became possible to observe a deficit of events from the direction of the Moon or the Sun (∼ 0.5^∘): the Moon or Sun shadow technique was used first to calibrate their angular resolution and pointing accuracy. Actually, the position of the shadow is offset from the true location of the blocking bodies owing to the deflection of cosmic rays in the geomagnetic field, with the shadow shifted westward (resp. eastward) for positively (resp. negatively) charged particles. This allowed several experiments to set upper limits on the p̅/p ratio above TeV energies <cit.>. These upper limits were added in , along with the older upper limits obtained from the observed charged ratio of muons <cit.>. These new datasets are shown in Fig. <ref> and listed in Table <ref>. §.§ LEE, AESOP, and AESOP-Lite balloon flights From 1968 to 2011, the LEE (Low Energy Electrons) balloon-borne instrument <cit.> was launched over 35 times. LEE provided the longest series of CR electron measurements (e^-+e^+) over a time period that covers about four solar cycles. This data is particularly relevant to the study of the solar modulation of electrons with energies up to about 20 GeV. In 4.1, we reorganized the existing LEE data from 1968 to 1994. Data points taken from figures were updated with the actual values when private communication with the authors was possible. Data post-1994 were also added to the database. Indeed, the spectra for the years 1997 to 2000 were never fully published. However, flight data were analyzed using the same method as that outlined in <cit.>, and the spectrum values at 1.2 GeV only were published in <cit.>. The full spectra for these years were provided by the authors (Paul Evenson, 2023) and uploaded in . These data are shown in the top panel of Fig. <ref> along with other measurements from experiments at similar energies. We also show on this plot times series of He (second panel), NM count rates (third panel), and Solar modulation values calculated from these count rates (fourth panel). From 1994 to 2011, the AESOP (Anti-Electron Sub Orbital Payload) balloon-borne instrument <cit.> flew at multiple occasions with the primary objective to study the charge-sign dependence of the solar modulation of electrons from a few hundreds MeV to a few GeV. In 4.1, we reorganized the existing AESOP e^+/(e^-+e^+) data and updated the 1994 flight (private communication with the author John Clem, 2023). The AESOP-Lite apparatus is the successor of LEE and AESOP. Its primary objectives are to search for the origin of low-energy electrons in the electron spectrum between 20-300MeV, and to provide a baseline electron spectrum at 1 au for the measurements of the Voyager probes currently transmitting data from outside the heliosphere. The e^-, e^+, and e^+/(e^-+e^+) data from the AESOP-Lite's maiden flight from Sweden in 2018 <cit.> were added to ; future data will be added too. The metadata of all these balloon flights were updated using information from the original publications. 
When not available, the information from the stratospheric balloon flight catalogue StratoCat[<https://stratocat.com.ar/indexe.html>] was used. The list of the balloon flight names as encoded in along with the associated publications are listed in Table <ref>. §.§ AMS-02 and PAMELA time series In previous releases, a few time series were already included: yearly averaged (1994-2014) proton fluxes from EPHIN <cit.>, monthly or Carrington rotation average (2006-2014) proton fluxes from PAMELA <cit.>, and 6 month average (2006-2009) electron fluxes from PAMELA <cit.>. Thanks to its large acceptance and high statistics, AMS-02 was able, for the first time, to provide daily averaged fluxes of H, He, and He/H from 2011 to 2019 <cit.>, and e^- from 2011 to 2021 <cit.>: these data are now the dominant body of data in , with about 200 000 data points over ∼ 3000 days. We also added the recently published He time series of PAMELA from 2006 to 2013. Owing to its smaller acceptance and statistics, the data were averaged over one Carrington rotation (∼ 1 month) in the first three years <cit.>, and over three Carrington rotations later because of a random failure of a few front-end chips in the tracking system [...] particularly significant after 2009 <cit.>; this corresponds to ∼ 3000 new data points in (in E_k/n and R), as retrieved from the CRDB@ASI database[<https://tools.ssdc.asi.it/CosmicRays/>] <cit.>. We also added a few positron fraction data points taken from three different time periods <cit.>: the latter paper also provides 3-month averages (2006-2016) of the e^+/e^- ratio, but normalised to the unspecified 2006 value, so we did not add them in . To better visualise these data, we added a new query option in the web interface to plot data as a function of time (instead of energy). The direct benefit is to enable showing the evolution of data from similar energy bands over long time periods. This is illustrated with Fig. <ref>, available from the gallery notebookfoot:gallery. § CONCLUSIONS AND FUTURE RELEASES We have presented in this paper 4.1, an update of the CR database hosted at LPSC. On the technical side, this update involved a migration of server and a slight simplification of the DB structure. On the code side, a few minor bugs have been fixed, the queried data can now be returned in a more complete csv format (which includes all meta-data), and we fixed a missing combination rule for the data. On the web interface side, we added a new plotting capability to display CRs as a function of time, and added two new tabs: one lists all caveats related to the preparation of the data uploaded in and to the (sometimes approximate) transformation rules made on the queried data; the other provides a gallery of plots advertising and illustrating the diversity of data. Actually, this gallery and many other plots can be generated from our new public python library, and notebook examples are provided in the git pagefoot:gallery. On the content side, we enlarged the scope and content of , with the addition of dipole anisotropy data, high-energy upper limits on p̅, a large number of UHECR datasets, and also time series data. The latter include recently released AMS-02 daily and PAMELA monthly data, but also yearly data from LEE/AESOP/AESOP-Lite balloons taken over a 50 year period. We also updated data with all the GCR data published in the last three years, also adding a couple of older data that had slipped our attention until now. 
The path to future developments is not very clear and also depends on the feedback from the community. Indeed, now accounts for most galactic and extragalactic CR data in terms of quantities that can be cast as 1D data vectors (as opposed to skymaps or higher-dimension datacubes). Missing datasets should consist mostly of old time series from satellite experiments, which are both difficult to track and retrieve from the publications: owners and authors of such datasets are welcome to get in touch with us. If need be, other quantities related to UHECR data could also be added in the future, like ⟨ln A⟩. In any case, looking at present and future high-precision CR data, we stress that the current format to store uncertainties in is already limited and should probably be improved at some time in the future. Indeed, data from the last generation of CR detectors already come with broken-down contributions from various systematics, whereas only the total systematics can be stored in . This issue will worsen when covariance matrix of uncertainties will start to be released as well (as is already the case for instance for the most recent Pierre Auger data). The team will continue uploading newly published CR data, but we also encourage collaborations to prepare their data ( submission format) if they wish them to quickly be distributed via . Comments, questions, suggestions, and corrections on are welcome and are to be sent at mailto:[email protected]@lpsc.in2p3.fr. We warmly thank the continuous support and feedback from many of our colleagues, who point out typos and mismatches in . We also thank the AMS-02 collaboration for providing their data as csv tables (<https://ams02.space/publications>), which greatly eases the preparation and upload of these data in . This research has made use of NASA’s Astrophysics Data System Bibliographic Services. This work was partially supported by NASA award 80NSSC19K0746. We acknowledge the NMDB database (<www.nmdb.eu>), founded under the European Union's FP7 programme (contract no. 213007) for providing data; NM data from Oulu are provided by the Sodankyla Geophysical Observatory (see also <https://cosmicrays.oulu.fi/readme.html>) and those from Thule by the University of Delaware Department of Physics and Astronomy and the Bartol Research Institute. aa
http://arxiv.org/abs/2306.05700v2
20230609063937
Finite-Time Analysis of Minimax Q-Learning for Two-Player Zero-Sum Markov Games: Switching System Approach
[ "Donghwan Lee" ]
eess.SY
[ "eess.SY", "cs.GT", "cs.LG", "cs.SY" ]
The objective of this paper is to investigate the finite-time analysis of a Q-learning algorithm applied to two-player zero-sum Markov games. Specifically, we establish a finite-time analysis of both the minimax Q-learning algorithm and the corresponding value iteration method. To enhance the analysis of both value iteration and Q-learning, we employ the switching system model of minimax Q-learning and the associated value iteration. This approach provides further insights into minimax Q-learning and facilitates a more straightforward and insightful convergence analysis. We anticipate that the introduction of these additional insights has the potential to uncover novel connections and foster collaboration between concepts in the fields of control theory and reinforcement learning communities. Reinforcement learning, Q-learning, finite-time analysis, convergence, Markov game, switching system § INTRODUCTION Reinforcement learning (RL) addresses the problem of optimal sequential decision-making for unknown Markov decision processes through experiences <cit.>. Recent successes of RL algorithms surpassing human performance in various challenging tasks have sparked a surge of interest in both the theoretical and experimental aspects of RL algorithms <cit.>. Among many others, Q-learning <cit.> stands out as one of the most fundamental and popular RL algorithms, with extensive studies conducted on its convergence over the past decades. Classical analysis primarily focuses on asymptotic convergence <cit.>. However, recent advancements have been made in finite-time convergence analysis <cit.>, which quantifies the speed at which iterations progress towards the solution. Most existing results consider Q-learning dynamics as nonlinear stochastic approximations <cit.> and utilize the contraction property of the Bellman equation. Recently, <cit.> proposed a novel perspective on Q-learning based on continuous-time or discrete-time switching system models <cit.> and established asymptotic or finite-time analysis using tools from control theory <cit.>. This switching system perspective captures unique characteristics of Q-learning dynamics and enables the conversion of finite-time convergence analysis into stability analysis of dynamic control systems, which will also play an important role in developing main results in this paper. In this paper, the main goal is to study Q-learning algorithms for the two-player zero-sum Markov game <cit.>, which is a more general Markov decision process <cit.>, where two decision-making agents coexist and compete with each other. There exist two categories of the two-player Markov games, the alternating two-player Markov game and the simultaneous two-player Markov game. In the alternating two-player Markov game, two agents engaged in decision making take turns in selecting actions to maximize and minimize cumulative discounted rewards (referred to as “return”), respectively. On the other hand, in the simultaneous two-player Markov game, the two agents take actions simultaneously to maximize and minimize the return. Hereafter, these two agents will be called the user and the adversary. The user's primary goal entails maximizing the return, while the adversary strives to hinder the user's progress by minimizing the return. 
Specifically, in the alternating Markov game, the user initiates its decision at each time step without knowledge of the adversary's action. Afterwards, the adversary can observe the user's action and subsequently take its action based on the observed user's action. Consequently, the adversary holds more advantages over the user in the alternating Markov game. On the other hand, the two agents have fair chances in the simultaneous Markov game. The objective of the Markov game is to determine the pair (π^*,μ^*) of user's optimal policy, π^*, and adversary's optimal policy μ^*. It is worth noting that although two-player Markov games represent a relatively restricted category within the realm of multi-agent environments, they possess individual significance. Moreover, the two-player Markov game includes Markov decision processes as a special case and serve as an initial stage for delving into the study of more general multi-agent Markov games. The main objective of this paper is to establish a finite-time analysis of the minimax Q-learning method introduced in <cit.> for solving two-player zero-sum Markov games. A comprehensive finite-time error analysis of minimax Q-learning as well as the corresponding value iteration is established. To facilitate the analysis of both the value iteration and Q-learning, we employ the switching system models proposed in <cit.>. Contributions The main contributions of this paper can be summarized as follows: * This paper presents a finite-time analysis of minimax Q-learning, which has not been explored in the existing literature to the best of the author's knowledge. It is worth noting that the available existing results in the literature primarily focus on aspects such as asymptotic convergence <cit.> or convergence of modified algorithms <cit.>. * This paper reveals new prospects for the recently developed switching system framework <cit.>, which provides additional insights into minimax Q-learning. It also provides a conceptually simpler and more insightful convergence analysis of minimax Q-learning. We expect that the introduction of this additional insight holds the potential to unveil new connections and foster synergy among notions in control and RL communities. Furthermore, it can present additional opportunities for the development and analysis of new or other RL algorithms. It is important to emphasize that although the switching system model presented in <cit.> has been utilized as a foundational tool, the main analysis and specific proof techniques employed in this work substantially differ from those in <cit.>. In order to establish our central proof, we have encountered challenges that are far from trivial to overcome. To be more precise, the switching system model presented in <cit.> permits a linear comparison system that serves as a lower bound for the original system, playing a pivotal role in the finite-time analysis. Conversely, the switching system model employed in this study exhibits a distinct structure, utilizing the max-min operator in lieu of the standard max operator found in traditional Q-learning. Consequently, the switching system under consideration in this work does not admit a linear comparison system. Hence, the application of similar techniques as those employed in <cit.> is not feasible. Moreover, it is worth noting that this paper only addresses an i.i.d. observation model, where constant step-sizes are employed to simplify the overall analysis. The i.i.d. 
observation model is commonly utilized in the existing literature, serving as a standard setting <cit.>. Moreover, the proposed analysis can be extended to encompass the more intricate Markovian observation model, utilizing techniques presented in previous works such as <cit.>. However, extending the analysis to the Markovian observation scenarios can considerably complicate the main analysis and potentially obscure the fundamental ideas and insights of our proposed approach. For this reason, in the interest of maintaining clarity and coherence, this paper will not cover the Markovian observation scenarios. Related works The seminal work by Littman <cit.> introduced minimax Q-learning, a Q-learning algorithm designed for zero-sum two-player Markov games, which serves as the main focus of our study. Subsequently, Littman and Szepesvari <cit.> established the asymptotic convergence of minimax Q-learning towards the optimal value derived from game theory. Hu and Wellman <cit.> extended minimax Q-learning to multi-agent environments, presenting Nash Q-learning, a variant that addresses general-sum games by incorporating Nash equilibrium computation within the learning rule. Bowling <cit.> elucidated the convergence conditions of the algorithm, while Hu and Wellman <cit.> examined its convergence behavior and highlighted the restrictive nature of the convergence assumptions. Littman et. al. <cit.> introduced friend-or-foe Q-learning for general-sum Markov games, demonstrating stronger convergence properties compared to Nash Q-learning. Moreover, Littman <cit.> further examined convergence of Nash Q-learning and its behavior under different environments. Lagoudakis and Parr <cit.> studied a value iteration version of minimax Q-learning and proposed a least-squares policy iteration algorithm to solve two-player Markov games. In recent research, Diddigi et al. <cit.> presented a novel generalized minimax Q-learning algorithm and provided a proof of its asymptotic convergence utilizing stochastic approximation techniques under the assumption of iterates' boundedness. Fan et al. <cit.> extended the minimax Q-learning algorithm by incorporating deep Q-learning techniques <cit.> and established a finite-time error bound. Zhu and Zhao <cit.> also employed deep Q-learning techniques in the context of minimax Q-learning and demonstrated its asymptotic convergence in tabular learning scenarios. Additionally, several notable studies on Markov games, while not directly focused on minimax Q-learning, offer valuable insights. Srinivasan et al. <cit.> and Perolat et al. <cit.> investigated actor-critic algorithms tailored for multi-agent Markov games. Perolat et al. <cit.> delved into an approximate dynamic programming framework for two-player zero-sum Markov games. Furthermore, Perolat et al. <cit.> explored the generalization of various non-stationary RL algorithms and provide theoretical analyses. Wei et al. <cit.> examined online reinforcement learning algorithms designed for average-reward two-player Markov games. Lastly, Zhang et al. <cit.> presented a comprehensive survey on the multi-agent Markov game and multi-agent reinforcement learning. It is important to acknowledge that although significant progress has been made through these prior works over the years, the existing literature primarily focuses on aspects such as asymptotic convergence <cit.> or the convergence of modified algorithms <cit.>. 
To the authors' best knowledge, a rigorous finite-time convergence analysis of minimax Q-learning has yet to be thoroughly investigated. § PRELIMINARIES AND PROBLEM FORMULATION §.§ Notation The adopted notation is as follows: ℝ: set of real numbers; ℝ^n: n-dimensional Euclidean space; ℝ^n × m: set of all n × m real matrices; A^T: transpose of matrix A; A ≻ 0 (A ≺ 0, A≽ 0, and A≼ 0, respectively): symmetric positive definite (negative definite, positive semi-definite, and negative semi-definite, respectively) matrix A; I: identity matrix with appropriate dimensions; λ_min(A) and λ_max(A) for any symmetric matrix A: the minimum and maximum eigenvalues of A; | S|: cardinality of a finite set S; tr(A): trace of any matrix A; A ⊗ B: Kronecker’s product of matrices A and B. §.§ Markov decision problem For reference, we first briefly introduce the standard Markov decision problem (MDP) <cit.>, where a decision making agent sequentially takes actions to maximize cumulative discounted rewards in environments called Markov decision process. A Markov decision process is a mathematical model of dynamical systems with the state-space S:={ 1,2,… ,| S|} and action-space A:= {1,2,…,| A|}. The decision maker selects an action a ∈ A with the current state s, then the state transits to a state s' with probability P(s,a,s'), and the transition incurs a reward r(s,a,s'), where P(s,a,s') is the state transition probability from the current state s∈ S to the next state s' ∈ S under action a ∈ A, and r(s,a,s') is the reward function. For convenience, we consider a deterministic reward function and simply write r(s_k,a_k ,s_k + 1) =:r_k,k ∈{ 0,1,…}. A deterministic policy, π : S→ A, maps a state s ∈ S to an action π(s)∈ A. The objective of the Markov decision problem (MDP) is to find a deterministic optimal policy, π^*, such that the cumulative discounted rewards over infinite time horizons is maximized, i.e., π^*:= _π∈Θ𝔼[.∑_k=0^∞γ^k r_k|π], where γ∈ [0,1) is the discount factor, Θ is the set of all admissible deterministic policies, (s_0,a_0,s_1,a_1,…) is a state-action trajectory generated by the Markov chain under policy π, and 𝔼[·|π] is an expectation conditioned on the policy π. The Q-function under policy π is defined as Q^π(s,a)=𝔼[ . ∑_k=0^∞γ^k r_k|s_0=s,a_0=a,π], s∈ S,a∈ A, and the optimal Q-function is defined as Q^*(s,a)=Q^π^*(s,a) for all s∈ S,a∈ A. Once Q^* is known, then an optimal policy can be retrieved by the greedy policy π^*(s)=_a∈ AQ^*(s,a). §.§ Two-player zero-sum Markov game In this paper, we consider a two-player zero-sum Markov game, where two decision making agents sequentially take actions to maximize and minimize cumulative discounted rewards (return), respectively. Hereafter, these two agents will be called the user and the adversary. The user's primary goal entails maximizing the return, while the adversary strives to hinder the user's progress by minimizing the return. There exist two categories of the two-player Markov games, the alternating two-player Markov game and simultaneous two-player Markov game. In the alternating two-player Markov game, two agents engaged in decision making take turns in selecting actions to maximize and minimize the return, respectively. On the other hand, in the simultaneous two-player Markov game, the two agents take actions simultaneously to maximize and minimize the return. Specifically, in the alternating Markov game, the user initiates its decision at each time step without knowledge of the adversary's action. 
Afterwards, the adversary can observe the user's action and subsequently take its action based on the observed user's action. Consequently, the adversary holds more advantages over the user in the alternating Markov game. On the other hand, the two agents have fair chances in the simultaneous Markov game. In this paper, we mainly focus on the alternating two-player Markov game because it simplifies the overall concepts and derivation processes, and all the results in this paper can be easily extended to the simultaneous Markov games. In the alternating Markov game, the state-space is S:={ 1,2,… ,| S|}, the action-space of the user is A:= {1,2,…,| A|}, and the action-space of the adversary is B:= {1,2,…,| B|}. The user selects an action a ∈ A at the current state s, and the adversary can observe the user's action a, and selects an adversarial decision b ∈ B. Then, the state transits to the next state s' with probability P(s'|s,a,b), and the transition incurs a reward r(s,a,b,s'), where P(s'|s,a,b) is the state transition probability from the current state s∈ S to the next state s' ∈ S under actions a ∈ A, b ∈ B, and r(s,a,b,s') is the reward function. For convenience, we consider a deterministic reward function and simply write r(s_k,a_k,b_k ,s_k + 1) =:r_k,k ∈{ 0,1,…}. The user's stationary deterministic policy, π : S→ A, maps a state s ∈ S to an action π(s)∈ A. The adversary's stationary deterministic policy, μ : S× A→ B, maps a state s ∈ S and the user's action a ∈ A to an adversarial action μ(s,a)∈ B. The user does not have an access to the adversary's action, while the adversary can observe the user's action before making its decision. It is known that there exists an optimal stationary deterministic policy <cit.> for both the user and adversary. The objective of the Markov game is to determine the user's optimal policy, denoted as π^*, and the optimal adversarial policy, denoted as μ^*: (π ^*,μ^*) : = _π∈Θmin _μ∈Ω𝔼[ . ∑_k = 0^∞γ ^k r_k |π ,μ], where γ∈ [0,1) is the discount factor, Θ is the set of all admissible deterministic policies of the user, Ω is the set of all admissible deterministic policies of the adversary, (s_0,a_0,b_0,s_1,a_1,b_1,…) is a state-action trajectory generated under policies π, μ, and 𝔼[·|π,μ] is an expectation conditioned on the policies π and μ. The Markov game considered in this paper can be potentially applied to the following scenarios: * Adversarial decision making: There exists an intelligent adversary that can take adversarial actions to prevent the user from achieving their goal. Under this situation, the user wants to find an optimal policy that can achieve the best possible performance against the adversarial behaviors. * Robust decision making: The environment changes arbitrarily, and the user want to find a robust optimal policy that can achieve the best possible performance in the worst case scenarios. Some fundamental tools and notions used in Markov decision problem <cit.>, such as the value function and Bellman equation, can be also applied to the two-player Markov game. In particular, the optimal Q-function is defined as Q^* (s,a,b): = max _π∈Θmin _μ∈Ω𝔼[ . 
∑_k = 0^∞γ ^k r_k |s_0 = s,a_0 = a,b_0 = b,π ,μ] which satisfies the optimal Q-Bellman equation Q^* (s,a,b) = R(s,a,b) + γ∑_s' ∈ SP(s'|s,a,b)max _a' ∈ Amin _b' ∈ B Q^* (s',a',b')_: = (FQ^* )(s,a,b) The corresponding user's stationary optimal policy is given by π ^* (s) = max _a ∈ Amin _b ∈ B Q^* (s,a,b) and, the adversary's stationary optimal policy is μ ^* (s,a) = min _b ∈ B Q^* (s,a,b) Using the optimal Bellman equation (<ref>), the Q-value iteration (Q-VI) is the recursion Q_k + 1(s,a,b) = (FQ_k)(s,a,b), (s,a,b) ∈ S× A× B with any Q_0∈ℝ^| S× A× B|. It is known that the Q-VI converges to Q^* <cit.>. §.§ Switching system In this paper, the proposed analysis mainly relies on the so-called switching system models <cit.> in the control community. Therefore, we briefly introduce the notion of switching systems here. Since the switching system is a special form of nonlinear systems <cit.>, we first consider the nonlinear system x_k+1=f(x_k), x_0=z ∈ℝ^n, k∈{1,2,…}, where x_k∈ℝ^n represents the state and f:ℝ^n →ℝ^n denotes a nonlinear mapping. An essential concept when dealing with the nonlinear system is the equilibrium point. A point x^*∈ℝ^n in the state-space is said to be an equilibrium point of (<ref>) if it has the property that when the system's state begins at x^*, it remains at x^* <cit.>. For(<ref>), the equilibrium points are the real roots of the equation f(x) = x. The equilibrium point x^* is said to be globally asymptotically stable if, for any initial state x_0 ∈ℝ^n, x_k → x^* as k →∞. Next, let us consider the particular system, called the linear switching system, x_k+1=A_σ_k x_k, x_0=z∈ℝ^n, k∈{0,1,…}, where x_k ∈ℝ^n is the state, σ∈ℳ:={1,2,…,M} is called the mode, σ_k ∈ℳ is called the switching signal, and {A_σ,σ∈ℳ} are called the subsystem matrices. The switching signal can be either arbitrary or controlled by the user under a certain switching policy. Especially, a state-feedback switching policy is denoted by σ_k = σ(x_k). A more general class of systems is the affine switching system x_k+1=A_σ_k x_k + b_σ_k, x_0=z∈ℝ^n, k∈{0,1,…}, where b_σ_k∈ℝ^n is the additional input vector, which also switches according to σ_k. Due to the additional input b_σ_k, its stabilization becomes much more challenging. § CONVERGENCE OF Q-VI VIA SWITCHING SYSTEM MODEL In this section, we provide a proof of convergence of Q-VI drawn from switching system models. This approach yields additional perspectives on Q-VI, supplementing the existing analysis in the literature. Moreover, it will serve as a fundamental basis for the convergence analysis of minimax Q-learning in the subsequent sections. To represent Q-VI compactly through the switching system model, we need to introduce some vector and matrix notations. §.§ Assumptions and definitions Throughout the paper, we will use the following compact notations for dynamical system representations of Q-VI: R_a,b : = [ [ R(1,a,b); R(2,a,b); ⋮; R(| S|,a,b); ]] ∈ℝ^| S| , P_a,b : = [ [ P(1|1,a,b) P(2|1,a,b) ⋯ P(| S||1,a,b); P(1|2,a,b) P(2|2,a,b) ⋯ P(| S||2,a,b); ⋮ ⋮ ⋱ ⋮; P(1|| S|,a,b) P(2|| S|,a,b) ⋯ P(| S||| S|,a,b); ]] where R_a,b∈ℝ^| S| is the expected reward vector conditioned on the action pair (a,b)∈ A× B, and P_a,b∈ℝ^| S|× | S| is the state transition probability matrix conditioned on the action pair (a,b)∈ A× B. 
Moreover, let us define the associated notations P: = [ [ P_1,1; ⋮; P_| A|,| B|; ]] ∈ℝ^| S| × | S× A× B| , R: = [ [ R_1,1; ⋮; R_| A|| B|; ]] ∈ℝ^| S× A× B| , Q: = [ [ Q_1,1; ⋮; Q_| A|,| B|; ]] ∈ℝ^| S× A× B| , where Q_a,b∈ℝ^| S| is a vector with [Q_a,b ]_s = Q(s,a,b) and P∈ℝ^| S× A| × | S|. The Q-function is encoded as a single vector Q ∈ℝ^| S× A× B|, which enumerates Q(s,a,b) for all s ∈ S, a ∈ A, and b ∈ B. The single value Q(s,a,b) can be extracted by Q(s,a,b) = (e_a ⊗ e_b ⊗ e_s )^T Q, where e_s ∈ℝ^| S|, e_a ∈ℝ^| A|, and e_b ∈ℝ^| B| are s-th basis vector (all components are 0 except for the s-th component which is 1), a-th basis vector, and b-th basis vector, respectively. For any given Q ∈ℝ^| S× A× B|, let us denote the greedy policy with respect to Q by i(s,a):=_b∈ B Q(s,a,b)∈ B. Then, we define the corresponding action transition matrix as Γ_Q : = [ [ e_i(1,1)^T ⊗ e_1^T; e_i(1,2)^T ⊗ e_2^T; ⋮; e_i(| S|,| A|)^T ⊗ e_| S|| A|^T; ]] ∈ℝ^| S× A| × | S× A× B| where e_i(s,a)∈ℝ^| B|, and e_i ∈ℝ^| A× S|. This notation has been introduced in <cit.>, and it is useful to express Q-VI in the vector and matrix form using the relation Γ_Q Q = [ [ min _b ∈ B Q(1,1,b); min _b ∈ B Q(1,2,b); ⋮; min _b ∈ B Q(| S|,| A|,b); ]] ∈ℝ^| S× A| Similarly, for any given Q' ∈ℝ^| S× A|, let us denote the greedy policy with respect to Q' by i(s): = max _a ∈ A Q'(s,a) ∈ A. Then, we define the corresponding action transition matrix as Π_Q : = [ [ e_i(1)^T ⊗ e_1^T; e_i(2)^T ⊗ e_2^T; ⋮; e_i(| S|)^T ⊗ e_| S|^T; ]] ∈ℝ^| S| × | S|| A| where e_i(s)∈ℝ^| A| and e_s ∈ℝ^| S|. Then, we can similarly prove that Π_Q' Q' = [ [ max _a ∈ A Q'(1,a); max_a ∈ A Q'(2,a); ⋮; max _a ∈ A Q'(| S|,a); ]] ∈ℝ^| S| Combining the two notations, one can prove the relation Π _Γ _Q QΓ _Q Q = [ [ max _a ∈ Amin _b ∈ B Q(1,a,b); max _a ∈ Amin _b ∈ B Q(2,a,b); ⋮; max _a ∈ Amin _b ∈ B Q(| S|,a,b); ]] ∈ℝ^| S| Another important property of the notations is that PΠ_Γ_Q QΓ_Q ∈ℝ^| S× A× B| × | S× A× B| is the transition probability matrix of the state-action pair (s,a,b) under the policy (π,μ) where π (s): = _a ∈ Amin_b ∈ B Q(s,a,b) ∈ A and μ(s,a):=_b∈ B Q(s,a,b)∈ B. Using these notations, the Bellman equation in (<ref>) can be compactly written as Q^* = γ PΠ _Γ _Q^* Q^* Γ _Q^* Q^* + R In what follows, an equivalent switching system model, that captures the behavior of Q-VI, is introduced, and based on it, we provide a proof of convergence of Q-VI from the switching system perspective. §.§ Convergence of Q-VI via switching system model In this section, we study a discrete-time switching system model of Q-VI and establish its finite-time convergence based on the stability analysis of switching systems. Using the notation introduced in <ref>, the update of Q-VI can be rewritten as Q_k+1= R+γ PΠ_Γ_Q_kQ_kΓ_Q_kQ_k , Combining (<ref>) and (<ref>) leads to (Q_k + 1 - Q^* ) = γ PΠ _Γ _Q_k Γ _Q_k _: = A_Q_k (Q_k - Q^* ) + γ P(Π _Γ _Q_k Q_k Γ _Q_k - Π _Γ _Q^* Q^* Γ _Q^* )Q^* _: = b_Q_k which is a switched affine system where A_Q_k and b_Q_k switch among matrices from {γ PΠ_Γ_QΓ_Q: Q ∈ℝ^| S× A× B|} and vectors from {γ P(Π_Γ_QΓ_Q - Π_Γ_Q^*Γ_Q^*)Q^*: Q ∈ℝ^| S× A× B|} based on the changes of Q_k. Hence, the convergence of Q-VI now relies on analyzing the stability of the aforementioned switching system. The main challenge in proving its stability arises from the presence of the affine term b_Q_k. Without it, we could easily establish the exponential stability of the corresponding deterministic switching system under any switching policy. 
Specifically, we have the following result. For arbitrary H_k, k≥ 0, the linear switching system Q_k+1 - Q^* = A_H_k (Q_k - Q^*), Q_0 - Q^*∈ℝ^| S× A× B|, is exponentially stable such that Q_k+1- Q^*_∞≤γQ_k - Q^*_∞, k ≥ 0, and Q_k- Q^*_∞≤γ ^k Q_0 - Q^*_∞, k ≥ 0, The above result follows immediately from the key fact that A_Q_∞≤γ, which we formally state in the lemma below. For any Q ∈ℝ^| S× A× B|, A_Q_∞≤γ, where the matrix norm A _∞ :=max_1≤ i ≤ m∑_j=1^n |A_ij | and A_ij is the element of A in i-th row and j-th column. Note ∑_j | [A_Q ]_ij | = γ∑_j |[PΠ _Γ _Q Γ _Q ]_ij | = γ, which comes from the fact that PΠ _Γ _Q Γ _Q is a stochastic matrix, i.e., its row vector is a stochastic vector. This completes the proof. However, due to the presence of the additional affine term b_Q_k in the switching system (<ref>), it is not immediately evident how to directly obtain its finite-time convergence. To overcome the challenge posed by the affine term, we will utilize two simpler upper and lower bounds, provided below. For all k≥0, we have γ PΠ _Γ _Q_k Q^* Γ _Q_k (Q_k - Q^* ) ≤ Q_k + 1 - Q^* ≤γ PΠ _Γ _Q^* Q_k Γ _Q^* (Q_k - Q^* ) First of all, the lower bound can be derived though the inequalities Q_k + 1 - Q^* = A_Q_k (Q_k - Q^* ) + b_Q_k = γ PΠ _Γ _Q_k Q_k Γ _Q_k Q_k - γ PΠ _Γ _Q^* Q^* Γ _Q^* Q^* ≥ γ PΠ _Γ _Q_k Q_k Γ _Q_k Q_k - γ PΠ _Γ _Q_k Q^* Γ _Q_k Q^* ≥ γ PΠ _Γ _Q_k Q^* Γ _Q_k Q_k - γ PΠ _Γ _Q_k Q^* Γ _Q_k Q^* = γ PΠ _Γ _Q_k Q^* Γ _Q_k (Q_k - Q^* ), where the inequalities come from the definitions of Γ_Q and Π_Q. Similarly, for the upper bound, one gets Q_k + 1 - Q^* = A_Q_k (Q_k - Q^* ) + b_Q_k = γ PΠ _Γ _Q_k Q_k Γ _Q_k Q_k - γ PΠ _Γ _Q^* Q^* Γ _Q^* Q^* ≤ γ PΠ _Γ _Q^* Q_k Γ _Q^* Q_k - γ PΠ _Γ _Q^* Q^* Γ _Q^* Q^* ≤ γ PΠ _Γ _Q^* Q_k Γ _Q^* Q_k - γ PΠ _Γ _Q^* Q_k Γ _Q^* Q^* = γ PΠ _Γ _Q^* Q_k Γ _Q^* (Q_k - Q^* ) This completes the proof. Based on the upper and lower bounds presented in <ref>, one can now establish the convergence of Q-VI through the following lemma. We have the following bounds for Q-VI iterates: Q_k+1 - Q^* _∞≤γQ_k - Q^* _∞, ∀ k ≥ 0. Since γ PΠ _Γ _Q_k Q^* Γ _Q_k (Q_k - Q^* ) ≤ Q_k + 1 - Q^* ≤γ PΠ _Γ _Q^* Q_k Γ _Q^* (Q_k - Q^* ) from <ref>, it follows that (e_a ⊗ e_b ⊗ e_s )^T γ PΠ _Γ _Q_k Q^* Γ _Q_k (Q_k - Q^* ) ≤ (e_a ⊗ e_b ⊗ e_s )^T (Q_k + 1 - Q^* ) ≤ (e_a ⊗ e_b ⊗ e_s )^T γ PΠ _Γ _Q^* Q_k Γ _Q^* (Q_k - Q^* ). If (e_a ⊗ e_b ⊗ e_s )^T (Q_k + 1 - Q^* ) ≤ 0, then |(e_a ⊗ e_b ⊗ e_s )^T (Q_k + 1 - Q^* )| ≤ |(e_a ⊗ e_b ⊗ e_s )^T γ PΠ _Γ _Q_k Q^* Γ _Q_k (Q_k - Q^* )|, where e_s ∈ℝ^| S| and e_a ∈ℝ^| A| are the s-th and a-th standard basis vectors, respectively. If (e_a ⊗ e_b ⊗ e_s )^T (Q_k + 1 - Q^* ) > 0, then |(e_a ⊗ e_b ⊗ e_s )^T (Q_k - Q^* )| ≤ |(e_a ⊗ e_b ⊗ e_s )^T γ PΠ _Γ _Q^* Q_k Γ _Q^* (Q_k - Q^* )|. Therefore, one gets Q_k + 1 - Q^* _∞≤max{γ PΠ _Γ _Q_k Q^* Γ _Q_k (Q_k - Q^* )_∞ ,γ PΠ _Γ _Q^* Q_k Γ _Q^* (Q_k - Q^* )_∞} ≤ max{γQ_k - Q^* _∞ ,γQ_k - Q^* _∞} = γQ_k - Q^* _∞ , which completes the proof. As a direct consequence of <ref>, convergence of Q-VI can be derived as follows: Q_k - Q^* _∞≤γ^k Q_0 - Q^* _∞ In this section, we have presented a discrete-time switching system model of Q-VI and proved its convergence for alternating two-player zero-sum Markov games. It is important to note that all the derivations in this section can be readily extended to Q-VI for simultaneous two-player zero-sum Markov games. However, for the sake of simplicity in our presentation, we only focus on the alternating case. 
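Before moving on to the learning setting, the Q-VI recursion and the γ-contraction established above can be made concrete with a short, self-contained numerical sketch. Everything below is an illustrative assumption rather than material from the paper: the state and action sizes, the randomly generated transition tensor P(s'|s,a,b), and the rewards r(s,a,b) are arbitrary, and NumPy is used only for brevity.

import numpy as np

rng = np.random.default_rng(0)
nS, nA, nB, gamma = 4, 3, 2, 0.9                      # illustrative sizes and discount factor

# Randomly generated model (illustrative only): P(s'|s,a,b) and deterministic rewards r(s,a,b).
P = rng.random((nS, nA, nB, nS))
P /= P.sum(axis=-1, keepdims=True)                    # each (s,a,b) row is a probability vector over s'
R = rng.uniform(-1.0, 1.0, size=(nS, nA, nB))

def F(Q):
    # Minimax Bellman operator: (FQ)(s,a,b) = r(s,a,b) + gamma * sum_s' P(s'|s,a,b) max_a' min_b' Q(s',a',b').
    V = Q.min(axis=2).max(axis=1)                     # V(s') = max_a' min_b' Q(s',a',b')
    return R + gamma * np.einsum('sabt,t->sab', P, V)

# Run Q-VI long enough to obtain a numerical proxy for the fixed point Q^*.
Q_star = np.zeros((nS, nA, nB))
for _ in range(500):
    Q_star = F(Q_star)

# Check the contraction ||Q_{k+1} - Q^*||_inf <= gamma * ||Q_k - Q^*||_inf along a fresh Q-VI run.
Q = np.zeros((nS, nA, nB))
err = np.abs(Q - Q_star).max()
for k in range(20):
    Q = F(Q)
    new_err = np.abs(Q - Q_star).max()
    assert new_err <= gamma * err + 1e-9
    err = new_err

The assertion mirrors the sup-norm bound derived above, and the max-min reduction inside F is exactly the quantity that the selector matrices Π and Γ encode in vector form.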
The presented switching system model serves as the basis for analyzing the minimax Q-learning algorithm in the following sections. § MINIMAX Q-LEARNING In this section, we study minimax Q-learning algorithm given in <ref> to solve the alternating two-player zero-sum Markov game. <Ref> is slightly different from the original minimax Q-learning proposed in <cit.> for simultaneous two-player Markov games by replacing the max operator over the set of all stochastic policies with the max operator restricted to the discrete action set A. However, it is worth noting that all the analyses presented in this paper remain applicable to the original minimax Q-learning approach for simultaneous two-player Markov games, requiring only minor adjustments. Through the minimax Q-learning, both the user and adversary can learn their optimal policies. However, we will focus on the user's role in this paper. We will address the following scenario: while learning, the user has an access to the adversary's action b∈ B, which is generated by an exploratory behavior policy, meaning that the adversary does not intervene and disrupt the user. On the other hand, after the learning period, the adversary intervenes and hides its decision to the user. Once Q^* is found, the user takes action according to π ^* (s): = _a ∈ Amin _b ∈ B Q^* (s,a,b), and it leads to the best performance against the optimal adversary behaviors. In <ref>, we consider a constant step-size α∈ (0,1), and assume that {(s_k,a_k,b_k,s_k')}_k=0^∞ are i.i.d. samples under the behavior policies β and ϕ, where the behavior policy is the policy by which the RL agent actually behaves to collect experiences. For simplicity, we assume that the state at each time is sampled from the stationary state distribution p, and in this case, the state-action distribution at each time is identically given by d(s,a,b) = p(s)β (a|s)ϕ (b|s), (s,a,b) ∈ S× A× B. Throughout, we make the following assumptions. d(s,a,b)> 0 holds for all s∈ S,a ∈ A,b ∈ B. The step-size is a constant α∈ (0,1). [Unit bound on rewards] The reward is bounded as follows: max _(s,a,b,s') ∈ S× A× B× S |r (s,a,b,s')|≤ 1. [Unit bound on initial parameters] The initial iterate Q_0 satisfies Q_0 _∞≤ 1. The above assumptions are crucial for the proposed finite-time analysis. <ref> ensures that every state-action pair can be visited infinitely often, facilitating sufficient exploration, which is a standard assumption in the literature <cit.>. This assumption can be used when the state-action occupation frequency is given, and has been also considered in <cit.> and <cit.>. <cit.> considers another exploration condition, called the cover time condition, which states that there exists a certain time period, within which all the state-action pair is expected to be visited at least once. Slightly different cover time conditions have been used in <cit.> and <cit.> for convergence rate analysis. <ref> and <ref> impose unit bounds on the reward function and the initial iterate Q_0, and are introduced for the sake of simplicity in analysis, without sacrificing generality. The constant step-size in <ref> has been also studied in <cit.> and <cit.> using different approaches. The following quantities will be frequently used in this paper; hence, we define them for convenience. * Maximum state-action occupation frequency: d_max := max_(s,a,b)∈ S× A× B d(s,a,b) ∈ (0,1). * Minimum state-action occupation frequency: d_min:= min_(s,a,b) ∈ S× A× B d(s,a,b) ∈ (0,1). * Exponential decay rate: ρ:=1 - α d_min (1-γ). 
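Before turning to the analysis, the update rule of <ref> and the role of the exploration distribution d(s,a,b) = p(s)β(a|s)ϕ(b|s) can be illustrated with a minimal simulation sketch. All problem data below are illustrative assumptions (uniform behavior policies, a randomly generated model, rewards depending only on (s,a,b), and an arbitrary constant step-size), not quantities taken from the paper.

import numpy as np

rng = np.random.default_rng(1)
nS, nA, nB = 4, 3, 2
gamma, alpha = 0.9, 0.05                              # discount factor and constant step-size

# Illustrative model: transition probabilities P(s'|s,a,b) and bounded rewards r(s,a,b) in [-1, 1].
P = rng.random((nS, nA, nB, nS))
P /= P.sum(axis=-1, keepdims=True)
R = rng.uniform(-1.0, 1.0, size=(nS, nA, nB))

Q = np.zeros((nS, nA, nB))
for k in range(100_000):
    # i.i.d. sample (s,a,b) from d(s,a,b) = p(s) beta(a|s) phi(b|s); here p, beta, and phi are uniform.
    s, a, b = rng.integers(nS), rng.integers(nA), rng.integers(nB)
    s_next = rng.choice(nS, p=P[s, a, b])
    # TD-error delta_k with the max-min target: the user maximizes over a', the adversary minimizes over b'.
    delta = R[s, a, b] + gamma * Q[s_next].min(axis=1).max() - Q[s, a, b]
    Q[s, a, b] += alpha * delta

# Greedy policies recovered from the learned Q-function.
pi = Q.min(axis=2).argmax(axis=1)                     # pi(s)   = argmax_a min_b Q(s,a,b)
mu = Q.argmin(axis=2)                                 # mu(s,a) = argmin_b Q(s,a,b)

With a constant step-size the iterates do not converge exactly but fluctuate around Q^* within an O(√α) band; the analysis below makes this precise, with errors decaying at the rate ρ defined above.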
It can be proven that under <ref>, the decay rate satisfies ρ∈ (0,1). The reason behind referring to ρ as an exponential decay rate is that the finite-time error bound, which will be derived in the remaining part of this paper, decays exponentially at the rate of ρ. Similar to Q-VI, we will represent minimax Q-learning in <ref> as a switching system model in this section. However, a key distinction lies in the fact that <ref> can be viewed as a stochastic Q-VI, where the Q-function for each state-action pair is updated asynchronously through stochastic state-action pair explorations. Therefore, it becomes essential to incorporate the state-action occupation frequency, which is linked to exploration, into the switching system model. Specifically, the state-action occupation frequency is encoded using the following matrix notations: D_a,b : = [ [ d(1,a,b) ; ⋱ ; d(|S|,a,b); ]] ∈ℝ^| S| × | S| , D: = [ [ D_1,1 ; ⋱ ; D_| A|,| B|; ]] ∈ℝ^| S× A× B| × | S× A× B| . Note also that under <ref>, D is a nonsingular diagonal matrix with strictly positive diagonal elements. In our analysis, the boundedness of Q-learning iterates <cit.> plays an important role in our analysis. If the step-size is less than one, then for all k ≥ 0, Q_k _∞≤ Q_max : = max{ 1,Q_0 _∞}/1 - γ. From <ref>, we can easily see that Q_max≤1/1-γ. The boundedness has been established for standard Q-learning in <cit.>, but not for minimax Q-learning. For this reason, we provide its proof in Appendix <ref> for the completeness of the presentation. §.§ Minimax Q-learning as a stochastic affine switching system Using the notation introduced, the update in <ref> can be rewritten as Q_k + 1 = Q_k + α{ DR + γ DPΠ _Γ _Q_k Q_k Γ _Q_k Q_k - DQ_k + w_k }, where w_k = (e_a_k ⊗ e_b_k ⊗ e_s_k )r_k + γ (e_a_k ⊗ e_b_k ⊗ e_s_k )(e_s_k' )^T Π _Γ _Q_k Q_k Γ _Q_k Q_k - (e_a_k ⊗ e_b_k ⊗ e_s_k )(e_a_k ⊗ e_b_k ⊗ e_s_k )^T Q_k - (DR + γ DPΠ _Γ _Q_k Q_k Γ _Q_k Q_k - DQ_k ), = (e_a_k ⊗ e_b_k ⊗ e_s_k )δ _k - (DR + γ DPΠ _Γ _Q_k Q_k Γ _Q_k Q_k - DQ_k ), is the stochastic noise, where all randomness in <ref> is encoded into a single vector, (s_k,a_k,b_k,r_k,s_k') is the sample in the k-th time-step, and δ _k : = r_k + γ (e_s_k' )^T Π _Γ _Q_k Q_k Γ _Q_k Q_k - (e_a_k ⊗ e_b_k ⊗ e_s_k )^T Q_k is called the TD-error. Moreover, by definition, the noise term has a zero mean conditioned on Q_k, i.e., 𝔼[w_k|Q_k]=0. Invoking the optimal Bellman equation (γ DPΠ _Γ _Q^* Q^* Γ _Q^* - D)Q^* + DR = 0 in (<ref>), (<ref>) can be further rewritten by (Q_k + 1 - Q^* ) = { I + α (γ DPΠ _Γ _Q_k Q_k Γ _Q_k - D)}_: = A_Q_k (Q_k - Q^* ) + αγ DP(Π _Γ _Q_k Q_k Γ _Q_k - Π _Γ _Q^* Q^* Γ _Q^* )Q^* _: = b_Q_k + α w_k . which is a linear switching system with an extra affine term, b_Q_k:=αγ DP(Π _Γ _Q_k Q_k Γ _Q_k - Π _Γ _Q^* Q^* Γ _Q^* )Q^*, and stochastic noise α w_k. Using the notation, the minimax Q-learning iteration can be concisely represented as the stochastic affine switching system Q_k + 1 - Q^* = A_Q_k (Q_k - Q^* ) + b_Q_k + α w_k, Therefore, the convergence of minimax Q-learning can be reduced to analyzing the stability of the above affine switching system. However, proving its stability poses a significant challenge due to the presence of affine and stochastic terms. In the absence of these terms, we can establish the exponential stability of the corresponding deterministic switching system, irrespective of the switching policy (as demonstrated in <ref>). 
However, (<ref>) includes additional affine terms and stochastic noises, making it unclear how to derive its finite-time convergence directly. To address this issue, we employ two simpler comparison systems that bound the trajectories of the original system and are more amenable to analysis. The construction of these comparison systems draws inspiration from <cit.> and <cit.>, capitalizing on the unique structure of the Q-learning algorithm. Unlike previous works in <cit.>, our focus lies in the discrete-time domain and finite-time analysis. Moreover, <cit.> presents a finite-time analysis of standard Q-learning through the framework of discrete-time switching system models. In contrast to <cit.>, the discrete-time switching system model in this paper exhibits a distinct structure including the min operator in its updates. Consequently, establishing finite-time convergence becomes considerably more challenging. In other words, it is not feasible to adopt analogous approaches as those outlined in <cit.>, which will be elaborated in the remaining parts of this paper. In the following subsections, we will present the two comparison systems, called the upper and lower comparison systems. §.§ Lower comparison system Let us consider the following stochastic switching system: (Q_k + 1^L - Q^* ) = (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q_k^L - Q^* - D} )(Q_k^L - Q^* ) + α w_k, Q_0^L-Q^*∈ℝ^| S× A× B|, where the stochastic noise w_k in the lower comparison system is the same as that in the original system (<ref>). We refer to this system as the lower comparison system. It is important to note that, unlike the original system (<ref>), the above system does not include the affine term. Furthermore, its main property is that if Q_0^L - Q^* ≤ Q_0-Q^* initially, then Q_k^L - Q^* ≤ Q_k-Q^* holds for all k ≥ 0. Suppose Q_0^L- Q^*≤ Q_0 - Q^*, where ≤ is used as the element-wise inequality. Then, Q_k^L- Q^*≤ Q_k-Q^*, for all k≥0. The proof is done by an induction argument. Suppose the result holds for some k ≥ 0. Then, (Q_k+1- Q^*)= (Q_k - Q^* ) + α D{γ PΠ _Γ _Q_k Q_k Γ _Q_k Q_k - γ PΠ _Γ _Q^* Q^* Γ _Q^* Q^* - Q_k + Q^* } + α w_k ≥ (Q_k - Q^* ) + α D{γ PΠ _Γ _Q_k Q_k Γ _Q_k Q_k - γ PΠ _Γ _Q^* Q^* Γ _Q_k Q^* - Q_k + Q^* } + α w_k ≥ (Q_k - Q^* ) + α D{γ PΠ _Γ _Q^* Q^* Γ _Q_k Q_k - γ PΠ _Γ _Q^* Q^* Γ _Q_k Q^* - Q_k + Q^* } + α w_k = (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q_k - D} )(Q_k - Q^* ) + α w_k ≥ (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q_k - D} )(Q_k^L - Q^* ) + α w_k ≥ (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q_k^L - Q^* - D} )(Q_k^L - Q^* ) + α w_k = Q_k+1^L-Q^*, where the third inequality is due to the hypothesis Q_k^L- Q^*≤ Q_k-Q^* and the fact that A_Q^* is a positive matrix (all elements are nonnegative). The proof is completed by induction. §.§ Upper comparison system Now, let us introduce the stochastic linear switching system (Q_k + 1^U - Q^* ) = (I + α{γ DPΠ _Γ _Q^* (Q_k^U - Q^* )Γ _Q^* - D} )(Q_k^U - Q^* ) + α w_k, Q_0^U-Q^*∈ℝ^| S× A× B|, where the stochastic noise w_k is kept the same as the original system. We will call it the upper comparison system. Similar to the lower comparison system (<ref>), the above system does not include the affine term. Moreover, if Q_0^U-Q^*≥ Q_0-Q^* initially, then Q_k^U-Q^*≥ Q_k-Q^* for all k ≥ 0. Suppose Q_0^U-Q^*≥ Q_0-Q^*, where ≥ is used as the element-wise inequality. Then, Q_k^U-Q^*≥ Q_k-Q^*, for all k ≥ 0. Suppose the result holds for some k ≥ 0. 
Then, Q_k + 1 - Q^* = (Q_k - Q^* ) + α D{γ PΠ _Γ _Q_k Q_k Γ _Q_k Q_k - γ PΠ _Γ _Q^* Q^* Γ _Q^* Q^* - Q_k + Q^* } + α w_k ≤ (Q_k - Q^* ) + α D{γ PΠ _Γ _Q^* Q_k Γ _Q^* Q_k - γ PΠ _Γ _Q^* Q^* Γ _Q^* Q^* - Q_k + Q^* } + α w_k ≤ (Q_k - Q^* ) + α D{γ PΠ _Γ _Q^* Q_k Γ _Q^* Q_k - γ PΠ _Γ _Q^* Q_k Γ _Q^* Q^* - Q_k + Q^* } + α w_k ≤ (I + α{γ DPΠ _Γ _Q^* Q_k Γ _Q^* - D} )(Q_k - Q^* ) + α w_k ≤ (I + α{γ DPΠ _Γ _Q^* Q_k Γ _Q^* - D} )(Q_k^U - Q^* ) + α w_k ≤ (I + α{γ DPΠ _Γ _Q^* (Q_k^U - Q^* )Γ _Q^* - D} )(Q_k^U - Q^* ) + α w_k = Q_k+1^U-Q^*, where the third inequality is due to the hypothesis Q_k^U-Q^*≥ Q_k-Q^* and the fact that A_Q_k is a positive matrix. The proof is completed by induction. § FINITE-TIME ANALYSIS OF MINIMAX Q-LEARNING Building upon the constructions of the lower, upper, and original switching systems, we now proceed to establish the convergence of minimax Q-learning. Considering that the original system (<ref>) associated with <ref> is confined within the bounds set by the upper and lower comparison systems, respectively, its convergence can be proved by ensuring the convergence of these comparison systems. To begin, our attention is directed towards the lower comparison system in (<ref>). §.§ Lower comparison system It is worth noting that the lower comparison system (<ref>), being a switching system with stochastic noises, still presents challenges in establishing its convergence due to the complex dependence of the system matrices on the state. To address this difficulty, we propose an additional pair of lower and upper comparison systems that effectively bound the lower comparison system described in (<ref>). First and foremost, we will focus on the upper comparison system, called the lower-upper comparison system, which provides an upper bound for the lower comparison system. The formulation of this system is as follows: (Q_k + 1^LU - Q^* ) = (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q^* - D} )_: = A(Q_k^LU - Q^* ) + α w_k, which is a stochastic linear system. We will prove that the trajectory of this lower-upper comparison system upper bounds that of the lower comparison system in (<ref>). Suppose Q_0^L- Q^*≤ Q_0^LU - Q^*, where ≤ is used as the element-wise inequality. Then, Q_k^L- Q^*≤ Q_k^LU-Q^*, for all k≥0. The proof is done by an induction argument. Suppose the result holds for some k ≥ 0. Then, (Q_k + 1^L - Q^* ) = (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q_k^L - Q^* - D} )(Q_k^L - Q^* ) + α w_k ≤ (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q^* - D} )(Q_k^L - Q^* ) + α w_k ≤ (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q^* - D} )(Q_k^LU - Q^* ) + α w_k = Q_k+1^LU-Q^*, where the second inequality is due to the hypothesis Q_k^L- Q^*≤ Q_k^LU-Q^* and the fact that A_Q^* is a positive matrix (all elements are nonnegative). The proof is completed by induction. Defining x_k := Q_k^LU - Q^* and A:= (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q^* - D} ), (<ref>) can be concisely represented as the stochastic linear system x_k + 1 = A x_k + α w_k, x_0 ∈ℝ^n, ∀ k ≥ 0, where n:= | S× A× B|, and w_k∈ℝ^n is a stochastic noise. We will first prove the convergence of this linear system, whose proof is given in Appendix <ref>. For any k ≥ 0, we have 𝔼[ Q_k^LU - Q^*_2 ] ≤3α ^1/2 | S× A× B|/d_min^1/2 (1 - γ )^3/2 + | S× A× B| Q_0^LU - Q^*_2 ρ ^k. The first term on the right-hand side of (<ref>) can be made arbitrarily small by reducing the step-size α∈ (0,1). The second bound exponentially vanishes as k →∞ at the rate of ρ = 1 - α d_min (1 - γ) ∈ (0,1). 
Therefore, it proves the exponential convergence of the mean-squared error of the lower comparison system up to a constant bias. Although the convergence of the lower-upper comparison system has been established, it only guarantees convergence of an upper bound of the lower comparison system in (<ref>). As mentioned earlier, it is hard to directly prove convergence of (<ref>) due to the dependance of the system matrix on Q_k^L. In particular, if we take the expectation on both sides of (<ref>), it is not possible to separate system matrix and the state unlike the lower-upper comparison system, making it much harder to analyze the stability of the lower comparison system. To circumvent such a difficulty, we instead study an error system <cit.> by subtracting the lower comparison system from the lower-upper comparison system Q_k + 1^L - Q_k + 1^LU = (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q_k^L - Q^* - D} )_: = A_Q_k^L (Q_k^L - Q_k^LU ) + αγ DPΠ _Γ _Q^* Q^* (Γ _Q_k^L - Q^* - Γ _Q^* )_: = B_Q_k^L (Q_k^LU - Q^* ) Here, the stochastic noise α w_k is canceled out in the error system. Matrices (A_Q_k^L, B_Q_k^L) switch according to the external signal Q_k^L, and Q_k^LU-Q^* can be seen as an external disturbance. The key insight is as follows: if we can prove the stability of the error system, i.e., Q_k^L-Q_k^LU→ 0 as k→∞, then since Q_k^LU→ Q^* as k →∞, we have Q_k^L → Q^* as well. Keeping this picture in mind, we can establish the following bound on the expected error 𝔼[Q_k^L - Q^* _∞ ]. For all k ≥ 0, we have 𝔼[Q_k^L - Q^* _∞ ] ≤ 9 d_max | S× A× B|α ^1/2/d_min^3/2 (1 - γ )^5/2 + 2| S× A× B|^3/2/1 - γρ ^k + 4αγ d_max | S× A× B|^2/3/1 - γkρ ^k - 1 Taking norm on the error system in (<ref>), we get Q_k + 1^L - Q_k + 1^LU_∞≤ A_Q_k^L _∞ Q_k^L - Q_k^LU_∞ + B_Q_k^L _∞ Q_k^LU - Q^*_∞ ≤ ρQ_k^L - Q_k^LU_∞ + 2αγ d_maxQ_k^LU - Q^*_∞ where the second inequality is due to <ref> and the definition of B_Q_k^L in (<ref>). Taking the expectation on both sides of the last inequality and combining the last inequality with that in <ref> yield 𝔼[ Q_i + 1^L - Q_i + 1^LU_∞ ] ≤ ρ𝔼[ Q_i^L - Q_i^LU_∞] + 2αγ d_max3| S× A× B|α ^1/2/d_min^1/2 (1 - γ )^3/2 + 2αγ d_maxQ_0^LU - Q^*_2 | S× A× B|ρ ^i for all i≥ 0. Summing the inequality from i=0 to i=k and letting Q_0^LU = Q_0^L = Q_0 lead to 𝔼[ Q_k^L - Q_k^LU_∞ ] ≤6γ d_max | S× A× B|α ^1/2/d_min^3/2 (1 - γ )^5/2 + kρ ^k - 1 2αγ d_maxQ_0 - Q^*_2 | S× A× B|. Using Q_0 - Q^*_2 ≤ | S× A× B|^1/2Q_0 - Q^*_∞≤ | S× A× B|^1/22/1 - γ further leads to 𝔼[ Q_k^L - Q_k^LU_∞ ] ≤ 6γ d_max | S× A× B|α ^1/2/d_min^3/2 (1 - γ )^5/2 + kρ ^k - 1 4 αγ d_max| S× A× B|^2/3/1 - γ On the other hand, using the triangle inequality leads to 𝔼[Q_k^L - Q^* _∞ ] = 𝔼[Q_k^L - Q_k^LU + Q_k^LU - Q^* _∞ ] ≤𝔼[Q_k^LU - Q^* _∞ ] + 𝔼[Q_k^L - Q_k^LU_∞ ] Combining (<ref>) with that in <ref> leads to 𝔼[ Q_k^L - Q^*_∞ ] ≤ 𝔼[ Q_k^LU - Q^*_∞ ] + 𝔼[Q_k^L - Q_k^LU_∞ ] ≤ 3α ^1/2 | S× A× B|/d_min^1/2 (1 - γ )^3/2 + 2| S× A× B|^3/2/1 - γρ ^k + 𝔼[Q_k^L - Q_k^LU_∞ ]. Moreover, combining the above inequality with (<ref>) yields the desired conclusion. Note that the first term in (<ref>) is the constant error due to the constant step-size, which is scaled according to α∈ (0,1). The second term in (<ref>) is due to the gap between lower comparison system and original system, and the third term in (<ref>) is due to the gap between upper comparison system and original system. 
The second term O(ρ^k) exponentially decays, and the third term O(k ρ^k-1) also exponentially decays while the speed is slower than the second term due to the additional linearly increasing factor. The upper bound in (<ref>) can be converted to looser but more interpretable forms as follows. For any k ≥ 0, we have 𝔼[Q_k^L - Q^*_∞ ] ≤ 9 d_max | S× A× B|α ^1/2/d_min^3/2 (1 - γ )^5/2 + 2| S× A× B|^3/2/1 - γρ ^k + 4αγ d_max | S× A× B|^2/3/1 - γ - 2/ln (ρ )ρ ^ - 1/ln (ρ ) - 1ρ ^k/2 and 𝔼[Q_k^L - Q^* _∞ ] ≤ 9 d_max | S× A× B|α ^1/2/d_min^3/2 (1 - γ )^5/2 + 2| S× A× B|^3/2/1 - γρ ^k + 8γ d_max | S× A× B|^2/3/1 - γ1/d_min (1 - γ )ρ ^k/2 - 1 In (<ref>), we focus on the term kρ ^k - 1 = kρ ^k/2 + k/2 - 1 = kρ ^k/2 - 1ρ ^k/2 Let f(x) = xρ ^x/2 = xρ ^x/2. Checking the first-order optimality condition df(x)/dx = d/dxxρ ^x/2 = ρ ^x/2 + x1/2ρ ^x/2ln (ρ ) = 0 it follows that its maximum point is x = - 2/ln (ρ ), and the corresponding maximum value is f( - 2/ln (ρ )) = - 2/ln (ρ )ρ ^ - 1/ln (ρ ) Therefore, we have the bounds kρ ^k - 1 = kρ ^k/2ρ ^ - 1ρ ^k/2≤ - 2/ln (ρ )ρ ^ - 1/ln (ρ ) - 1ρ ^k/2 Combining this bound with (<ref>), one gets the first bound in (<ref>). To obtain the second inequality in (<ref>), we use the relation 1 - 1/x≤ln x ≤ x - 1,∀ x > 0 to obtain 1/ln (ρ ^ - 1 )ρ ^1/ln (ρ ^ - 1 )≤ 1/1 - 1/ρ ^ - 1ρ ^ρ ^ - 1 - 1 ≤ 1/α d_min (1 - γ )ρ ^1/1 - α d_min (1 - γ ) - 1 ≤ 1/α d_min (1 - γ ) where the last inequality uses α∈ (0,1) in <ref>. Combining the above bound with (<ref>), (<ref>) follows. This completes the proof. §.§ Upper comparison system Until now, we have established a finite-time analysis for the lower comparison system. In a similar manner, one can also offer a finite-time analysis for the upper comparison system. Due to the symmetrical nature of the upper comparison system in relation to the lower comparison system, we will omit the comprehensive derivation process for the sake of brevity in this presentation. In particular, for the upper comparison system (<ref>), let us define the following system: (Q_k + 1^UL - Q^* ) = (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q^* - D} )_: = A(Q_k^UL - Q^* ) + α w_k , which is a stochastic linear system, and lower bounds the upper comparison system. For this reason, let us call it a upper-lower comparison system. We can prove that the trajectory of this upper-lower comparison system upper bounds that of the upper comparison system in (<ref>). Suppose Q_0^U- Q^* ≥ Q_0^UL - Q^*, where ≥ is used as the element-wise inequality. Then, Q_k^U- Q^*≥ Q_k^UL-Q^*, for all k≥0. The proof is done by an induction argument. Suppose the result holds for some k ≥ 0. Then, (Q_k + 1^U - Q^* ) = (I + α{γ DPΠ _Γ _Q^* (Q_k^U - Q^* )Γ _Q^* - D} )(Q_k^U - Q^* ) + α w_k ≥ (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q^* - D} )(Q_k^U - Q^* ) + α w_k ≥ (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q^* - D} )(Q_k^UL - Q^* ) + α w_k = Q_k + 1^UL - Q^* where the second inequality is due to the hypothesis Q_k^U- Q^*≥ Q_k^UL-Q^* and the fact that A_Q^* is a positive matrix (all elements are nonnegative). The proof is completed by induction. Defining x_k := Q_k^UL - Q^* and A:= (I + α{γ DPΠ _Γ _Q^* Q^* Γ _Q^* - D} ), (<ref>) can be represented as the stochastic linear system (<ref>). Its finite-time bound is identical to that in <ref>. 
Moreover, subtracting the upper comparison system from the upper-lower comparison system leads to Q_k + 1^U - Q_k + 1^UL = ( I + α{γ DPΠ _Γ _Q^* (Q_k^U - Q^* )Γ _Q^* - D})_: = A_Q_k^U (Q_k^U - Q_k^UL ) + αγ DP(Π _Γ _Q^* (Q_k^U - Q^* ) - Π _Γ _Q^* Q^* )Γ _Q^* _: = B_Q_k^U (Q_k^UL - Q^* ) Using this system and following similar lines as in the lower comparison system, we can establish a finite-time error bound on the upper system's error. For any k ≥ 0, we have 𝔼[Q_k^U - Q^* _∞ ] ≤ 9 d_max | S× A× B|α ^1/2/d_min^3/2 (1 - γ )^5/2 + 2| S× A× B|^3/2/1 - γρ ^k + 8γ d_max | S× A× B|^2/3/1 - γ1/d_min (1 - γ )ρ ^k/2 - 1 Taking norm on the error system in (<ref>), we get Q_k + 1^U - Q_k + 1^UL_∞≤ A_Q_k^U _∞ Q_k^U - Q_k^UL_∞ + B_Q_k^U _∞ Q_k^UL - Q^*_∞ ≤ ρQ_k^U - Q_k^UL_∞ + 2αγ d_maxQ_k^UL - Q^*_∞ where the second inequality is due to <ref> and the definition of B_Q_k^U in (<ref>). The remaining parts of the proof are essentially identical to the proof of <ref>, and hence, they are omitted here for brevity. Similarly to <ref>, the upper bound in (<ref>) can be converted to looser but more interpretable forms as follows. For any k ≥ 0, we have 𝔼[Q_k^U - Q^*_∞ ] ≤ 9 d_max | S× A× B|α ^1/2/d_min^3/2 (1 - γ )^5/2 + 2| S× A× B|^3/2/1 - γρ ^k + 4αγ d_max | S× A× B|^2/3/1 - γ - 2/ln (ρ )ρ ^ - 1/ln (ρ ) - 1ρ ^k/2 and 𝔼[Q_k^U - Q^* _∞ ] ≤ 9 d_max | S× A× B|α ^1/2/d_min^3/2 (1 - γ )^5/2 + 2| S× A× B|^3/2/1 - γρ ^k + 8γ d_max | S× A× B|^2/3/1 - γ1/d_min (1 - γ )ρ ^k/2 - 1 Now, <ref> and <ref> offer finite-time error bounds for the upper and lower comparison systems, respectively. By merging these bounds, we can deduce an upper bound on the original switching system (<ref>). For any k ≥ 0, we have 𝔼[Q_k - Q^* _2 ] ≤27d_max | S× A× B|α ^1/2/d_min^3/2 (1 - γ )^5/2 + 6| S× A× B|^3/2/1 - γρ ^k + 24γ d_max | S× A× B|^2/3/1 - γ3/d_min (1 - γ )ρ ^k/2 - 1 We have 𝔼[Q_k - Q^* _2 ] = 𝔼[Q_k - Q_k^L + Q_k^L - Q^* _2 ] ≤ 𝔼[Q_k^L - Q^* _2 ] + 𝔼[Q_k - Q_k^L _2 ] ≤ 𝔼[Q_k^L - Q^* _2 ] + 𝔼[Q_k^U - Q_k^L _2 ] ≤ 𝔼[Q_k^L - Q^* _2 ] + 𝔼[Q_k^U - Q^* + Q^* - Q_k^L _2 ] ≤ 𝔼[Q_k^L - Q^* _2 ] + 𝔼[Q_k^U - Q^* _2 ] + 𝔼[Q^* - Q_k^L _2 ] = 2𝔼[Q_k^L - Q^* _2 ] + 𝔼[Q_k^U - Q^* _2 ] where the first and fourth inequalities come from the triangle inequality, and the second is due to the fact that Q_k^U - Q_k^L ≥ Q_k - Q_k^L ≥ 0. Combining the last inequality with <ref> and <ref>, one gets the desired conclusion. § CONCLUSION This paper has investigated the finite-time analysis of the minimax Q-learning algorithm applied to two-player zero-sum Markov games. Additionally, we have established a finite-time analysis of the associated Q-value iteration method. To conduct our analysis, we employed switching system models for both minimax Q-learning and Q-value iteration. We anticipate that this approach provides deeper insights into minimax Q-learning and facilitates a more straightforward and insightful convergence analysis. Furthermore, these additional insights hold the potential to uncover new connections and foster collaboration between concepts in the domains of control theory and reinforcement learning communities. IEEEtran § TECHNICAL LEMMAS Let us consider the stochastic linear system (<ref>). The noise w_k has the zero mean conditioned on Q_k, and is bounded. These properties are formally proved in the following lemma. We have * 𝔼[w_k] = 0; * 𝔼[w_k _∞ ] ≤√(W_max); * 𝔼[w_k _2 ] ≤√(W_max); * 𝔼[w_k^T w_k ] ≤9/(1 - γ )^2 = :W_max. for all k≥ 0. For the first statement, we take the conditional expectation on (<ref>) to have 𝔼[w_k |x_k ] = 0. 
Taking the total expectation again with the law of total expectation leads to the first conclusion. Moreover, the conditional expectation, 𝔼[w_k^T w_k |Q_k ], is bounded as 𝔼[w_k^T w_k |Q_k ] = 𝔼[w_k _2^2 |Q_k ] = 𝔼[ . (e_a_k ⊗ e_b_k ⊗ e_s_k )δ _k - (DR + γ DPΠ _Γ _Q_k Q_k Γ _Q_k Q_k - DQ_k )_2^2 |Q_k ] = 𝔼[δ _k^2 |Q_k ] - DR + γ DPΠ _Γ _Q_k Q_k Γ _Q_k Q_k - DQ_k _2^2 ≤ 𝔼[δ _k^2 |Q_k ] = 𝔼[r_k^2 |Q_k ] + 𝔼[2r_k γ (e_s_k ' )^T Π _Γ _Q_k Q_k Γ _Q_k Q_k |Q_k ] + 𝔼[ - 2r_k (e_a_k⊗ e_b_k⊗ e_s_k )^T Q_k |Q_k ] + 𝔼[ - 2γ (e_s_k ' )^T Π _Γ _Q_k Q_k Γ _Q_k Q_k (e_a_k ⊗ e_b_k ⊗ e_s_k )^T Q_k |Q_k ] + 𝔼[γ (e_s_k ' )^T Π _Γ _Q_k Q_k Γ_Q_k Q_k γ (e_s_k ' )^T Π _Γ _Q_k Q_k Γ _Q_k Q_k |Q_k ] + 𝔼[(e_a_k⊗ e_s_k )^T Q_k (e_a_k⊗ e_s_k )^T Q_k |Q_k ] ≤ 1 + 2γ E[|r_k ||(e_s_k ' )^T Π _Γ _Q_k Q_k Γ _Q_k Q_k ||Q_k ] + 2𝔼[|r_k ||(e_a_k⊗ e_b_k⊗ e_s_k )^T Q_k ||Q_k ] + 2γ𝔼[|(e_s_k ' )^T Π _Γ _Q_k Q_k Γ _Q_k Q_k ||(e_a_k ⊗ e_b_k ⊗ e_s_k )^T Q_k ||Q_k ] + γ ^2 𝔼[|(e_s_k ' )^T Π _Γ _Q_k Q_k Γ _Q_k Q_k ||(e_s_k ' )^T Π _Γ _Q_k Q_k Γ _Q_k Q_k ||Q_k ] + 𝔼[|(e_a_k⊗ e_b_k⊗ e_s_k )^T Q_k ||(e_a_k⊗ e_b_k⊗ e_s_k )^T Q_k ||Q_k ] ≤ 9/(1 - γ )^2 =:W_max, where δ_k is defined as δ _k : = r_k + γ (e_s_k' )^T Π _Γ _Q_k Q_k Γ _Q_k Q_k - (e_a_k ⊗ e_b_k ⊗ e_s_k )^T Q_k, and the last inequality comes from Assumptions <ref>-<ref>, and <ref>. Taking the total expectation, we have the fourth result. Next, taking the square root on both sides of 𝔼[w_k _2^2 ] ≤ W_max, one gets 𝔼[w_k _∞ ] ≤𝔼[w_k _2 ] ≤√(𝔼[w_k _2^2 ])≤√(W_max), where the first inequality comes from ·_∞≤·_2. This completes the proof. § PROOF OF <REF> The update of minimax Q-learning in <ref> can be written compactly as Q_k + 1 = Q_k + α (e_a_k ⊗ e_b_k ⊗ e_s_k )(r_k + γ (e_s_k ' )^T Π _Γ _Q_k Q_k Γ _Q_k Q_k - (e_a_k ⊗ e_b_k ⊗ e_s_k )^T Q_k ) Taking the infinity norm of both sides of (<ref>) with k=0 and using <ref>, we have Q_1 _∞≤ (1 - α )Q_0 _∞ + α (1 + γQ_0 _∞ ) ≤ (1 - α + αγ + α )max{1,Q_0 _∞} ≤ (1 + γ )max{1,Q_0 _∞}. For induction argument, assume Q_k _∞≤ (1 + γ + ⋯ + γ ^k )max{1,Q_0 _∞}. Then, taking the infinity norm of both sides of (<ref>) leads to Q_k + 1_∞≤ (1 - α )Q_k _∞ + α (|r_k | + γQ_k _∞ ) ≤ (1 - α )Q_k _∞ + α + γαQ_k _∞ ≤ (1 - α )(1 + γ + ⋯ + γ ^k )max{1,Q_0 _∞} + α + γα (1 + γ + ⋯ + γ ^k )max{1,Q_0 _∞} ≤ (1 - α )(1 + γ + ⋯ + γ ^k )max{1,Q_0 _∞} + α (1 + γ + ⋯ + γ ^k + 1 )max{1,Q_0 _∞} = (1 + γ + ⋯ + γ ^k + γ ^k + 1 )max{1,Q_0 _∞} where the second inequality is due to the boundednes of rewards in <ref>, the third inequality follows from the hypothesis, and the fourth inequality is due to max{1,Q_0 _∞}≥ 1. By induction, we have Q_k _∞≤ (1 + γ + ⋯ + γ ^k )max{1,Q_0 _∞}≤max{1,Q_0 _∞}/1 - γ, ∀ k ≥ 0, which completes the proof. § PROOF OF <REF> Let us consider the stochastic linear system (<ref>). We first investigate how the autocorrelation matrix, 𝔼[x_k x_k^T ], propagates over the time. In particular, the autocorrelation matrix is updated through the linear recursion 𝔼[x_k + 1 x_k + 1^T ] = A 𝔼[x_k x_k^T ]A^T + α ^2 W_k, where 𝔼[w_k w_k^T ] = W_k ≽ 0 is the covariance of the noise. Defining X_k := 𝔼[x_k x_k^T ], k ≥ 0, it is equivalently written as the matrix recursion X_k + 1 = AX_k A^T + α^2 W_k, ∀ k≥ 0 with X_0 := x_0x_0^T. From this observation, one has X_k = α ^2 ∑_i = 0^k - 1A^i W_k - i - 1 (A^T )^i + A^k X_0 (A^T )^k. 
Therefore, one can derive 𝔼[Q_k^LU - Q^*_2^2 ] = 𝔼[(Q_k^LU - Q^* )^T (Q_k^LU - Q^* )] = 𝔼[ tr((Q_k^LU - Q^* )^T (Q_k^LU - Q^* ))] = 𝔼[ tr((Q_k^LU - Q^* )(Q_k^LU - Q^* )^T )] = 𝔼[ tr(X_k )] ≤ n λ_max (X_k ) ≤ n α ^2 ∑_i = 0^k - 1λ_max (A^i W_k - i - 1 (A^T )^i ) + n λ_max (A^k X_0 (A^T )^k ) ≤ n α ^2 sup _j ≥ 0λ_max (W_j )∑_i = 0^k - 1λ_max (A^i (A^T )^i ) + n λ_max (X_0 )λ_max (A^k (A^T )^k ) = n α ^2 sup _j ≥ 0λ_max (W_j ) ∑_i = 0^k - 1A^i _2^2 + n λ_max (X_0 )A^k _2^2, where the first inequality comes from the fact that since X_k ≽ 0, the diagonal elements are nonnegative, and we have tr(X_k ) ≤ nλ_max (X_k ). Moreover, the second inequality is due to A^i W_k-i-1(A^T )^i ≽ 0 and A^k X_0 (A^T )^k ≽ 0. Next, the maximum eigenvalue of W_k is bounded as λ_max (W_k) ≤ W_max for all k ≥ 0, where W_max > 0 is given in <ref>. This is because λ_max (W_k) ≤ tr(W_k) = tr(𝔼[w_k w_k^T ]) = 𝔼[ tr(w_k w_k^T )] = 𝔼[w_k^T w_k ] ≤ W_max, where the second inequality comes from <ref>, and the first equality uses the fact that the trace is a linear function. Therefore, one gets 𝔼[Q_k^LU - Q^*_2^2 ]≤ α ^2 W_max n^2 ∑_i = 0^k - 1A^i _∞ ^2 + n^2λ_max (X_0 )A^k _∞ ^2 ≤ α ^2 W_max n^2 ∑_i = 0^k - 1ρ ^2i + n^2 λ_max (X_0 )ρ ^2k ≤ α ^2 W_max n^2 lim_k →∞∑_i = 0^k - 1ρ ^2i + n^2 λ_max (X_0 )ρ ^2k ≤ α ^2 W_max n^2/1 - ρ ^2 + n^2 λ_max (X_0 )ρ ^2k ≤ α ^2 W_max n^2/1 - ρ + n^2 λ_max (X_0 )ρ ^2k ≤ α W_max n^2 /d_min (1 - γ ) + n^2 x_0 _2^2 ρ ^2k, where the first inequality is due to ·_2 ≤√(n)·_∞, the second inequality is due to <ref>, the third and fourth inequalities come from ρ∈ (0,1), the last inequality is due to λ_max (X_0) ≤ tr(X_0 ) = tr(x_0 x_0^T ) = x_0 _2^2 and ρ = 1-α d_min(1-γ). Taking the square root on both side of the last inequality, using the subadditivity of the square root function, the Jensen inequality, and the concavity of the square root function, we have the desired conclusion.
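As a closing numerical illustration of the bound manipulations above, the following sketch checks, for illustrative values of α, γ, and d_min (not tied to any experiment in the paper), that the O(kρ^k-1) factor is indeed dominated by the ρ^k/2 envelope obtained from maximizing f(x) = xρ^x/2, and it displays the O(√α) scaling of the constant bias term.

import numpy as np

# Illustrative parameters only: step-size, discount factor, and minimum occupation frequency.
alpha, gamma, d_min = 0.05, 0.9, 0.02
rho = 1.0 - alpha * d_min * (1.0 - gamma)             # exponential decay rate, rho in (0, 1)

k = np.arange(0, 5001, dtype=float)
lhs = k * rho ** (k - 1)                              # the k * rho^(k-1) factor appearing in the bound
envelope = (-2.0 / np.log(rho)) * rho ** (-1.0 / np.log(rho) - 1.0) * rho ** (k / 2)
assert np.all(lhs <= envelope + 1e-9)                 # k rho^(k-1) <= (-2/ln rho) rho^(-1/ln rho - 1) rho^(k/2)

# The constant bias floor of the finite-time bound scales as O(sqrt(alpha)):
for a in (0.1, 0.01, 0.001):
    print(f"alpha = {a:6.3f}  ->  sqrt(alpha) = {np.sqrt(a):.4f}")

Halving the error floor thus requires roughly a four-times smaller step-size, the usual bias-variance trade-off of constant step-size stochastic approximation.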
http://arxiv.org/abs/2306.07584v1
20230613071843
Complexity of fermionic states
[ "Tuomas I. Vanhala", "Teemu Ojanen" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech", "cond-mat.str-el" ]
Computational Physics Laboratory, Physics Unit, Faculty of Engineering and Natural Sciences, Tampere University, P.O. Box 692, FI-33014 Tampere, Finland Helsinki Institute of Physics P.O. Box 64, FI-00014, Finland Email: [email protected] Computational Physics Laboratory, Physics Unit, Faculty of Engineering and Natural Sciences, Tampere University, P.O. Box 692, FI-33014 Tampere, Finland Helsinki Institute of Physics P.O. Box 64, FI-00014, Finland How much information a fermionic state contains? To address this fundamental question, we define the complexity of a particle-conserving many-fermion state as the entropy of its Fock space probability distribution, minimized over all Fock representations. The complexity characterizes the minimum computational and physical resources required to represent the state and store the information obtained from it by measurements. Alternatively, the complexity can be regarded a Fock space entanglement measure describing the intrinsic many-particle entanglement in the state. We establish universal lower bound for the complexity in terms of the single-particle correlation matrix eigenvalues and formulate a finite-size complexity scaling hypothesis. Remarkably, numerical studies on interacting lattice models suggest a general model-independent complexity hierarchy: ground states are exponentially less complex than average excited states which, in turn, are exponentially less complex than generic states in the Fock space. Our work has fundamental implications on how much information is encoded in fermionic states. Complexity of fermionic states Teemu Ojanen July 31, 2023 ============================== § INTRODUCTION The complexity of an object or a process quantifies how something can be generated from simple building blocks in an optimal way. For example, the computational complexity of a mathematical operation is defined in terms of the number of elementary operations required in its execution, or the complexity of a unitary operation in a quantum computer is defined as a the minimum number of elementary quantum gate operations required in its generation.<cit.> Various notions of complexity in quantum systems and their relation to quantum information processing has been actively studied recently.<cit.> In this work, we introduce the complexity of N-particle fermionic states. The complexity quantifies how resource intensive it is to express a given state as a linear combination of Slater states (fermionic product states), which are the building blocks of the fermionic Fock space. If the complexity of a state is 𝒞, to express this state in any Fock basis, one needs to specify at least 𝒞 nonzero coefficients. We show that the complexity provides a bound for how much the information in the state can be compressed, thus determining the minimal physical and computational resources required to represent the state. Sophisticated numerical methods<cit.> have been developed to mitigate the exponential complexity of correlated systems, however, only a genuine quantum simulation<cit.> can be expected to incorporate it in general. Our results provide quantitative estimate for the complexity of distinct classes of states, outlining required resources for the quantum simulation targeting fermionic states.<cit.> Besides its computational and information-theoretic implications, the complexity constitutes an entanglement measure in the Fock space. 
In contrast to widely studied partition entanglement measures,<cit.> the complexity describes intrinsic partition-independent properties of N-particle states. It sharply distinguishes between the states of interacting and noninteracting Hamiltonians: all non-degenerate eigenstates of noninteracting systems can be represented as a single Slater state, thus having a trivial complexity. The central finding in our work is that the complexity for distinct classes of states can be faithfully estimated from the correlation entropy S_c, defined essentially as the entanglement entropy between a single particle and the rest of the system. This quantity, exhibiting intensive size scaling, is calculated from the eigenvalues of the single-particle correlation matrix, and thus easily available in many numerical and theoretical methods. Specifically, i) we establish a universal lower bound for the complexity S_P≥ S_c, where S_P is the logarithmic complexity 𝒞=e^S_P ii) we introduce a model-independent finite size complexity scaling hypothesis S_P∼α N_p S_c for homogeneous N_p-particle states with constant filling fraction iii) numerical studies of interacting lattice models suggest that the coefficient α characterizes universal features of distinct classes of states, implying the exponential complexity hierarchy summarized in Fig. 1. In strongly coupled lattice models, the ground states are exponentially less complex than average excited states, which in turn are exponentially less complex than the generic states in the Fock space. Due to the model-independent nature of the scaling hypothesis, we postulate that the same complexity scaling is applicable for a broad class of local Hamiltonians. Our work has fundamental implications on how much information is contained in fermionic states. § FERMIONIC COMPLEXITY We begin by defining the complexity for an arbitrary fermionic state |Φ⟩ in the Fock space of N_p identical particles and N_o available single-particle orbitals. This state can be expanded as |Φ⟩=∑_k=1^k_max a_{n_B_i}_k|{n_B_i}_k⟩ where B_i denotes orbital i in the single-particle basis ℬ, and {n_B_i}_k labels the distinct sets of single particle occupation numbers n_B_i=0,1. The N_p-particle Slater basis states are defined as |{n_B_i}_k⟩=ĉ^†_B_jN_p…ĉ^†_B_j2ĉ^†_B_j1|0⟩, where the product of fermion creation operators contains the populated orbitals in the set {n_B_i}_k. Each Slater state is multiplied with a nonzero complex probability amplitude a_{n_B_i}_k≠ 0. Depending on |Φ⟩ and the employed single-particle orbitals ℬ, the number of terms k_max varies between 1 and the Fock space dimension Q=N_oN_p. We now consider the 2nd Renyi entropy of the probability distribution of the Slater states S_P_ℬ=-ln∑_k P_k^2 , where P_k=|a_{n_B_i}_k|^2 denotes the probability weight of |{n_B_i}_k⟩ in |Φ⟩. To eliminate the dependence on ℬ, we define the logarithmic complexity as S_P=min_ℬS_P_ℬ, where the minimization is carried over all possible single-particle bases ℬ. Finally, we define the complexity of the state |Φ⟩ as 𝒞=e^S_P. In practical calculations, carrying out the minimization in Eq. (<ref>) is highly nontrivial task. Remarkably, as seen below, for the eigenstates of the studied lattice Hamiltonians, the optimal basis is excellently approximated by the correlation matrix eigenbasis and the position basis at weak and strong coupling. 
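As an aside, in any fixed single-particle basis ℬ the entropy S_P_ℬ is directly computable from the Slater amplitudes, and e^{S_P_ℬ} then gives an upper bound on the complexity 𝒞, since the latter involves a minimization over bases. A minimal sketch of the bookkeeping, for a hypothetical two-particle state on four orbitals, reads as follows.

```python
import numpy as np
from itertools import combinations

# Hypothetical N_p = 2 particle state on N_o = 4 orbitals, given in a fixed basis B.
N_o, N_p = 4, 2
slaters = list(combinations(range(N_o), N_p))        # labels {n_Bi}_k of the Slater basis
amps = {(0, 1): 1.0, (2, 3): 0.8, (0, 2): 0.5j}      # nonzero amplitudes a_{n_Bi}_k (unnormalized)

a = np.array([amps.get(s, 0.0) for s in slaters], dtype=complex)
a /= np.linalg.norm(a)                               # normalize the state

P = np.abs(a)**2                                     # Slater-state probabilities P_k
S_PB = -np.log(np.sum(P**2))                         # 2nd Renyi entropy S_{P_B} in the basis B
print(f"S_P_B = {S_PB:.4f}, e^(S_P_B) = {np.exp(S_PB):.4f}")
# e^(S_P_B) >= C, since the complexity minimizes S_P_B over all single-particle bases.
```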
The complexity, as defined above, has two illuminating interpretations: i) The complexity of a state determines its maximum compression in the Fock space, characterizing the number of terms in the most compact representation. In addition, by employing fundamental results in classical and quantum information theory<cit.>, it can be shown that the maximum compression of the quantum information obtained by measurements is determined by the complexity. These aspects are discussed in Sec. A of Methods. Thus, complexity can be regarded as a basis-independent measure of N-particle information in a fermionic state, characterizing the minimum computational and physical resources to represent and store it. ii) The complexity of a state describes its intrinsic N-particle entanglement. Without entanglement, the state could be represented as a single Slater state. If the state has complexity 𝒞, the amount of entanglement corresponds to that in an equal superposition of 𝒞 Slater states. In contrast to the entanglement entropy and other partition-based measures, the complexity does not depend on arbitrary case-specific partition. Moreover, the complexity sharply distinguishes interacting and non-interacting systems since all non-degenerate eigenstates of quadratic Hamiltonians can be represented as a single Slater state with S_P=0, irrespective whether they obey the area-law<cit.>, the volume-law<cit.> or the critical entanglement entropy scaling. §.§ Complexity lower bound from natural orbitals A central role in the complexity is played by the single-particle correlation matrix, also known as the one-body reduced density matrix, C_ij=⟨Φ|ĉ^†_jĉ_i|Φ⟩, where ĉ^†_i, ĉ_j denote the fermionic creation and annihilation operators and indices i,j ∈ 1 … N_o label all possible single-particle orbitals in a fixed basis. If we have a system with N_o available orbitals, the correlation matrix has dimension N_o× N_o. Due to Fermi statistics, the correlation matrix eigenvalues satisfy 0≤λ_i≤1 and ∑_iλ_i=N_p. Thus, they can be interpreted as single-orbital occupation probabilities in the eigenbasis of C_ij. The eigestates of C_ij are commonly referred as the natural orbitals<cit.>, which have found modern applications in analyzing strongly correlated many-body systems.<cit.> We can define one-particle correlation entropy in the state |Φ⟩ as S_c^p=- ln∑_iλ_i^2/N_p, which is a measure of how the occupation probabilities of the natural orbitals collectively differ from 1 or 0. As discussed in Sec. B in Methods, up to a trivial constant, S_c^p is equal to the Renyi entanglement entropy between a single particle and the rest of the system, clarifying the role of S_P as a novel entanglement measure. By interchanging the role of particles and holes, we define single-hole occupation probabilities λ̃_i=1-λ_i, which satisfy 0≤λ̃_i≤1 and ∑_iλ̃_i=N_o-N_p. We then define a single-hole correlation entropy as S_c^h=- ln∑_iλ̃_i^2/N_o-N_p, and the correlation entropy as the larger of the two S_c=max{ S_c^p, S_c^h}. In Sec. C in Methods we prove that, for arbitrary fermionic state |Φ⟩, the correlation entropy provides a lower bound for the logarithmic complexity S_P≥ S_c. This complexity bound is nontrivial: there exist states with nonzero logarithmic complexity for which the lower bound is saturated. A simple example is obtained by considering states where the number of particles N_p and available orbitals N_o satisfy nN_p≤ N_o with n≥ 2. 
Given a single-orbital basis, one can find at least n disjoint occupation number sets {n_B_i}_k where each occupied orbital with n_B_i=1 belongs precisely to one set. Forming a superposition of such occupation sets |ψ⟩=∑_k=1^n√(P_k)|{n_B_i}_k⟩, with ∑_k P_k=1, yields an example of a complexity bound saturating state. For these states the correlation matrix C is diagonal, and the natural occupations λ_i=C_ii=P_k for each of the N_p occupied orbitals in the set {n_B_i}_k. Thus, S_c=S_p=-ln∑_i^n P_i^2, and the lower bound in (<ref>) is saturated. The state |ψ⟩ can also be regarded as an n-orbital generalization of the Greenberger-Horne-Zeilinger state 1/√(2)(|↑↑↑…⟩+|↓↓↓…⟩).<cit.> These type of states, whose complexity do not scale with the total number of particles at fixed filling fraction ν=N_p/N_o, define the low-complexity category in Fig. 1. This category include, for example, eigenstates of impurity systems with a non-extensive number of scattering centers, such as the Kondo model. Despite a macroscopic reorganization of the Fermi sea, the eigenstates have only a few correlation matrix eigenvalues that differ from 0 or 1.<cit.> §.§ Complexity scaling The existence of the lower-bound (<ref>) saturating states suggests that the bound cannot be significantly improved without making additional assumptions of the states of interest. Eigenstates of local interacting Hamiltonians and other large-scale homogeneous states defined on a d-dimensional spatial lattice constitute a class of central importance. They define a family of states which can be studied as a function of the system size for a fixed filling fraction ν. How is the complexity of such states scaling as the system size grows? For a generic filling fraction ν≠ 0,1, the dimension of the Fock space of such states grows exponentially in N_p. Thus, in the leading order, we expect that the logarithmic complexity scales as S_P∼ N_p. However, the maximum value of the correlation entropy does not scale with the system size S_c≤max{-lnν, -ln(1-ν)}. This shows that S_c alone does not provide an accurate approximation for the complexity of these states. However, the role of S_c in Eq. (<ref>) suggests that it encodes some universal features of the complexity. Combining this idea with the exponential scaling in the system size, we postulate that the complexity of uniform states follows, in the leading order, the scaling form S_P∼α N_i S_c, where α>0 captures universal features of distinct classes of states. Here N_i is the number of particles N_i=N_p when ν≤1/2, and the number of holes N_i=N_o-N_p when ν>1/2. We illustrate this hypothesis for three paradigmatic examples: the Hubbard model of spinful fermions Ĥ=t∑_⟨ i, j⟩ ,σ(ĉ^†_iσĉ_jσ+h.c.)+U∑_⟨ i, j⟩,σn̂_iσn̂_iσ̅, the t-V model of spinless fermions Ĥ=∑_⟨ i, j⟩(tĉ^†_iĉ_j+ h.c.+Vn̂_in̂_j), and Haar-distributed states, which we call “generic states” as they represent uniformly distributed unit vectors in the Fock space (see Sec. D in Methods). We observe that, indeed, the value of α distinguishes different broad classes of states: * The generic states have α=α_g, where 1≤α_g≤ 2, depending on the filling fraction. The maximum α_g=2 is obtained at ν=1/2, while α_g→1 when ν→ 0 or ν→ 1. * For non-degenerate ground states, α=1/2 provides an excellent lower bound, which can become tight in various limits. * Average excited states have 1≲α<α_g when the interaction exceeds the bandwith. The difference in α, despite its innocent appearance, translates into an exponential difference in the complexity. 
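The saturation of the bound by these states is also easy to confirm numerically. In the sketch below the choices n = 3, N_p = 2, N_o = 6 and the probabilities P_k are hypothetical; the code builds |ψ⟩ = ∑_k √(P_k) |{n_B_i}_k⟩, evaluates the correlation matrix with the standard Jordan–Wigner sign convention, and checks that S_c coincides with the Renyi entropy of the Fock-space probabilities in this basis.

```python
import numpy as np
from itertools import combinations

# Hypothetical example: n = 3 disjoint occupation sets, N_p = 2 particles, N_o = 6 orbitals.
N_o, N_p = 6, 2
sets = [(0, 1), (2, 3), (4, 5)]                      # disjoint occupation sets {n_Bi}_k
P = np.array([0.5, 0.3, 0.2])                        # probabilities P_k, summing to 1

slaters = list(combinations(range(N_o), N_p))
index = {s: k for k, s in enumerate(slaters)}
psi = np.zeros(len(slaters))
for s, p in zip(sets, P):
    psi[index[s]] = np.sqrt(p)                       # |psi> = sum_k sqrt(P_k) |{n_Bi}_k>

def corr_matrix(psi, slaters, index, N_o):
    """One-body density matrix C[i, j] = <psi| c_i^dag c_j |psi> (Jordan-Wigner signs)."""
    C = np.zeros((N_o, N_o))
    for k, s in enumerate(slaters):
        occ = set(s)
        for j in s:                                  # annihilate orbital j
            sgn_j = (-1) ** sum(m < j for m in s)
            rest = occ - {j}
            for i in range(N_o):                     # create orbital i
                if i in rest:
                    continue
                sgn_i = (-1) ** sum(m < i for m in rest)
                new = tuple(sorted(rest | {i}))
                C[i, j] += sgn_i * sgn_j * psi[index[new]] * psi[k]
    return C

C = corr_matrix(psi, slaters, index, N_o)
lam = np.linalg.eigvalsh(C)                          # natural occupations
S_c_p = -np.log(np.sum(lam**2) / N_p)
S_c_h = -np.log(np.sum((1 - lam)**2) / (N_o - N_p))
S_c = max(S_c_p, S_c_h)
S_PB = -np.log(np.sum(psi**4))                       # Renyi-2 entropy in this Fock basis

print(np.allclose(S_c, S_PB), np.allclose(S_PB, -np.log(np.sum(P**2))))   # both True
```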
The complexity of generic states provides a baseline reference to compare other types of states. The analytical expression for α_g is derived in the Sec. D in Methods. The generic states saturate the maximum value of the correlation entropy S_c and the maximal leading order complexity allowed by the dimensionality of the Fock space. As seen below, the eigenstates of local Hamiltonians allow exponential compression compared to the generic states. In Fig. <ref> we illustrate the ground state complexity for the Hubbard model for ν=1/2 and the t-V for ν=1/3. The minimizing basis, found by the conjugate gradient optimization (see Sec. D in Methods for details), is well approximated by the momentum states at weak coupling. In this case, the momentum basis is a natural orbital basis, however, the natural orbitals are not unique due to degeneracies in the natural occupations. For the t-V model, the natural orbitals are essentially the optimal basis also at strong coupling, while for the Hubbard chain, the optimal basis at strong coupling coincides with the position orbitals. The ground state complexity for both models is seen to satisfy S_P≳1/2N_pS_c, where the lower bound appears tight for small V and large U. In the Hubbard chain, the correlation entropy saturates the maximum value S_c=ln 2 at strong coupling. Thus the logarithmic complexity of a generic state at half filling, S_P=2N_pln 2, is four times larger than that of the ground state of the Hubbard chain S_P ≈1/2N_pln 2 at strong coupling. Furthermore, the size scaling suggests that α converges reasonably close to α=1/2 for all coupling strengths. This is observed for both models at fillings for which the ground state is non-degenerate. For the t-V model at half filling, the ground state corresponds to two near-degenerate charge density wave configurations. In this case, we observe that the complexity of each charge-density wave state is well-captured by S_P∼1/2N_pS_c. In Fig. <ref> we analyze the complexity of excited states, for the same systems as in Fig. <ref>, by performing a full diagonalization in the parity and center-of-mass momentum sector which contains the ground state. For the excited states, finding numerically the minimizing basis becomes more challenging. As seen in Figs.<ref> a)-b), the numerical optimization does not find the true minimum for some high-complexity states. However, in the vast majority of cases, the optimization converges very close to the minimum value over natural, momentum and position orbitals. This indicates that, like for the ground states, one obtains an accurate approximation for the complexity by analyzing only these orbitals, especially when considering averages over many states. For both models, the average complexity of the excited states grows as a function of interaction and saturates to a constant at U/t,V/t≈ 4. At strong coupling, the average complexity is substantially higher than for the ground states. While the full diagonalization is restricted to modest system sizes, a fact one should be conscious of in extrapolating the results, Fig. 3 g) and f) imply that the ratio S_P/(N_pS_c) for the average excited states converge to a constant α<α_g. The specific value of α depends on the coupling strength and filling, but the average excited states remain, even around the midspectrum, significantly less complex than generic states. 
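The scaling check described above can be reproduced in miniature. The following sketch is a plain exact-diagonalization computation for a small open t-V chain; the original computations use the QuSpin package, symmetry sectors and periodic chains, so the chain length L = 12, the filling 1/3, the coupling V/t = 8 and the open boundary conditions here are hypothetical choices. It evaluates the correlation entropy S_c together with the Renyi-2 entropy of the Fock probabilities in the position and natural-orbital bases, which, as discussed above, serve as proxies for the optimal basis; the printed values can then be compared with the universal bound S_c and the scaling estimate N_p S_c/2.

```python
import numpy as np
from itertools import combinations

# Hypothetical small system: open spinless t-V chain at filling 1/3, strong coupling.
L, N, t, V = 12, 4, 1.0, 8.0
states = list(combinations(range(L), N))              # position-basis Slater states
index = {s: k for k, s in enumerate(states)}
Q = len(states)

H = np.zeros((Q, Q))
for k, s in enumerate(states):
    occ = set(s)
    H[k, k] += V * sum((i in occ) and (i + 1 in occ) for i in range(L - 1))   # V n_i n_{i+1}
    for i in range(L - 1):                            # t (c_i^dag c_{i+1} + h.c.); JW sign is +1
        for a, b in ((i, i + 1), (i + 1, i)):         # move a particle from b to a
            if b in occ and a not in occ:
                H[index[tuple(sorted((occ - {b}) | {a}))], k] += t

psi = np.linalg.eigh(H)[1][:, 0]                      # ground state (real)

C = np.zeros((L, L))                                  # one-body density matrix <c_i^dag c_j>
for k, s in enumerate(states):
    occ = set(s)
    for j in s:
        sgn_j = (-1) ** sum(m < j for m in s)
        rest = occ - {j}
        for i in range(L):
            if i not in rest:
                sgn_i = (-1) ** sum(m < i for m in rest)
                C[i, j] += sgn_i * sgn_j * psi[index[tuple(sorted(rest | {i}))]] * psi[k]

lam, U = np.linalg.eigh(C)                            # natural occupations / natural orbitals
S_c = max(-np.log(np.sum(lam**2) / N), -np.log(np.sum((1 - lam)**2) / (L - N)))

# Renyi-2 entropy of the Fock probabilities in the position and natural-orbital bases;
# the amplitude in a rotated Slater basis is a determinant of single-orbital overlaps.
S_pos = -np.log(np.sum(psi**4))
a_nat = np.array([sum(psi[k] * np.linalg.det(U[np.ix_(s, b)]) for k, s in enumerate(states))
                  for b in states])
assert np.isclose(np.sum(a_nat**2), 1.0)
S_nat = -np.log(np.sum(a_nat**4))

print(f"S_c = {S_c:.3f}   0.5*N*S_c = {0.5 * N * S_c:.3f}   "
      f"S_P(pos) = {S_pos:.3f}   S_P(nat) = {S_nat:.3f}")
```

For this size the Fock dimension is only 495, so the natural-orbital amplitudes can be obtained directly from overlap determinants; for larger chains one would instead use the optimization over orbital rotations described in the Methods.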
This behaviour is markedly different from the entanglement entropy, which exhibits identical leading order volume-law scaling for the midspectrum states of nonintegrable Hamiltonians and generic states. <cit.> § DISCUSSION In the above, we have seen how the complexity hierarchy summarized in Fig. 1 emerges. The single-Slater states, such as the eigenstates of quadratic Hamiltonians, have trivial complexity and are regarded as the fundamental building blocks of more complex states. For the low-complexity states, for which the complexity is not scaling with the system size when filling fraction is fixed, the complexity can be estimated from the universal lower bound S_c. As seen above, the ground state complexity is typically well captured by the scaling Ansatz (<ref>) with prefactor α=1/2. When the interaction exceeds the bandwidth, the complexity of average excited states follow (<ref>) with 1≲α<α_g, where the upper bound determines the complexity of generic states. The model-independent nature of the scaling hypothesis and the qualitative agreement of different models suggest that the above results are not sensitive to the specific form of the Hamiltonian, as long as some broad features, such as locality and large scale homogeneity, are satisfied. § CONCLUSION AND OUTLOOK We introduced the complexity of a fermionic state to quantify the amount of information in it. The complexity provides a bound to the quantum state compression by choosing an optimal Fock basis, determining the minimum computational and physical resources to represent states. We showed that, for distinct classes of states, the complexity can be estimated from the eigenvalues of the single-particle correlation matrix. Considering the rapidly increasing interest in fermionic quantum simulation and quantum information processing, our results open several topical avenues of research. Does the observed complexity scaling laws for ground states and excited states represent a fundamental limit in encoding information to the eigenstates of local Hamiltonians? Do the complexity scaling laws, as their model-independent form suggests, also hold for higher dimensional systems? How can the scaling laws for eigenstates be derived from general arguments? How does the notion of Fock complexity, as studied here, reflect the circuit complexity of concrete fermionic quantum simulation schemes?<cit.> To what extent the discovered complexity structure applies to bosonic states? Answers to these questions would provide fundamental new insight in many-body systems and their quantum information applications. § METHODS §.§ Complexity as a fundamental bound to quantum state compression Here we illustrate two aspects of the complexity: its role as a characteristic number of terms in the minimal Fock representation, and determining the minimal physical resources to store the quantum information extracted by measurements. §.§.§ Complexity as the characteristic number of terms in the minimal Fock representation The complexity of a state is connected to the characteristic number of terms which are required to span it in the optimal basis. Let {P_n} be the probabilities in the optimal Fock basis which determines the complexity and let's assume that the distribution is arranged in non-increasing order P_n_1≥ P_n_2 when n_1<n_2. How many terms are needed in the optimal basis to effectively span the state? Specifically, how large should ñ be to satisfy ∑_n=1^ñ P_n∼ 1? 
This question is important for the states with large complexity ñ,𝒞≫ 1 and the answer depends on the distribution: i) for sufficiently uniform distributions with a well-defined typical probability scale, the required number of terms is ñ∼𝒞 ii) for heavy-tailed distributions, the required number of terms can scale nonlinearly in the complexity ñ∼𝒞^β with β>1. Let's first study i) and consider a case where the probabilities have a characteristic order of magnitude P_n∼ P_0 when n≤ n', and are strongly suppressed for n>n'. This implies that P_0∼ 1/n' and 𝒞=1/∑_n P^2∼ n'. Thus, the complexity roughly coincides with the effective cutoff index n' and we can conclude that ∑_n=1^ñ P_n∼ 1 is achieved when ñ∼𝒞. When the distribution is strictly box distribution with constant probabilities P_0=1/n', the full probability is exactly recovered after 𝒞 terms ∑_n=1^𝒞 P_n= 1. In general, to recover the full probability for distributions with a finite tail above n=n', one might need to include a few multiples of 𝒞 terms. The linear scaling between ñ and 𝒞 reflects the typical expectation that entropy-like quantities scale as the logarithm of the total number of contributing states. In case ii), the distribution has a long tail, the probabilities do not have a well-defined scale, and the previous reasoning breaks down. For this type of distributions, a nonlinear scaling ñ∼𝒞^β with a model-specific β>1 becomes possible. Such behaviour can be observed, for example, for power-law distributions and the distributions of eigenstates of lattice Hamiltonians, as illustrated in Fig. <ref>. For the ground state of a strongly coupled Hubbard model, we find that ∑_n=1^𝒞 P_n∼ 0.5 and that the standard deviation and the complexity of the optimal distribution satisfy σ=C^β, where β≲ 2. Because most of the probability is located withing a few standard deviations, the full probability is covered by ∑_n=1^m𝒞^β P_n∼ 1, where m is a small integer. To summarize, the complexity of a state provides a lower bound estimate for the characteristic number of terms in the optimal Fock representation. §.§.§ Complexity and quantum information from measurements In information theory, the notion of entropy was introduced to quantify the compression of strings of data which follows a known distribution.<cit.> Analogously, the logarithmic complexity, which is an entropy quantity, characterizes the compression of the quantum information obtained by measurements. This can be made concrete by preparing n copies of state |ψ⟩ and performing repeated N_p-particle measurements in some Fock basis |{n_B_i}_k⟩, where {n_B_i}_k denotes an occupation number set in the single-orbital basis ℬ, and k∈{1,2,… Q} where Q is the Fock space dimension. The resulting quantum states, obtained as outcomes of the n measurements, constitutes the total information obtained from the measurements. This information can be stored as a composite state of the form |{n_B_i}_k_1⟩⊗ |{n_B_i}_k_2⟩⊗…⊗|{n_B_i, }_k_n⟩, which is an element of Q^n-dimensional Hilbert space. However, in general, composite states of the form (<ref>) do not fill this space densely. The complexity of |ψ⟩ provides a fundamental lower bound of how much of the Q^n-dimensional space such states cover. In the language of quantum information theory, these composite states, obtained with probability P_k_1^ℬP_k_2^ℬP_k_3^ℬ… P_k_n^ℬ, can be regarded as quantum messages constructed from individual letters, where each letter is a quantum state drawn from the ensemble { |{n_B_i}_k⟩,P_k^ℬ}. 
Now one can ask how much these quantum messages can be compressed, or what is the minimum dimension of space ℋ in which the messages can be accurately stored when n is large. The dimension of ℋ essentially determines the physical resources needed to store information extracted from |ψ⟩. This formulation turns the problem into an application of Schumacher's encoding theorem<cit.> in the special case where the letters form an orthogonal set. In this case, the quantum state of the messages is uniquely indexed by strings k_1k_2… k_n. When n is large, Shannon's noiseless coding theorem implies that these strings can be faithfully compressed to 2^n H(P_ k^ℬ) long strings, where H(P_ k^ℬ)= -∑_k P_ k^ℬlog_2 P_ k^ℬ is the Shannon entropy.<cit.> Thus, in this limit, almost all messages fit into a space of dimension log_2 (ℋ)=nH(P_ k^ℬ). The maximum compression is obtained in the Fock basis that minimizes H(P_ k^ℬ). Since the Shannon entropy is bounded from below by the second Renyi entropy H(P_ k^ℬ)≥ -log_2 ∑_k (P_ k^ℬ)^2, the minimum H(P_ k^ℬ) is bounded by the logarithmic complexity and ln (ℋ)≥ nS_P. Thus, the complexity provides a lower bound to the dimension of ℋ, where the states obtained by n measurements can be stored. Whenever the logarithmic complexity is smaller than its maximum value ln Q, the composite states obtained from |ψ⟩ by n measurements do not fill the whole Q^n dimensional space densely but only a subspace of it. This would allow compression of quantum information, with the maximum compression rate limited by the complexity. A dense filling would be obtained if |ψ⟩ was a generic state for which the leading order logarithmic complexity is maximal. §.§ Correlation entropy as an entanglement entropy The single-particle correlation matrix in state |Φ⟩ is conventionally defined as C_ii'=⟨Φ|ĉ^†_i'ĉ_i|Φ⟩, which can also be written in first quantized notation as C_ii'=N_p ∑_j,k,l,...Φ(i,j,k,l,...) Φ(i',j,k,l,...)^*, where Φ(i,j,k,l,...) is the antisymmetric wave function of the particles at coordinates i,j,k,l,... .<cit.> Thus the actual normalized reduced density matrix of a single particle, defined as the partial trace over the coordinates of the other particles, is ρ_1=C/N_p,<cit.> and the order n Renyi entanglement entropy is defined as S_n=1/1-nln(Tr(ρ_1^n)). If | Φ⟩ is a single Slater determinant, C has N_p times degenerate eigenvalue 1, the rest being zero. Therefore the entanglement entropies become S^Slater_n=-1/1-nlog(N_p^n-1). (Slater state) However, in the spirit of the complexity S_P which is trivial for Slater states, we subtract the free fermion contribution and define the single-particle correlation entropies as S_c,n=S_n-S^Slater_n=1/1-nln(Tr( C^n/N_p)). S_c,2 is the particle correlation entropy discussed in the main text. Thus, the correlation entropy is actually a one-particle entanglement entropy from which the free fermion contribution has been subtracted. §.§ Proof of the complexity lower bound Here we will give a proof of the complexity lower bound (<ref>) in three steps. Proposition 1: Let's consider a fermionic N_p particle state |Φ⟩. Furthermore, let's assume that λ_i is the set of correlation matrix eigenvalues (occupation probabilities of the natural orbitals) and n̅_B_i are the occupation probabilities of single-particle orbitals in an arbitrary basis ℬ. They always satisfy ∑_iλ_i^2≥∑_in̅_B_i^2. Proof: Let C be the correlation matrix. Then ∑_iλ_i^2=Tr C^2=∑_α,βC_αβC_βα=∑_α,β|C_αβ|^2≥∑_α|C_αα|^2≡∑_αn̅_B_α^2. 
Here we used the fact that in the double sum all entries are positive and that occupation probabilities in a general basis are defined as diagonal entries of the correlation matrix. Proposition 2: The average occupation numbers n̄_B_i and the state probabilities P_{n_B_i}_k always satisfy ∑_in̄_B_i^2/N_p≥∑_kP_{n_B_i}_k^2. The first sum is over all the single-particle orbitals whereas the second sum is over all k_max occupation number sets in Eq. (1). Proof: The average occupation numbers can be written in terms of the state probabilities as n̄_B_i=∑_kP_{n_B_i}_kn_B_i^k, where n_B_i^k=0,1 is the value of the occupation number of orbital B_i in the set {n_B_i}_k. From this we get ∑_in̄_B_i^2=∑_i∑_k,lP_{n_B_i}_kn_B_i^k P_{n_B_i}_ln_B_i^l≥∑_i∑_kP_{n_B_i}_k^2(n_B_i^k)^2= ∑_kP_{n_B_i}_k^2∑_i (n_B_i^k)^2=∑_kP_{n_B_i}_k^2∑_i n_B_i^k=∑_kP_{n_B_i}_k^2N_p. The inequality follows from dropping non-negative terms from the double sum. Comparing the starting and final form, we have proved Proposition 2. Universal lower bound for S_P: using Proposition 1 and the monotonicity of the logarithm, we deduce that S_c^p=-log∑_iλ_i^2/N_p≤ -log∑_in̄_B_i^2/N_p. Now, using Proposition 2 it follows that S_c^p≤ -log∑_in̄_B_i^2/N_p≤ -log∑_kP_{n_B_i}_k^2=S_P_ℬ. Since this holds for an arbitrary basis ℬ, we can minimize the right-hand side over all bases and it still holds. Thus, we have proved that S_c^p≤ S_P. The corresponding inequality for the hole correlation entropy S_c^h≤ S_P can be straightforwardly established by exchanging the roles of particles and holes and tracing the same steps. Thus, we arrive at S_c≤ S_P, where S_c is the larger one of S_c^p,S_c^h. §.§ Complexity of generic states Here we derive the complexity of generic states in a Fock space with N_o available orbitals and N_p particles, whose dimension is Q=N_o!/((N_o-N_p)!N_p!). Let's start with some normalized vector in the Fock space |ψ_0⟩=∑_k=1^Qa_k|k ⟩, where |k ⟩ is some basis and ∑_k |a_k|^2=1, and consider all the states that can be obtained from |ψ_0⟩ by unitary transformations: |ψ⟩=U|ψ_0⟩= ∑_k,j=1^QU_jka_k|j ⟩. These states fill the Fock space uniformly and are referred to as generic states. To calculate the complexity, we extract the probabilities P_j=|U_jka_k|^2=U_jkU_jl^*a_ka_l^* (repeated indices are summed) and their squares P_j^2=U_jkU_jl^*U_jmU_jn^*a_ka_l^*a_ma_n^*. To evaluate the average ⟨ P_j^2 ⟩ over the Haar measure, we can make use of the circular unitary ensemble result<cit.> ⟨ U_jkU_jmU_jl^*U_jn^* ⟩=1/Q^2(δ_klδ_mn+δ_knδ_ml) for Q≫ 1. Employing the above formula, we obtain ⟨ P_j^2 ⟩=2/Q^2, and ∑_j=1^Q⟨ P_j^2⟩=2/Q. For large Q≫ 1, the average logarithmic complexity becomes S_P=⟨ -ln∑_jP_j^2⟩=-ln∑_j⟨ P_j^2⟩=-ln (2/Q). The expectation value can be moved inside the logarithm because the argument becomes non-fluctuating in the large-Q limit. Also, the minimization over possible single-particle orbitals would not affect the result in large systems, since the number of optimization parameters scales linearly in the number of orbitals while the number of independent components of the state vectors grows exponentially. This behaviour is illustrated in Fig. 5, showing how the optimized complexity in small systems approaches the above analytical result. By employing Stirling's formula, the leading order complexity of generic states becomes S_P= -N_oln[ν^ν(1-ν)^(1-ν)], where ν=N_p/N_o. Since the generic states are uniformly distributed in the Fock space and cannot be compressed, their leading order complexity is the maximum allowed by the dimensionality of the Fock space.
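The CUE moment formula and the resulting value ⟨∑_j P_j^2⟩ ≈ 2/Q are straightforward to confirm by sampling. In the sketch below (the dimension Q and the number of samples are arbitrary choices) Haar-random complex unit vectors are drawn and the sampled purity of the probability vector is compared with 2/(Q+1), the exact value for a single Haar-random vector, and with the large-Q asymptotics 2/Q.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, samples = 252, 2000                  # hypothetical Fock dimension and sample count

purities = []
for _ in range(samples):
    z = rng.normal(size=Q) + 1j * rng.normal(size=Q)   # Haar-random unit vector in C^Q
    z /= np.linalg.norm(z)
    P = np.abs(z) ** 2                                  # probabilities P_j
    purities.append(np.sum(P ** 2))

print(np.mean(purities), 2 / (Q + 1), 2 / Q)            # sampled <sum_j P_j^2> vs 2/(Q+1) vs 2/Q
print("S_P ~", -np.log(np.mean(purities)), " vs  ln(Q/2) =", np.log(Q / 2))
```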
As illustrated in Fig. 5, the generic states also maximize the particle and hole correlation entropies S_c^p=-lnν, S_c^h=-ln (1-ν). Thus, the result (<ref>) can be expressed in the general form (<ref>) with α=α_g where α_g =1+(1-ν) ln (1-ν)/νln (ν), 0≤ν≤1/2 α_g =1+νln (ν)/(1-ν) ln (1-ν), 1/2<ν≤ 1 §.§ Computational details We perform exact diagonalization calculations using the QuSpin package <cit.>, which allows easy building of Hamiltonian matrices for the fermionic lattice models considered here. The package also allows selecting specific symmetry sectors of lattice models, fixing e.g. quantum numbers corresponding to center-of-mass momentum and parity under reflection p → -p. For the excited state calculations we perform full diagonalization of the selected symmetry block using standard dense hermitian methods, while for the ground state results we employ ARPACK-based sparse methods included in the QuSpin library and Scipy <cit.>. To study the entropy S_P_ℬ in different bases ℬ, we need to change the single-orbital basis for the full many-body eigenstates which are initially computed in the position basis. In the second quantized formalism, an orbital transformation for a system with N_o orbitals is specified by an N_o × N_o unitary matrix U acting on the annihilation operators as c⃗'⃗=Uc⃗,  c⃗=[c_1,c_2 ⋯, c_N_o]^T. The unitary matrix can be parametrized by a hermitian matrix A such that U=exp(i A), and this transformation can then be expressed as an operator Û in the many-body Fock space as Û=exp(i c⃗^† A c⃗), acting on operators as Û^†c⃗Û= U c⃗. That the orbital rotations can be expressed in such exponentiated form is referred to as the Thouless theorem <cit.> in the literature <cit.>. The operator Â=c⃗^† A c⃗ is represented as a sparse matrix that only couples basis states connected by a single hop, thus having N_p N_h Q non-zero elements, where N_p and N_h are the number of particles and holes, respectively, and Q is the number of Fock basis states. Applying the operator Û on a state in the Fock space can be carried out by sparse matrix methods, where the only large matrix operation is matrix-vector multiplication by Â. <cit.> For small systems, the non-zero matrix elements of  can be computed and stored in memory in a sparse matrix format. For large systems, it is advantageous to compute the matrix elements of  on the fly when performing the matrix-vector multiplication, as memory access becomes the bottleneck of the computation. For basis optimization we again parametrize the orbital basis in the form U=exp(iA) and perform a conjugate gradient minimization of the Renyi entropy S_P_ℬ with the elements of the hermitian matrix A treated as free parameters. For the single-component models, we use a random matrix U ∼CUE(N_o) as the starting point of the minimization, with N_o the number of orbitals in the model. For the two-component Hubbard model, we enforce component conservation meaning that A is block-diagonal and does not mix different spin components. § DATA AVAILABILITY The data supporting the findings of this work are available upon reasonable request. § CODE AVAILABILITY The codes implementing the calculations in this work are available upon reasonable request. § AUTHOR CONTRIBUTIONS T.O. proposed the idea, which the authors developed together. T. I. V. developed the numerical approaches and carried out all numerical calculations. The manuscript was prepared jointly by the authors. § ACKNOWLEDGEMENTS T.O. 
acknowledges the Academy of Finland (project 331094) and Jane and Aatos Erkko Foundation for support. Computing resources were provided by CSC – the Finnish IT Center for Science. § COMPETING INTERESTS The authors declare no competing interests.
http://arxiv.org/abs/2306.03475v1
20230606074829
Graph-to-local limit for the nonlocal interaction equation
[ "Antonio Esposito", "Georg Heinze", "André Schlichting" ]
math.AP
[ "math.AP", "cs.NA", "math.NA" ]
We study a class of nonlocal partial differential equations presenting a tensor-mobility, in space, obtained asymptotically from nonlocal dynamics on localising infinite graphs. Our strategy relies on the variational structure of both equations, being a Riemannian and Finslerian gradient flow, respectively. More precisely, we prove that weak solutions of the nonlocal interaction equation on graphs converge to weak solutions of the aforementioned class of nonlocal interaction equation with a tensor-mobility in the Euclidean space. This highlights an interesting property of the graph, being a potential space-discretisation for the equation under study. [2020]35R02, 35A01, 35A15, 35R06, 05C21 Interplay between multi-spin and chiral spin interactions on triangular lattice Jian-Xin Li July 31, 2023 =============================================================================== § INTRODUCTION In this manuscript we study the connection between nonlocal dynamics on infinite graphs and the corresponding local counterparts in the underlying Euclidean space. This problem is a natural consequence of the recent interest in evolution equations on graphs, motivated by applications to data science, among others. Graphs are indeed a suitable mathematical structure to classify and represent data, as done in <cit.> and the references therein. Furthermore, it is worth to mention recent advances in the use of graphs in the context of social dynamics or opinion formation, <cit.>, kinetic networks, <cit.>, and synchronization, <cit.>. In this work we consider the nonlocal interaction equation on graphs, introduced in <cit.>, which is relevant to detecting local concentrations in networks. We will refer to it as nonlocal-nonlocal in view of the nonlocal nature of the graph. More precisely, we think of equations describing mass displacement among vertices — points in — connected according to a given weight function, η in the following. One of the essential differences with the Euclidean space is that on the graph the mass is moving rather than the point. More precisely, in we usually describe movements of particles with an associated mass, whilst on the graph the particle, or vertex, is fixed and the mass is transported. In order to cope with this structural property, one needs to consider suitable interpolating functions so that to be able to describe the flux, hence the dynamics. We refer to <cit.> for more details, as well as related works <cit.>. Another important aspect is to deal with a large number of entities, for instance individuals or data; hence it is crucial to consider discrete and continuum models. The setup introduced in <cit.> allows to consider both descriptions in a unified framework, as follows. The class of partial differential equations (PDEs) we consider can be specified through three elements: a nonlocal continuity equation, an upwind flux interpolation, and a constitutive relation for a nonlocal velocity. The nonlocal continuity equation is concerned with the time-evolution of a probability measure ρ_t ∈(^d), for t∈ [0,T], where mass located at a vertex x∈^d can be nonlocally transported to y∈^d along a channel with capacity, referred to as weight, given by an edge weight function η: ^d ×^d∖{x≠ y}→ [0,∞). The nonlocal continuity equation on a time interval [0,T] is of the form ∂_t ρ_t + j_t = 0, with j_t( x) = ∫_∖*xη(x,y) j_t(x,y) , where the flux is a time-dependent antisymmetric measure, j_t ∈(G), on the set G=* (x,y)∈: η(x,y)>0, being (x,y)∈×:x y. 
We will use the shorthand notation (ρ, j) =((ρ_t)_t,(j_t)_t)∈_T for any solution of (<ref>), cf. Defintion <ref>. The relation constituting the flux depends on a σ-finite absolutely continuous measure μ∈^+(^d), as in <cit.>, wherein μ acts as an abstract notion of vertices of a graph. More precisely, we associate to a nonlocal time-dependent velocity field v_t : G→ the induced flux by using an upwind interpolation as follows j_t(x,y) = v_t(x,y)_+(ρ⊗μ)(x,y)-v_t(x,y)_-(μ⊗ρ)(x,y). Here, for a∈, we denote with a_+ = max*a,0 and a_- = max*-a,0 the positive and negative part, respectively. Intuitively, the support of μ defines the underlying set of vertices, i.e. V = μ. In particular, any finite graph can be represented by choosing μ=μ^N=∑_i=1^Nδ_x_i/N, for x_1, x_2,…,x_N∈. The last element of the model is the identification of the velocity field in terms of a symmetric interaction potential K:^d×^d → and a potential P:^d→ by v_t(x,y) = - K∗ρ_t(x,y) - P(x,y) , where the nonlocal gradient is defined by f(x,y) f(y)-f(x). System (<ref>) was introduced in <cit.> as a Finslerian gradient flow of the interaction energy (ρ)=1/2∬_^2d K(x,y)ρ(y)ρ(x)+∫_ P(x)ρ(x). Note that the velocity field is given as the nonlocal gradient of the first variation of the energy, that is v_t = -'(ρ_t), where '(ρ)= K ∗ρ + P denotes the variational derivative of and (K∗ρ)(x) = ∫_ K(x,y)ρ(y), for any x∈. An intriguing problem is to understand the limiting behaviour of weak solutions to (<ref>) as the graph structure localises, i.e. the range of connection between vertices decreases, while the weight of each connecting edge increases. Following a formal argument presented in <cit.>, one expects to approximate weak solutions of the more standard nonlocal interaction equation on . However, as we shall see, the intrinsic geometry of the graph impacts the limiting gradient structure of the equation. Accordingly, the main goal of this work is to provide a rigorous proof of the local limit of the system (<ref>) along a sequence of edge weight functions η^:→[0,∞) defined by η^(x,y) 1/^d+2ϑ⟨*|x+y/2,x-y/, in terms of a reference connectivity ϑ:^d∖0→ [0,∞) satisfying the Assumptions (<ref>) – (<ref>) below. The scaling in (<ref>) leads to the local evolution ∂_tρ_t=div(ρ_t (∇ K*ρ_t+∇ P)), 𝖭𝖫𝖨𝖤_ where the tensor :^d →^d× d depends on the nonlocal structure encoded through the reference measure μ and the connectivity ϑ. (<ref>) can be similarly decomposed into three components. First, the local continuity equation on ^d given as ∂_t ρ_t + div_t = 0 , where now _t∈(^d;^d) is a vector-valued flux. Second, a kinetic relation, for the flux in terms of a vector field v̂_t: ^d →^d encoding the tensor structure of (<ref>) as _t( x) = ρ_t( x) (x) v̂_t(x) = ρ_t( x) ∑_i,k=1^d _ik(x) v̂_t,k(x)e_i . Third, a constitutive relation for the velocity linking to the interaction energy (<ref>) given by v̂_t = - ∇ K ∗ρ_t - ∇ P = -∇'(ρ_t) . We provide a rigorous proof for the convergence of system (<ref>) with η^ given through (<ref>) to the system (<ref>), in case of C^1 interaction kernels. This result is somewhat sharp since for attractive pointy potentials one does not expect convergence of weak solutions, as pointed out in <cit.>. First, we give an heuristic argument and match each of the three elements of the systems (<ref>) and (<ref>), separately. The nonlocal continuity equation (<ref>) can be represented by its local counterpart (<ref>) thanks to a continuous reconstruction for the nonlocal flux. 
More precisely, we denote by j^ the nonlocal divergence as defined in (<ref>) with η replaced by η^ given in (<ref>); see Definition <ref>. Next, in Proposition <ref>, for any j^∈() and any >0 we construct a local vector-valued flux ^∈(^d;^d) such that j^ = ÷^. Inspired by <cit.>, we use a superposition with needle measures defined as one-dimensional Hausdorff measure ^1_ x,y restricted to the line-segment x,y⊂ from x to y. The tentative definition of ^ is, for any Borel set A, is then ^[A] :≈1/2∬_G^ y-x/y-x^1⟨*|A∩ x,yη^(x,y) j^(x,y) . However, checking that the measure-valued local divergence of the above measure agrees with the nonlocal divergence of j^ asks for special uniform integrability properties of j^, which are not necessarily verified in our setting. For this reason, we proceed by exploiting a finite approximation of (<ref>) by replacing j^ with an empirical approximation j^_N. The actual measure is obtained as a suitable limit of the sequence ^_N defined as in (<ref>) with j^ replaced by j^_N. Since the argument is based on compactness, the uniqueness of such a limit is not achieved nor can we ensure that the limit has indeed the representation (<ref>). For the present application, the mere existence is sufficient. We point out that the “reverse” question on the decomposition of measure-valued divergence vector fields into one-dimensional needles is studied in <cit.> and commented on in <cit.>. Having established the continuous reconstruction, the constitutive relation for the velocity (<ref>) can be localized under suitable regularity assumptions on the kernel K and potential P. Indeed, for x-y=(), neglecting higher order terms in , we obtain the identity v^_t(x,y) ≈v̂^_t(x) · (y-x) with v̂^_t(x) =- (∇ K ∗ρ^_t)(x) - ∇ P(x) . The crucial mathematical step consists in the combination of the above two equations through the upwind flux interpolation (<ref>). Formally, if the measures μ and ρ_t have a smooth density, we can do another expansion to arrive for x∈^d at the approximate identity ^_t(x) ≈ρ^_t(x) ^(x) v̂^_t(x) with ^(x) = 1/2∫_(x-y)⊗(x-y) η^(x,y)μ(y). In particular, in this step it is shown that the non-symmetric upwind-based structure can be replaced by symmetric approximate tensors ^ with vanishing error as → 0. This symmetrization is generalized to arbitrary ρ∈() by introducing a mollifier on an intermediate scale ^α for α>0 sufficiently small. The final step consists in identifying the limit of the tensor ^, where the specific scaling hypothesis (<ref>) provides, after a change of variable, the approximate identity ^(x) ≈(x) with (x)= 1/2μ/^d(x)∫_ w⊗ w ϑ(x,w) w . The limit tensor gives, together with the reconstruction (<ref>), flux identification (<ref>) and velocity approximation (<ref>), the limit system (<ref>). The heuristic argument above is based on several approximations and smoothness assumptions which are not a priori satisfied by solutions to the systems (<ref>) and (<ref>). We make it rigorous by using a variational framework, allowing to handle measure-valued solutions. An interesting byproduct of our result is the link between Finslerian and Riemannian gradient flows, which is the first result in this direction to the best of our knowledge. More precisely, (<ref>) is shown to be a gradient flow of the nonlocal interaction energy in the infinite-dimensional Finsler manifold of probability measures endowed with a nonlocal upwind transportation quasi-metric, 𝒯, peculiar of the upwind interpolation (<ref>). 
Due to the loss of symmetry the underlying structure of () does not have the formal Riemannian structure, but Finslerian instead. We refer the reader to <cit.> and <cit.> for further details. On the other hand, following <cit.>, we establish a chain-rule inequality for the nonlocal interaction energy in a 2-Wasserstein space defined over ^d_, which is  endowed with a metric induced by ^-1. Upon considering the corresponding Wasserstein scalar product on the tangent space of _2(^d_), at some probability measure with bounded second moment, one can notice the underlying Riemannian structure, following <cit.>, thereby making the connection between the weak and variational formulations of (<ref>). We stress that not only do we connect the graph and tensorized local gradient structures using the notion of curves of maximal slope for gradient flows after De Giorgi <cit.>, but, upon identifying weak solutions of (<ref>) with curves of maximal slopes, we also obtain an existence result for (<ref>) via stability of gradient flows. This is indeed another interesting property of the graph, as it represents a valuable space-discretisation for the PDE under study, working in any dimension, in addition to other methods, e.g. particle approximations and tessellations. Indeed, our result can be also seen as a deterministic approximation of (<ref>). As an outlook, we point out that a quantitative convergence statement might be obtained upon establishing a suitable nonlocal replacement for weak BV estimates, as in the numerical literature <cit.>. This would allow to adapt techniques previously developed for determining the rate of convergence for upwind schemes with rough coefficients, for which a measure-valued solution framework is used <cit.>. Alternatively, one could directly use the limiting metric structure along the lines of <cit.>. Focusing on the quasi-metric space introduced in <cit.>, natural problems to address are the asymptotic behaviour of the quasi-metric and the Gromov-Hausdorff convergence of the according quasi-metric spaces. First steps in the metric setting are done in <cit.> and <cit.>, respectively. Finally, generalizing the present approach to finite-graph approximation for diffusion equations seems possible by using a gradient flow formulation similar to the Scharfetter-Gummel finite volume scheme recently introduced in <cit.>. §.§ Relation to the literature This manuscript is a natural question raised in <cit.>, where nonlocal dynamics on graphs are considered for the aforementioned upwind interpolation. In <cit.>, the authors deal with a class of continuity equations on graphs with general interpolations, and provide a well-posedness theory exploiting a fixed-point argument. Depending on the interpolation chosen, one can notice structural differences with the more standard Euclidean space. In <cit.>, the analysis of <cit.> is extended to two species with cross-interactions as well as nonlinear mobilities and α-homogeneous flux-velocity relations for α > 0, by generalizing the underlying Finslerian structure and notion of gradient. In <cit.> the generalized system is explored both in analytical case studies and numerical simulations on finite graphs of varying shapes and connectivity, leading to the observation of various patterns, including mixing of the two species, partial engulfment, or phase separation. Regarding the limiting equation we mention <cit.>, where well-posedness for (<ref>) with = is shown for pointy locally λ-convex potentials. 
To this end the authors generalize the theory of <cit.> and employ a minimizing movement scheme to show the existence of solutions. Furthermore, in <cit.>, the author studies a class of equation similar to (<ref>) in view of the presence of a tensor (not necessarily of the form (<ref>)). However, velocity fields considered are of the form v=-∇ (F'(u)+V), under suitable assumptions on F and V. Hence, differently from our case, in <cit.> the author mainly focuses on linear and nonlinear diffusion. Our variational approach is similar to the one used in <cit.> to study the limiting behaviour of random walks on tessellations in the diffusive limit. Starting from the forward Kolmogorov equation on a general family of finite tessellations, the authors show that solutions of the forward Kolmogorov equation converge to a non-degenerate diffusion process solving an equation of the form (<ref>), with diffusion instead of the nonlocal interaction velocity field, similar to <cit.>. Another difference is that the method used relies on a generalized gradient structure of the forward Kolmogorov equation with respect to the relative entropy — the so called cosh gradient structure to be precise. This result on tessellations is related to <cit.>, where a similar problem is considered, though the gradient structure is the so called quadratic gradient structure. The latter manuscripts concern the convergence of discrete optimal transport distances to their continuous counterparts, see also <cit.>. In this context, the work <cit.> studies Gromov–Hausdorff limit of Wasserstein spaces on point clouds, obtained by constructing a geometric graph similar to our setting (restricted on the torus). The previous manuscript is, indeed, an invitation to the study of stability of evolution PDEs on graphs, though the analysis focuses on the general problem of approximating the underlying metric space rather than the gradient dynamics — differently from our result. Moreover, we also point out that the upwind interpolation does not fall in the class of interpolating function considered in the aforementioned works. Related to this point, we also mention <cit.>, where a direct gradient flow formulation of jump processes is established — the authors consider driving energy functional containing entropies. The kinetic relations used there are symmetric, hence excluding for instance the upwind interpolation, which is the one we use on graphs. It is worth to mention the work <cit.> dealing with dynamics on graphs for data clustering by connecting the mean shift algorithm with spectral clustering at discrete and continuum levels via suitable Fokker–Planck equations on graphs. See also <cit.> for clustering algorithms based on thresholding schemes on graphs, giving also rise to a nonlocal dynamic. Concerning graph structure, we point out the recent work <cit.> on a novel interpretation of the aggregation equation (arising in applications to granular media) as the gradient flow of the kinetic energy, rather than the interaction energy. This is possible upon introducing a suitable nonlocal collision metric. The underlying state space resembles a graph, indeed, and the PDE under study takes the form of a local-nonlocal continuity equation. The scaling of the function η in (<ref>) is chosen such that two derivatives emerge in the local limit. This can be seen as a second order counterpart to the nonlocal first-order calculus developed in <cit.>. 
With a first order scaling many classical results from calculus can be recovered, like for instance the recent divergence theorem in <cit.>. Second order nonlocal calculus with emphasis on questions of regularity and connections to jump processes got also attention with many contributions by Kassmann <cit.> and references therein. We also refer to the book <cit.> and the references therein for further applications of nonlocal vector calculus with connections to numerical approximations, and various applications, such as nonlocal dynamics of anomalous diffusion and nonlocal peridynamic models of elasticity and fracture mechanics (perydynamic). In the context of numerical schemes for local conservation laws, we mention <cit.>, where the authors proposed a class of monotonicity-preserving nonlocal nonlinear conservation laws, in one space dimension. Under suitable assumptions on the kernel, one could interpret the latter class of PDEs as equations on graphs. It is, in our opinion, interesting to explore possible applications of the current manuscript to other nonlocal conservation laws. We observe that the graph structure can be also seen as suitable space-discretization, resembling those obtained from tessellations for finite volume schemes. In this regard, there is a natural connection to numerical schemes for gradient flows in the Wasserstein space as also studied in, e.g., <cit.>. §.§ Structure of the manuscript First we present the notation we use. In Section <ref> we explain the setup and recall known results on both the nonlocal-nonlocal interaction equation on graphs and the limiting PDE. We state our main results and give some meaningful examples. The connection between nonlocal and local structure is provided in Section <ref>, where we construct the local flux from the nonlocal ones and identify the tensor-mobility. In Section <ref> we prove the main results of our manuscript: the graph-to-local limit in terms of curves of maximal slopes and existence of weak solutions for (<ref>). We integrate the manuscript with an appendix including some additional results, among them the extension of <cit.> to σ-finite base measures and a formulation of the main result in terms of EDP-convergence. §.§ Notation We list here notation used throughout the manuscript. * a_+max{0,a} and a_-(-a)_+ denote the positive and negative parts of a ∈, respectively. * Given a set A we set _A(x)=1 for x∈ A and _A(x)=0 for x∉ A. * Moduli of continuity are always denoted by ω. * In a metric space (X,d) for R>0 we set B_R(x)y∈ X:d(x,y)≤ R. * In a normed space (X,·) for R>0 we set B_R B_R(0). * (^d) is the σ-algebra of Borel sets of . * C_b(A) is the set of bounded continuous functions from A to . * (A) is the set of Radon measures on A ⊆^d. * (A) is the set of non-negative Radon measures on A. * 𝒫(A)⊂(A) is the set of Borel probability measures on A. * _2(A)⊆(A) is the set of ρ∈(A) with finite second moment, that is, m_2(ρ) ∫_A |x|^2 ρ(x) < ∞. * (x,y)∈^d×^d : x y is the off-diagonal of ×. * μ∈(^d) denotes the base measure setting the underlying geometry. * ϑ:∖*0→[0,∞) is the edge connectivity map, cf. Section <ref>. * η^1/^d+2ϑ⟨*|x+y/2,x-y/ is the -dependent edge weight function. * G^ { (x,y) ∈: η^(x,y)>0} is the set of edges. * G_ x^ y∈∖*x:η^(x,y)>0 is the set of points connect to x. * ρ∈(^d) denotes a mass configuration. * ρ (ρ_t)_t∈[0,T]⊂() denotes a family of mass configurations. * j∈() denotes a measure-valued nonlocal flux. * j (j_t)_t∈[0,T]⊂() denotes a family of nonlocal fluxes. 
* ∈(;) denotes a measure-valued local flux, cf. Proposition <ref>. * (_t)_t∈[0,T]⊂(;) denotes a family of local fluxes * v:→ denotes a nonlocal velocity field. * v (v_t)_t∈[0,T] denotes a family of velocity fields. * (μ,η;ρ,j) stands for the μ and η dependent action density of ρ and j. * (μ,η;ρ, j)∫_0^T(μ,η;ρ_t,j_t) t denotes the action of ρ and j. * (μ,η;ρ,v) stands for the μ and η dependent action density of ρ and v. * (μ,η;ρ, v)∫_0^T(μ,η;ρ_t,v_t) t denotes the action of ρ and v. * T^_ρ_2()⊂(G^ ) denotes the space of nonlocal tangent fluxes at the configuration ρ∈_2(), cf. (<ref>). * T^_ρ_2() denotes the space of nonlocal tangent velocities v:G^ → at the configuration ρ∈_2(), cf. (<ref>). * l^_ρ(v)[w] stands for the first variation of (μ,η^;ρ,·) at v∈ T^_ρ_2() in the direction of w∈ T^_ρ_2() for ρ∈_2(), cf. (<ref>). * ^(x) 1/2∫_(x-y)⊗(x-y) η^(x,y)μ(y) is the approximate tensor. * (x)1/2μ/^d(x)∫_ w⊗ w ϑ(x,w) w denotes the limit tensor. * _μ,η denotes the nonlocal extended quasi-metric, cf. Definition <ref>. * ^2([0,T];(X,d) denotes the set of 2-absolutely continuous curves, which map from the time interval [0,T] into the (quasi-)metric space (X,d). * ρ_t'_μ,η denotes for the curve ρ∈^2([0,T];(𝒫_2(),𝒯_μ,η)) the (forward) metric derivative at time t∈[0,T]. * f(x,y)=f(y)-f(x) is the nonlocal gradient of a function f : ^d → * · j is the nonlocal divergence of a flux j∈(), cf. Definition <ref>. * _T denotes the set of solutions to the nonlocal continuity equation on the time intervall [0,T]; _1, cf. Definition <ref>. * (ϱ_0,ϱ_1) is the subset of (ρ, j)∈ which satisfy ρ_0=ϱ_0, ρ_1=ϱ_1. * _T denotes the set of solutions to the local continuity equation on the time intervall [0,T]; _1, cf. Definition <ref>. * (ϱ_0,ϱ_1) is the subset of (ρ,)∈ which satisfy ρ_0=ϱ_0, ρ_1=ϱ_1. * (ρ)1/2∬_^2d K(ρ⊗ρ) denotes the interaction energy of ρ∈(). * δ/δρ denotes the variational derivative of . * _ is the metric slope of with respect to _μ,η^, cf. Definition <ref>. * _ denotes the graph De Giorgi functional, cf. Definition <ref>. * W_ denotes the 2-Wasserstein metric on (^d_). * _ is the metric slope of with respect to W_, cf. Definition <ref>. * _ denotes the local De Giorgi functional, cf. Definition <ref>. Let us also specify the notions of narrow convergence and convolution. A sequence (ρ^n)_n⊂(A) is said to converge narrowly to ρ∈(A), in which case we write ρ^n ⇀ρ, provided that ∀ f∈ C_b(A): ∫_A f ρ^n→∫_A f ρ as n →∞. Given a function f A × A → and ρ∈(A), we write f*ρ the convolution of f and ρ, that is, f*ρ(x)∫_A f(x,y)ρ(y), for any x ∈ A such that the right-hand side exists. § PRELIMINARIES ON GRAPHS, GRADIENT STRUCTURES, AND MAIN RESULTS For the sake of clarity we divide this section in subsections. We specify the graph structure, recall results on the nonlocal-nonlocal interaction equation as well as the limiting PDE, including possible examples covered by our theory. At the end of the section we present our main results. §.§ Graph The graph is identified through a pair (μ,η), being μ a base measure standing for a (subset) of vertices and η an edge weight function. We consider a non-negative σ-finite base measure μ∈() such that μ = μ^d. We assume that the density μ is bounded and uniformly continuous, denoting by ω_μ∈ C([0,∞);[0,∞)) its modulus of continuity, which satisfies ω_μ(δ)→ 0 as δ→ 0. More precisely, this means _1 ∀ x,y∈ it holds μ(x)-μ(y)≤ω_μ(x-y), _2 ∃ c_μ,C_μ>0 such that ∀ x∈ it holds c_μ≤μ(x) ≤ C_μ. 
We fix an edge connectivity map ϑ:×(∖*0)→[0,∞) and another modulus of continuity ω_ϑ∈ C([0,∞);[0,∞)) satisfying ω_ϑ(δ)→0 as δ→ 0, and we make the following assumptions: _1∀ z∈ the map w↦ϑ(z,w) is symmetric and continuous on ϑ>0; _2∀ z,z̅∈, w∈∖*0 it holds ϑ(z,w)-ϑ(z̅,w)≤ω_ϑ(z-z̅); _3∃ >0 such that ∀ z∈ it holds ϑ(z,·) ⊂ B_; _4∃ >0 such that sup_(z,w)∈×()w^2ϑ(z,w) ≤; _5∃ >0 such that ∀ z,ξ∈ it holds∫_w·ξ^2ϑ(z,w) w ≥ξ^2. We define the set (x,y)∈×:x y and the family of edge weight function (η^)_>0, η^:→[0,∞) by η^(x,y) 1/^d+2ϑ⟨*|x+y/2,x-y/. Furthermore, for every >0, we introduce the sets G^ (x,y)∈:η^(x,y)>0, and, for a given x∈ G_ x^ y∈∖*x:η^(x,y)>0. In order to lighten notations, we will drop the index >0 for η^ and G^ when this is not relevant for the analysis. The next lemma collects some basic estimates following from the above assumptions, which are used throughout the article. Let μ,ϑ satsify (<ref>), (<ref>), and (<ref>) – (<ref>), respectively. For any >0 let η^ as in (<ref>). Then, for any >0 it holds ∀ (x,y)∈ G^ it holds x-y≤, sup_x∈G_ x^ ≤^d, sup_(x,y)∈x-y^2η^(x,y) ≤/^d, sup_>0sup_x∈∫_x-y^2η^(x,y)μ(y) ≤, where >0 depends only on and the dimension d and one can set =. The assumption on the support of ϑ, (<ref>), and the definition of η^ immediately yield (<ref>). This also implies that G_ x^ is contained in a ball of radius around x, whence (<ref>). Furthermore, the latter observation and (<ref>) give the bound (<ref>). Finally, (<ref>) is obtained by combining (<ref>), (<ref>) and the upper bound from (<ref>). As μ generalises the set of vertices in the nonlocal model, at first glance μ≪^d seems to be rather restrictive, since it excludes finite graphs. However, models with μ≪^d are known to be approximated by finite graphs <cit.>, the only restriction being that μ is a finite measure. In Theorem <ref> we replace this assumption by assumptions satisfied in the present paper; thereby the results in <cit.>, in conjunction with the present work, can be applied to obtain a finite graph approximation for the nonlocal interaction equation. Indeed, we note that the assumptions (<ref>) – (<ref>) and (<ref>), (<ref>) imply that for any >0 the pair (μ,η^) satisfies all the assumptions from <cit.>. More precisely, for x,y∈ G^ we have the moment bounds sup_(x,y)∈|x-y|^2|x-y|^4η^(x,y) ≤1∨()^2/^d, sup_x∈∫_|x-y|^2|x-y|^4η^(x,y)μ(y) ≤⟨*|1∨()^2, while the local blow-up control condition lim_δ→ 0sup_x∈∫_B_δ(x)∖{x}x-y^2η^(x,y)μ(y) = 0, is implied by (<ref>), (<ref>) and (<ref>). For the sake of completeness, we mention that one could fix μ=^d by redefining ϑ^μ(z,w) = μ⟨*|zϑ(z,w) without changing the limiting equation, since μ is assumed to be absolutely continuous, with uniformly bounded and uniformly continuous density. However, the base measure μ makes a difference for the dynamic on the graph, thus the graph-to-local limit is more universal keeping μ. The definition of η^ can be relaxed to η^(x,y) 1/^d+2ϑ⟨*|x+y/2,x-y/+f()g⟨*|x+y/2,x-y/, for some g:×(∖*0)→, which is uniformly continuous in its first argument as well as symmetric and continuous in its second argument, and some f:[0,∞)→, which satisfies f∈ o(^-d-2) as → 0. Indeed, with this scaling the perturbation vanishes in the limit → 0, leaving the limiting tensor unchanged. §.§ Examples We provide relevant examples of edge connectivity maps satisfying conditions (<ref>) – (<ref>). The requirements (<ref>) – (<ref>) are not difficult to check, thus we provide a simple instance where condition (<ref>) holds. 
More precisely, the latter is satisfied by an edge connectivity ϑ such that there exist 0≤ r<R<∞ and C>0 for which ϑ(z,·)|_B_R∖B_r≥ C for any z∈. Indeed, first note that for ξ = 0 there is nothing to show. Taking w,ξ∈∖*0, let us denote by φ the angle between w and ξ and recall that w·ξ = wξcosφ. Choosing any 0 <φ_0 <s/2, for any 0≤φ≤φ_0 we have cos(φ)≥cos(φ_0)>0. Therefore, denoting by V_φ_0 the volume of B_1∩φ≤φ_0, which is the (d-dimensional) spherical sector of B_1 corresponding to φ_0, we have ∫_w·ξ^2ϑ(z,w) w ≥ C V_φ_0⟨*|R^d+2-r^d+2cos(φ_0)^2ξ^2, and hence (<ref>). A simple but important example for a pair (μ,ϑ) satisfying the above assumptions is ϑ(z,w) = C_d_B_1(w) with dimension dependent constant C_d>0 and μ=^d. Indeed, (<ref>), (<ref>) and (<ref>) – (<ref>) are easily checked, while (<ref>) follows from the above considerations. This choice of (approximating) graph is peculiar as  (<ref>) converges to the standard nonlocal interaction equation on , being =, cf. (<ref>). As a generalization, we consider a symmetric tensor field 𝔻∈ C( ; ^d × d) uniformly elliptic and bounded, i.e. 0<D_* ≤𝔻≤ D^* <∞ in the sense of quadratic forms, a variable radius R∈ C(;(R_*,R^*)) for some 0<R_*<R^* <∞, and a normalization function d∈ C(; (d_*, d^*)) for 0<d_* < d^* < ∞. We define the connectivity function ϑ(z,w)= d(z), w,𝔻(z)^-1 w≤ R(z) ; 0, w,𝔻(z) w> R(z) . By construction, ϑ satisfies (<ref>) – (<ref>). The formula (<ref>) gives, after a change of variable w= 𝔻(z)^1/2 y, where 𝔻(z)^1/2 denotes the unique symmetric square root of 𝔻: (z) = 1/2∫_ w⊗ w ϑ(z,x) w = 1/2d(z) ∫_ w⊗ w _w,𝔻(z)^-1 w≤ R(z) w = 1/2 d(z)⟨*|𝔻(z)^1/2∫_B_R(z)(0)⟨*|𝔻(z)^1/2 y⊗⟨*|𝔻(z)^1/2 y y = 1/2 d(z)⟨*|𝔻(z)^1/2 C_d R(z)^d+2𝔻(z) , where we used the identity ∫_B_R(z)(0)⟨*|𝔻(z)^1/2 y⊗⟨*|𝔻(z)^1/2 y y = 𝔻(z)^1/2⟨[|]∫_B_R(z)(0) y ⊗ y y𝔻(z)^1/2 = 𝔻(z)^1/2⟨*|C_d R(z)^d+2𝔻(z)^1/2 = C_d R(z)^d+2𝔻(z) with C_d = ∫_B_1(0) y_1^2 y = π^d/2 /(2Γ(d/2+2)). In particular, by choosing the normalization d(z) = 2/C_d R(z)^d+2⟨*|𝔻(z)^1/2, and μ=^d, we obtain the identity = 𝔻. §.§ Nonlocal interaction energy The nonlocal interaction energy considered in what follows is defined by (ρ) 1/2∬_^2d K(x,y)ρ(x)ρ(y). The interaction kernel K×→ is assumed to satisfy the following assumptions: K∈ C^1(×); K(x,y)=K(y,x) for (x,y)∈×; ∃ L_K>0 such that for all (x,y),(x',y')∈× it holds |K(x,y)-K(x',y')|≤ L_K(|(x,y)-(x',y')|∨|(x,y)-(x',y')|^2); ∃ C_K>0 such that for all (x,y)∈× it holds |∇ K(x,y)|≤ C_K(1+|x|+|y|). We observe the assumptions on the interaction kernel are somehow sharp as one does not expect the result holds true for pointy potentials, cf. <cit.>. As already noticed in <cit.>, assumption (<ref>) implies that, for some C >0 and all x,y∈, K(x,y)≤ C ⟨*|1+ x^2 + y^2; indeed, for fixed (x',y')∈×, (<ref>) yields K(x,y) - K(x',y')≤ L_K ⟨*| 1 ∨ 2⟨*| |(x,y)|^2 + |(x',y')|^2, and bounding the maximum on the right-hand side (∨) by the sum, we arrive at K(x,y)≤ L_K +2 L_K ⟨*||(x',y')|^2 + |(x,y)|^2 + K(x',y'), which gives (<ref>) with C=2L_K(1+|(x',y')|^2) + K(x',y'). We also notice, that the bound (<ref>) implies that _2()→ is proper, since its domain contains  _2(). 
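A concrete admissible interaction kernel, again purely illustrative since none of the arguments below depend on this choice, is the smooth attractive-repulsive Gaussian kernel
\[
K(x,y) \;=\; C_r \exp\!\big(-|x-y|^2/\ell_r^2\big) \;-\; C_a \exp\!\big(-|x-y|^2/\ell_a^2\big),
\qquad C_a,\,C_r,\,\ell_a,\,\ell_r>0 .
\]
Symmetry and \(C^1\)-regularity are immediate; all first-order derivatives of \(K\) are globally bounded, so \(K\) is globally Lipschitz on \(\mathbb{R}^d\times\mathbb{R}^d\), (<ref>) holds with the first alternative inside the maximum, and (<ref>) holds with a constant independent of \((x,y)\). By contrast, pointy potentials such as \(K(x,y)=|x-y|\) fail the \(C^1\)-requirement on the diagonal and are excluded, in line with the sharpness discussed above.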
The analysis in this manuscript easily extends to free energies of the form (<ref>) including potential energies _P(ρ)∫_ P ρ, for some external potential P^d → satisfying a local Lipschitz condition with at-most-quadratic growth at infinity and linear growth for ∇ P: similarly to (<ref>) – (<ref>), there exist L, C∈ (0,∞) so that for all x,y∈^d we have P(x)-P(y) ≤ L ⟨*| |x-y|∨ |x-y|^2, |∇ P(x)| ≤ C(1+|x|). For ease of presentation we shall not include the potential energy in our proofs, as no additional technical difficulties arise. §.§ Nonlocal-nonlocal interaction equation In this subsection we recall results on the nonlocal interaction equation on graphs from <cit.> we shall use in the following. For simplicity, let ρ≪μ, where we use the notation ρ to denote both the measure and the density with respect to μ. The equation reads, for μ-a.e. x, ∂_tρ_t(x)+∫_ (K*ρ)(x,y)_- η(x,y) ρ_t(x) μ(y) - ∫_(K*ρ)(x,y)_+ η(x,y) ρ_t(y)=0. 𝖭𝖫^2𝖨𝖤 The theory in <cit.> also applies to the case when ρ is not absolutely continuous with respect to μ. The general weak form of (<ref>) is obtained in terms of the nonlocal continuity equation we specify later. In order to have a graph-analogue of Wasserstein gradient flows for interaction energies we defined a suitable quasi-metric space, where the quasi-distance is obtained in a dynamical formulation à la Benamou–Brenier, <cit.>. For this reason, it is crucial to identify paths connecting probability measures, a nonlocal continuity equation, and an action functional to be minimized, resembling the total kinetic energy. Let μ∈^+() and η:→[0,∞) as before. For ρ∈() and j∈(G), consider λ∈(×) such that ρ⊗μ,μ⊗ρ,|j|≪|λ|. We define (μ,η;ρ,j)1/2∬_G (α(j/|λ|,(ρ⊗μ)/|λ|)+α(-j/|λ|,(μ⊗ρ)/|λ|))η|λ| . Hereby, the lower semicontinuous, convex, and positively one-homogeneous function α×_+→_+∪{∞} is defined, for all j∈ and r≥ 0, by α(j,r)(j_+)^2/r if r>0, 0 if j≤ 0 and r=0, ∞ if j> 0 and r=0, with j_+=max{0,j}. If μ and η are clear from the context, we write (ρ,j) for (μ,η;ρ,j). Given a pair of curves (ρ, j) ((ρ_t)_t∈[0,T],(j_t)_t∈[0,T]) with ρ_t∈() and j_t∈(G), we define (μ,η;ρ, j)∫_0^T(μ,η;ρ_t,j_t) t. The concept of graph gradient and graph divergence is as follows. For any function ϕ→ we define its nonlocal gradient ϕ G → by ϕ(x,y)=ϕ(y)-ϕ(x) . For any j∈(G), its nonlocal divergence · j ∈(^d) is defined as the negative η-weighted adjoint of , i.e., for any φ∈ C_0(), ∫φ· j = - 1/2∬_Gφ(x,y) η(x,y)j(x,y) = 1/2∫φ(x) ∫η(x,y) ⟨*|j(x,y) - j(y,x). In particular, for j∈^as(G) j∈(G) j(x,y)=- j(y,x), ∫φ· j= ∬_G φ(x) η(x,y) j(x,y). The following two lemmas will be employed for fixed. Let μ∈^+(), η^→ be as in (<ref>) such that (<ref>) – (<ref>) are satisfied. Let (ρ^)_>0⊂() and (j^)_>0⊂(G^ ) be such that (μ,η^;ρ^,j^)< ∞. Then, for any measurable Φ G^ →_+, it holds 1/2∬_G^ Φ η^j^≤√((μ,η^;ρ^,j^)∬_G^ Φ^2η^(ρ^⊗μ+μ⊗ρ^)). By Remark <ref>, this follows from <cit.>. Let (μ^)_>0⊂^+() and (η^)_>0 be families satisfying (<ref>), (<ref>), and (<ref>) – (<ref>), respectively, uniformly in . Let (ρ^)_>0⊂() and (j^)_>0⊂(G^ ). Then, for any measurable Φ:G^ →_+ satisfying Φ(x,y)≤x-yx-y^2, we have 1/2∬_G^ Φ ηj^≤sup_>0√(2(μ^,η^;ρ^,j^)), k=1,2. Keeping in mind Remark <ref> and the upper bound for Φ, this immediately follows from <cit.>. We consider the following nonlocal continuity equation in flux form ∂_tρ_t+· j_t=0 on (0,T)×, where ρ=(ρ_t)_t∈[0,T] and j=( j_t)_t∈[0,T] are unknown Borel families of measures in () and (G), respectively. Equation (<ref>) is understood in the weak form, i.e. 
∀φ∈ C_c^∞((0,T)×), ∫_0^T∫_∂_tφ_t(x)ρ_t(x) t +1/2∫_0^T∬_Gφ_t(x,y)η(x,y) j_t(x,y) t=0. Since |φ(x,y)|≤||φ||_C^1(2∧|x-y|), the weak formulation is well-defined under the integrability condition ∫_0^T∬_G(2∧|x-y|)η(x,y)j_t(x,y) t<∞ . The integrability condition (<ref>) is automatically satisfied by any pair (ρ, j), which satisfies (μ,η;ρ, j)< ∞, due to Corollary <ref>. Hence we arrive at the following definition of weak solution of the nonlocal continuity equation: A pair (ρ, j)= ((ρ_t)_t∈[0,T],(j_t)_t∈[0,T]) with ρ_t∈() and j_t∈(G) is called a weak solution to the nonlocal continuity equation (<ref>) provided that * ρ is weakly continuous curve in (); * j is a Borel-measurable curve in (G); * the pair (ρ, j) satisfies (<ref>). We denote the set of all weak solutions on the time interval [0,T] by _T. For ϱ_0,ϱ_1∈(), a pair (ρ, j)∈(ϱ_0,ϱ_1) if (ρ, j)∈_1 and in addition ρ_0=ϱ_0 and ρ_1=ϱ_1. When the dependence of η on >0 needs to be emphasized, we will write ^_T, ^ and ^(ϱ_0,ϱ_1) for the respective objects. Any weak solution satisfying (<ref>), which additionally satisfies the integrability condition (<ref>) has a weakly continuous representative and hence is a weak solution in the sense of Definition <ref>. Let (ρ, j)∈_T for some T>0. Then there exists a weakly continuous curve (ρ̅_t)_t∈[0,T]⊂() such that ρ̅_t=ρ_t for a.e. t∈[0,T]. Moreover, for any φ∈ C_c^∞([0,T]×) and all 0≤ t_0≤ t_1≤ T it holds ∫_φ_t_1(x)ρ̅_t_1(x) -∫_φ_t_0(x)ρ̅_t_0(x)=∫_t_0^t_1∫_∂_tφ_t(x)ρ_t(x) t + 1/2∫_t_0^t_1∬_Gφ_t(x,y)η(x,y) j_t(x,y) t. See <cit.> and <cit.>. An important property is the preservation of second moments, uniformly in . Let (μ^)_>0⊂^+() and (η^)_>0 be families satisfying (<ref>), (<ref>), and (<ref>) – (<ref>), respectively, uniformly in . Let (ρ_0^)_⊂_2() be such that sup_>0 M_2(ρ_0^) < ∞ and (ρ^, j^)_n ⊂_T^ so that sup_>0(μ^,η^;ρ^, j^)<∞. Then, sup_>0sup_t∈ [0,T]M_2(ρ_t^) < ∞. By Remark <ref>, this follows from <cit.>. The dyamical nonlocal quasi-metric is defined as follows. For μ∈^+(), η^ as before and ϱ_0, ϱ_1 ∈_2(), we define a nonlocal extended quasi-metric by _μ,η^(ϱ_0,ϱ_1)^2 =inf{∫_0^1 (μ,η^;ρ_t,j_t) t: (ρ, j)∈(ϱ_0, ϱ_1)}. When μ and η^ are clear from the context or the dependence on >0 is not important, we will shorten notation by writing or _ instead of _μ,η^. Properties of _μ,η^ can be found in <cit.>, including that it is indeed an extended quasi-metric on _2(). We denote by ^2([0,T];(𝒫_2(),𝒯_μ,η^)) the set of 2-absolutely continuous curves with respect to _μ,η^, that is for such a 2-absolutely continuous curves ρ there exists m∈ L^2([0,T]) such that 𝒯_μ,η^(ρ_s,ρ_t) ≤∫_s^t m(τ)τ for all 0≤ s ≤ t ≤ T . The (forward) metric derivative of a curve ρ∈^2([0,T];(𝒫_2(),𝒯_μ,η^)) is defined for a.e. t∈[0,T] by ρ_t'_μ,η^lim_τ↘ 0𝒯_μ,η^(ρ_t,ρ_t+τ)/τ = lim_τ↘ 0𝒯_μ,η^(ρ_t-τ,ρ_t)/τ. We often shorten the notation by writing ρ_t'_ instead of ρ_t'_μ,η^. We emphasize that the metric derivative defined above is only forward-in-time, or one-sided, due to the lack of symmetry for the nonlocal transportation cost . The metric derivative is well-defined as a consequence of the works <cit.> and <cit.> (see also <cit.>), which generalize <cit.> to the asymmetric setting. Moreover, the metric derivative can be identified with the action of a suitable minimal flux as a consequence of <cit.>, the proof of which is based on <cit.> and <cit.>. 
Noting that the time reparametrizations in these proofs preserve orientation, it is clear that they generalize to the quasi-metric case based on the one-sided derivative from Definition <ref>. The measure-flux form of (<ref>) as nonlocal continuity equation is specified below. A curve ρ [0,T]→_2(^d) is called a weak solution to (<ref>) if, for the flux j [0,T]→(G) defined by j_t(x,y)=δ/δρ(x,y)_- ρ_t(x)μ(y)-δ/δρ(x,y)_+ ρ_t(y)μ(x), the pair (ρ, j) is a weak solution to the continuity equation ∂_tρ_t+· j_t=0 on [0,T]×, according to Definition <ref>. Weak solutions of (<ref>) are curves of maximal slope with respect to a (one-sided) strong upper gradient, which is the square root of the metric slope defined below. This allows to identify the set of weak solutions as the zero-level set of the so called De Giorgi functional, <cit.>, in the quasi-metric space (_2(),)), being the quasi-distance recalled in Definition <ref>, cf. <cit.>. For any ρ∈_2(), let the metric slope at ρ be given by _(ρ) ⟨[|]μ,η^;ρ -δ/δρ(ρ). For any ρ∈^2([0,T];(_2(),)), the graph De Giorgi functional at ρ is defined as _(ρ)(ρ_T)-(ρ_0)+1/2∫_0^T⟨[|]_(ρ_τ) + |ρ_τ'|_^2τ. §.§ The limiting local equation In the graph-to-local limit the equation obtained is a continuity equation, which can be interpreted in flux form. A pair (ρ, j)= ⟨[|](ρ_t)_t∈[0,T],(j_t)_t∈[0,T] with ρ_t∈() and j_t∈(;) is called a weak solution to the continuity equation ∂_tρ_t+∇· j_t=0 provided that * ρ is a weakly continuous curve in (); * j is a Borel-measurable curve in (;) such that ∫_0^T∫_|j_t|() t<∞; * the pair (ρ, j) satisfies (<ref>) in the sense that for any 0≤ t_0≤ t_1≤ T, ∫_t_0^t_1∫_∂_tφ_t(x)ρ_t(x) t + ∫_t_0^t_1∫_∇φ_t(x)· j_t(x) t =∫_φ_t_1(x)ρ_t_1(x)-∫_φ_t_0(x)ρ_t_0(x) ∀φ∈ C_c^1([0,T]×). The set of all weak solutions on the time interval [0,T] is denoted by _T. For ϱ_0,ϱ_1∈(), a pair (ρ, j)∈(ϱ_0,ϱ_1) if (ρ, j)∈_1 and in addition ρ_0=ϱ_0 and ρ_1=ϱ_1. In the definition above, we use <cit.> which ensures formulation (<ref>) is legitimate, up to considering a continuous representative in the left-hand side. We also observe that one can consider test functions only space dependent, bounded, and Lipschitz, Lip_b(); then (<ref>) becomes t∫_φ(x) ρ_t(x) = ∫_∇φ(x) · j_t(x) ∀φ∈Lip_b(). The localisation outlined in the introduction provides in the limit a kinetic relation depending on a tensor, . More precisely, the general form is ∂_tρ_t+(ρ_t v_t)=0, for a tensor :→× Borel measurable, continuous, symmetric, and uniformly elliptic, cf. Proposition <ref>. As aforementioned, in <cit.>, the author focuses on the case v=-∇ (F'(u)+V) and provides a well-posedness theory in an equivalent Wasserstein space. Indeed, Lisini considers the Riemannian metric on induced by ^-1 (uniformly elliptic and bounded) d_(x,y)=inf{∫_0^1√(⟨^-1(γ(t))γ̇(t),γ̇(t)⟩):γ∈([0,1];), γ(0)=x,γ(1)=y}, and the corresponding Wasserstein distance is W_^2(μ,ν)inf{∬ d_^2(x,y)γ(x,y):γ∈Γ(μ,ν)} . This is equivalent to the dynamical version, for ϱ_0,ϱ_1∈_2(_^d), W_^2(ϱ_0,ϱ_1)=inf{∫_0^1 * j_t/ρ_t_L^2(ρ_t;_^d)^2 t:(ρ, j)∈(ϱ_0,ϱ_1)}, being * j/ρ_L^2(ρ;_^d)^2=∫_*^-1(x) j/ρ(x), j/ρ(x)ρ(x). We observe that the uniform ellipticity of the tensor ^-1 implies that the distance d_ is equivalent to the Euclidean one and we denote by _^d the corresponding metric space (,d_). In the space _2(^d_) equipped with the 2-Wasserstein distance, W_, Eq. 
(<ref>), with the vector field v=-∇ (F'(u)+V), is a gradient flow of the free energy associated, meaning it is a curve of maximal slope with respect to a specific strong upper gradient, cf. <cit.>. Let us recall the definitions of strong upper gradient and curve of maximal slope. A function g:_2(^d_) → [0,+∞] is called a strong upper gradient for the functional if for every absolutely continuous curve ρ∈^2([0,T];(_2(^d_),W_)) the function g∘ρ is Borel and |(ρ_t)-(ρ_s))|≤∫_s^t g(ρ_τ)ρ_τ'τ, ∀ 0<s≤ t<T. In particular, if (g∘ρ_·)ρ_·'∈ L^1(0,T), then ∘ρ is absolutely continuous and |(∘ρ)'|≤ g(ρ_t)ρ_t' t∈[0,T]. A curve ρ∈^2([0,T];(_2(^d_),W_) is called a curve of maximal slope for with respect to the strong upper gradient g if and only if t↦(ρ_t) is a non-increasing map satisfying (ρ_t)-(ρ_s))+1/2∫_s^t(g(ρ_τ)^2+ρ_τ'^2)τ=0, ∀ 0<s≤ t<T. The equation we study, ∂_tρ_t=div(ρ_t ∇ K*ρ_t), <ref> differs from that in <cit.> since we do not consider diffusion, but nonlocal interaction instead. Following <cit.> and <cit.>, we can formulate (<ref>) as gradient flows of the energy (<ref>) using the corresponding strong upper gradient, that is the square root of the metric slope defined below, together with the corresponding De Giorgi functional in the continuum setting. Let ρ∈_2(). The metric slope of the nonlocal interaction energy is given by _ (ρ) ∫_[]∇δ/δρ,∇δ/δρρ. For any ρ∈^2([0,T];(_2(^d_),W_)), the local De Giorgi functional at ρ is defined as _(ρ)(ρ_T)-(ρ_0)+1/2∫_0^T⟨[|]_ (ρ_τ) + |ρ_τ'|^2_τ. In Section <ref> we prove that curves of maximal slope are weak solutions of (<ref>), cf. Theorem <ref>. For consistency, we state the definition of weak solutions for (<ref>). A curve ρ [0,T]→_2(^d) is called a weak solution to (<ref>) if, for the flux j [0,T]→(;) defined by j_t(x)=-∇δ/δρ(x)ρ_t(x), the pair (ρ, j) is a weak solution to the continuity equation ∂_tρ_t+∇· j_t=0 on [0,T]×, according to Definition <ref>. §.§ Main results The main result of the present work is the graph-to-local limit for the nonlocal interaction equation. Let (μ,ϑ) satisfy (<ref>), (<ref>) and (<ref>) – (<ref>). Let η^ be given by (<ref>) and assume K satisfies (<ref>) – (<ref>). For any >0 suppose that ρ^ is a gradient flow of in (_2(),_)), that is, _(ρ^) = 0 for any >0, with (ρ_0^)_⊂_2() be such that sup_>0 M_2(ρ_0^) < ∞. Then there exists ρ∈^2([0,T];(_2(^d_),W_)) such that ρ_t^⇀ρ_t as →0 for all t∈[0,T] and ρ is a gradient flow of in (_2(^d_),W_)), that is, _(ρ) = 0. The tensor is as in (<ref>). The previous theorem is also a rigorous link between Finslerian and Riemannian gradient flows of the nonlocal interaction energy. Since curves of maximal slope in the local setting are weak solutions of (<ref>), we actually provide an existence result via stability. In particular, this provides an interesting property of graphs as they can be used to discretise in space the equation under study. Let μ,ϑ satsify (<ref>), (<ref>), and (<ref>) – (<ref>), respectively. Consider the tensor as in (<ref>). Assume K satisfies (<ref>) – (<ref>). Let ϱ_0∈_2() such that ϱ_0≪μ. There exists a weakly continuous curve ρ:[0,T]→_2() such that ρ_t≪μ for all t∈[0,T] which is a weak solution to (<ref>) with initial datum ρ_0 = ϱ_0. § LINKING GRAPH AND LOCAL STRUCTURE In this section we connect the nonlocal structure given by the (sequence of) graphs (μ,η^) with the local Euclidean one. The first step is to construct weak solutions of from those of . Second, we show that in the limit → 0 the antisymmetric part of our quasi-metric structures vanishes. 
Then, we derive the local structure from the remaining symmetric part, characterizing the tensor  in terms of the base measure μ, and the edge connectivity function ϑ, thereby identifying the local geometry. Hereafter, we assume the pair (μ,ϑ) satisfies (<ref>), (<ref>) and (<ref>) – (<ref>), and we let η^ be given by (<ref>). §.§ Continuous reconstruction and compactness First, we show that solutions of the nonlocal continuity equation can be represented as those of the local continuity equation by means of a specific choice of the flux. Let j∈() satisfy the integrability condition ∬_x-yη(x,y)*j(x,y)<∞. Then there exists ∈(;) such that 1/2∬_φ ηj =∫_^d∇φ·, for all φ∈ C_c^1(). In particular, if (ρ, j)∈_T such that (μ,η;ρ, j)<∞, then there exists (_t)_t∈[0,T]⊂(^d;^d) such that (ρ, )∈_T. The construction of the local flux hinges on the representation of nonlocal gradients as integrals along elementary needles, following the terminology introduced by <cit.>. Let (x,y)∈ fixed and define, for any measurable A∈(^d), the measure σ_x,y∈^+(^d) by σ_x,y[A] = ^1(A∩ x,y) with x,y*(1-s)x+s y ∈^d: s∈ [0,1]. Introducing the unit vector ν_x,yy-x/y-x, we get the identities φ(y)-φ(x) = ∫_0^y-x∇φ(x+sν_x,y) ·ν_x,y s = ∫_ x,y∇φ(ξ) ·ν_x,y^1(ξ) =∫_∇φ(ξ) ·ν_x,yσ_x,y(ξ). Next, we define λ∈() by λ(A)∬_A x-yη(x,y) j(x,y) and observe that it is finite due to the integrability assumption on j. Splitting λ into its positive and negative parts and renormalizing these parts, we can employ <cit.> to obtain a family of counting measures (λ^N)_N∈ on such that λ^N⇀λ narrowly as n→∞. These measures can be written as λ^N = ∑_k=1^N λ_k^N δ_(x_k^N,y_k^N) with sup_N∈∑_k=1^N λ_k^N <∞ . by the finiteness of λ. Defining σ̃_x,y∈(;) by σ̃_x,y(A)ν_x,yσ_x,y(A)/x-y, the representation of nonlocal gradients in (<ref>) yields 1/2∬_φ(x,y)/x-yλ^N(x,y) =1/2∬_∫_∇φ(ξ) ·σ̃_x,y(ξ)λ^N(x,y) =1/2∑_k=1^N∫_∇φ(ξ) ·σ̃_x_k^N,y_k^N(ξ)λ_k^N =∫_∇φ(ξ) ·1/2∑_k=1^N λ_k^N σ̃_x_k^N,y_k^N(ξ). Motivated by this identity, we define ^N ∈(;) by ^N(A) 1/2∑_k=1^N λ_k^N σ̃_x_k^N,y_k^N(A). Observing that σ̃_x,y() = ν_x,y^1( x,y ∩)/x-y = 1, we obtain the uniform bound sup_N∈*^N(^d)≤sup_N∈∑_k=1^N λ_k^N <∞, which implies that there exists ∈(;) and a (not relabelled) subsequence such that ^N∗⇀ as N→∞. We note that for any φ∈ C^1_c() the map ∋ (x,y)↦φ(x,y)/x-y is continuous and bounded. Indeed, given (x,y)∈ let 0<δ< x-y/2. Then, due to the mean-value theorem, for any (x',y')∈ B_δ((x,y))⊂^2d with x' y', we obtain the upper bound []φ(x,y)/x-y - φ(x',y')/x'-y' ≤[]φ(x,y)-φ(x',y')/x-y+[]φ(x',y') []x-y-x'-y'/x-yx'-y' ≤2∇φ_L^∞δ/x-y+4φ_L^∞δ/x-y(x-y-2δ), where we used that x-y-x'-y'≤x-x'-y+y'≤ 2δ. This upper bound vanishes as δ→ 0 and thereby ensures continuity. Boundedness follows again from the mean value theorem and the boundedness of ∇φ. Using this for the narrow convergence of λ^N ⇀λ and the weak-^∗ convergence along a subsequence of ^N to , along this subsequence we obtain the identity 1/2∬_φ(x,y)/x-yλ^N(x,y) = ∫_∇φ(ξ) ·^N(ξ) ↓ N→∞ ↓ N→∞ 1/2∬_φ(x,y)/x-yλ(x,y) = ∫_∇φ(ξ) ·(ξ). Since the left limit is unique, the right limit is independent of the particular subsequence and thus ÷ is unique as well. Finally, the second claim follows from the first part of Corollary <ref>, and the definitions of solutions to and , Lemma <ref> and Definition <ref>, respectively. The next two results concern compactness for solutions of the continuity equation constructed in Proposition <ref>. 
Let (μ^)_>0⊂^+() and (η^)_>0 be families satisfying (<ref>), (<ref>), and (<ref>) – (<ref>), respectively, uniformly in . For any >0, let ρ^ (ρ_t^)_t∈[0,T]⊂() and j^ (ρ_t^)_t∈[0,T]⊂(G) such that sup_>0(μ^,η^;ρ^, j^)<∞. Moreover, consider (^)_>0 associated to ( j^)_>0 as in Proposition <ref>. Then * (∫_·^_tt)_>0 is weakly-^∗ compact in ((0,T)×;); * (t↦^_t)_ is equi-integrable w.r.t. ^1. In particular, there exists ( _t)_t∈[0,T]⊂(;) such that (along a subsequence) we have ∫_·^_tt∗⇀∫_·t weakly-^∗ in ((0,T)×;). Combining (<ref>), Corollary <ref> and Hölder's inequality, we obtain for any measurable I⊂[0,T] |^|(I×) ≤∫_I^_t() t = sup_φ∈ C^1_c()∖{0}1/∇φ_L^∞∫_I∫_∇φ(x)·^_t(x) t (<ref>) =sup_φ∈ C^1_c()∖{0}1/2∇φ_L^∞∫_I∬_G^ φ(x,y)η^(x,y) j^_t(x,y) t ≤1/2∫_I∬_G^ (2x-y)η^(x,y)j_t^(x,y) t ≤sup_>0√(2I(μ^,η^;ρ^, j^)), which implies compactness as |I|≤ T. The equi-integrability follows from (<ref>). Finally, the representation of the limit is a consequence of disintegration and the equi-integrability. Let (μ^)_>0⊂^+() and (η^)_>0 be families satisfying (<ref>), (<ref>), and (<ref>) – (<ref>), respectively, uniformly in . Let (ρ^, j^)_>0⊂_T be such that sup_>0(μ^,η^;ρ^, j^)<∞ and let ^ be associated to j^ as in Proposition <ref>. Then there exists a (not relabeled) subsequence of pairs (ρ^,^)∈_T and a pair (ρ,)∈_T such that ρ^_t⇀ρ_t narrowly in () for a.e. t∈[0,T] and such that ∫_·^_tt∗⇀∫_·t weakly-^∗ in ((0,T)×;). The weak-^∗ convergence of ∫_·^_tt to ∫_·_tt follows from Lemma <ref>. Narrow convergence of ρ^_t to ρ_t can be obtained by using <cit.>. More precisely, compactness is ensured by Lemma <ref> and Prokhorov's theorem, whilst equicontinuity is a consequence of the equi-integrability of the flux, similarly, e.g., to <cit.>. §.§ Limiting tensor structure This section is devoted to identifying the tensor for the limiting interaction equation (<ref>). The limiting structure depends on the underlying Finslerian nature of (<ref>) as gradient flow of in ((),_μ,η^), being _μ,η^ the upwind mass transportation cost, cf. Definition <ref>. In order to obtain the tensor we need to recall the definition of graph tangent fluxes introduced in <cit.>. Given a pair (μ,η^), the space of graph tangent fluxes at ρ∈_2() is defined by T^_ρ_2() { j∈(G^ ):(μ,η^;ρ,j)<∞ and (μ,η^;ρ,j)≤(μ,η^;ρ,j+j_𝖽𝗂𝗏) ∀ j_𝖽𝗂𝗏∈(G^ )}, where (G^ )[]j_𝖽𝗂𝗏∈(G^ ): ∬_G^ φ j_𝖽𝗂𝗏 = 0 ∀φ∈ C_c^∞(). We define the space of tangent velocities by T^_ρ_2() *v:G^ →: v_+(ρ⊗μ)-v_-(μ⊗ρ)∈ T^_ρ_2(). In particular, the set φ:φ∈ C_c^∞() is dense in T^_ρ_2() with respect to a suitable L^2-norm (cf. <cit.>). The crucial operator for the Finslerian structure is the tangent-to-cotangent mapping l^_ρ:T^_ρ_2()→⟨[|]T^_ρ_2()^∗, which is given, for a fixed v∈T^_ρ_2(), by l^_ρ(v)[w]1/2∬_G^ wη^*v_+(ρ⊗μ)-v_-(μ⊗ρ), for any w∈T^_ρ_2(). Furthermore, l_ρ^ is also the first variation of 1/2(μ,η^,ρ,·) at v in the direction w, and takes a similar role of a Riemannian metric in the Finslerian framework from <cit.>. In this subsection we will rigorously identify the limiting tensor arising from the Finslerian structure by showing in Propositions <ref> and <ref> that l^_ρ(φ)[ψ] = 1/2∬_G^ φ(x,y)ψ(x,y)η^(x,y)ρ(x)μ(y) + o(1) =∫_∇φ(x)·^(x)∇ψ(x)ρ(x)+o(1) where ^(x) 1/2∫_ (x-y)⊗(x-y) η^(x,y)μ(y). Then, we will see that ^ converges to a unique limit on compact subsets of ^d as → 0 (cf. Proposition <ref>), before concluding that l^_ρ itself converges to a limiting inner product containing (cf. Theorem <ref>). 
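Before turning to the rigorous statements, let us record the formal mechanism behind the displayed expansion; the computation is purely heuristic and all error terms are quantified in the next two propositions. For \(\varphi,\psi\in C_c^2(\mathbb{R}^d)\), a first-order Taylor expansion gives \(\overline\nabla\varphi(x,y)=\nabla\varphi(x)\cdot(y-x)+O(|x-y|^2)\), and since \(\eta^\varepsilon\) only charges pairs with \(|x-y|\le C_{\mathrm{supp}}\,\varepsilon\), one formally obtains
\[
\frac12\int \overline\nabla\varphi(x,y)\,\overline\nabla\psi(x,y)\,\eta^\varepsilon(x,y)\,\mathrm{d}\mu(y)
\;\approx\;
\nabla\varphi(x)\cdot\Big(\tfrac12\int (x-y)\otimes(x-y)\,\eta^\varepsilon(x,y)\,\mathrm{d}\mu(y)\Big)\nabla\psi(x),
\]
the bracket being the approximate tensor introduced above, with errors vanishing as \(\varepsilon\to0\). The upwind asymmetry enters only through the splitting \(a_+=\tfrac12(a+|a|)\), and the contribution of the modulus term is shown below to vanish in the limit.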
Our first main step towards these goals is a proof that l^_ρ is almost symmetric, with the antisymmetric part vanishing as → 0. To see this, we require the following two technical lemmas. We remind the reader of the assumption that (μ,ϑ) satisfy (<ref>) – (<ref>) and (<ref>), (<ref>), and that η^ and G^ are given by (<ref>) and (<ref>), respectively. For every φ∈ C_c() it holds G^ ∩φ≤ 2 φ^d, for all >0. Furthermore, for every >0 the intersection G^ ∩φ is compact in . First, observe that φ is contained in (φ×)∪(×φ). Indeed, if φ(x,y) 0, then either φ(x) 0 or φ(y) 0. Thus, due to the symmetry of G^ and the assumption (<ref>), we obtain *G^ ∩φ≤*G^ ∩(φ×)∪(×φ) ≤ 2 *G^ ∩ (φ×) = 2 ∫_φ*G_ x^ x ≤ 2φ^d. The compactness of G^ ∩φ is an easy consequence of the Assumption (<ref>). Indeed, for (x,y)∈ G^ ∩(×φ) we have y∈φ and x-y≤ and hence x∈φ+B_, which for fixed >0 is also compact. Due to the symmetry of G^, this concludes the proof. Let f_k ∈ C(), k=1,2, be 1-Lipschitz functions. Then, for any _0>0 and any φ,ψ∈ C_c^2(^d), the family of maps (ξ^)_0<≤_0, ξ^:→ defined by ξ^(x)∫_^d∖*xf_1⟨[|]φ(x,y)f_2⟨[|]ψ(x,y)η^(x,y)μ(y) is equicontinuous and ⋃_0<≤_0ξ^ is contained in a compact set. Moreover, the family satisfies the uniform bound sup_>0ξ^_L^∞≤∇φ_L^∞∇ψ_L^∞ and is therefore compact by Ascoli-Arzelà. Due to the absolute continuity of μ, we can apply the change of variables w=y-x and rewrite ξ^ as ξ^(x) =∫_∖0f_1⟨[|]φ(x,x+w)f_2⟨[|]ψ(x,x+w)η^(x,x+w)μ(x+w) w =∫_∖0f_1⟨[|]φ(x,x+w)/wf_2⟨[|]ψ(x,x+w)/ww^2η^(x,x+w)μ(x+w) w = ∫_∖0f̃_1(x,w)f̃_2(x,w)η^(x,w)μ(x+w) w, for f̃_1(x,w) f_1(φ(x,x+w))/|w|, f̃_2(x,w) f_2(ψ(x,x+w))/|w|, and η^(x,w) |w|^2η^(x,x+w) = 1/^d+2|w|^2ϑ(x+w/2,w/). In order to prove the uniform equicontinuity, we note that 1-Lipschitz assumption on f_k implies the following elementary inequality for a,b,c,d∈ and k=1,2: f_k(a± b)-f_k(c± d) ≤(a± b) - (c± d)≤*a-c+*b-d. Considering the difference f̃_1(x,w) - f̃_1(y,w) and noting that all considerations apply analogously to f_2 and ψ, we first assume that w>x-y^α for 0<α<1. In this case, (<ref>) and the mean value theorem yield []f̃_1(x,w) - f̃_1(y,w)w =[]f_1⟨[|]φ(x,x+w) - f_1⟨[|]φ(y,y+w) ≤[]φ(y+w,x+w)+[]φ(x,y) ≤ 2 ∇φ_L^∞*x-y < 2 ∇φ_L^∞*x-y^1-α*w. Now considering the case w≤x-y^α, using Taylor with Lagrange remainder and again (<ref>), we obtain []f̃_1(x,w) - f̃_1(y,w)w = []f_1⟨[|]φ(x,x+w) - f_1⟨[|]φ(y,y+w) = []f_1⟨[|]w·⟨[|]∇φ(x)+12H_φ(ζ_x,w) w-f_1⟨[|]w·⟨[|]∇φ(y)+12H_φ(ζ_y,w) w ≤[]w·⟨*|∇φ(x)-∇φ(y)+[]w·⟨[|]12⟨*|H_φ(ζ_x,w)-H_φ(ζ_y,w) w ≤⟨*|ω_∇φ(x-y)+H_φ_L^∞x-y^αw. Arguing analogously for f_2, we have f̃_k(x,w) - f̃_k(y,w)≤ω(x-y) for a suitable ω∈ C([0,∞);[0,∞)) with ω()→ 0 as → 0, which only depends on α, and first and second derivatives of φ and ψ. Regarding the difference η^(x,w)-η(y,w), due to (<ref>) and (<ref>) we have ∫_∖0η^(x,w)-η^(y,w) w = ∫_∖0w^2/^d+2*ϑ⟨*|x+w/2,w/-ϑ⟨*|y+w/2,w/ w = ∫_B_w^2*ϑ⟨*|x+ w/2,w-ϑ⟨*|y+ w/2,w w ≤ω_ϑ(x-y)∫_B_w^2 w ≤B_1C_𝗌𝗎𝗉𝗉^d+2ω_ϑ(x-y). Furthermore, since both f_1 and f_2 are 1-Lipschitz, the mean value theorem yields *f̃_1(x,w)≤∇φ_L^∞ and *f̃_2(x,w)≤∇ψ_L^∞. Combining this with the previous estimates, (<ref>), (<ref>) and Lemma <ref>, we obtain ∫_∖0[]f̃_1(x,w)f̃_2(x,w) μ(x+w)η^(x,w) - f̃_1(y,w)f̃_2(y,w) μ(y+w)η^(y,w) w ≤∫_∖0[]f̃_1(x,w)-f̃_1(y,w)f̃_2(x,w)η^(x,w)μ(x+w) w +∫_∖0f̃_1(y,w)[]f̃_2(x,w)-f̃_2(y,w)η^(x,w)μ(x+w) w +∫_∖0f̃_1(y,w)f̃_2(y,w)[]η^(x,w)-η^(y,w)μ(x+w) w +∫_∖0f̃_1(y,w)f̃_2(y,w)η^(y,w)[]μ(x+w)-μ(y+w) w ≤⟨*|∇φ_L^∞+∇ψ_L^∞ω(x-y) + C_μ∇φ_L^∞∇ψ_L^∞B_1C_𝗌𝗎𝗉𝗉^d+2ω_ϑ(x-y) +∇φ_L^∞∇ψ_L^∞ω_μ(x-y). 
Together with the former considerations, this proves the equicontinuity of (ξ^)_0<≤_0. Next, we observe that due to Lemma <ref> for any fixed >0 the support of f_1⟨[|]φ(x,y)f_2⟨[|]ψ(x,y)η^(x,y) is compact in ×. Consequently, ξ^ is compact in for any fixed >0 and so is the union ⋃_0<≤_0ξ^. Finally, the bound ξ^_L^∞≤∇φ_L^∞∇ψ_L^∞ again follows from the mean value theorem, the fact that both f_1 and f_2 are 1-Lipschitz, and the bound (<ref>). Let (ρ^)_>0 be such that ρ^⊂μ and ρ^ converges narrowly to some ρ∈() as → 0. Then, for all φ,ψ∈ C^2_c(), it holds l^_ρ^(φ)[ψ] = 1/2∬_G^ φ(x,y)ψ(x,y)η^(x,y)ρ^(x)μ(y) + o(1) . Using that a_+=(1/2)(a+|a|), for a∈, and antisymmetry in (<ref>) we rewrite l^_ρ^(φ)[ψ] = ∬_G^ (φ)_+(x,y)ψ(x,y)η^(x,y)ρ^(x)μ(y) =1/2∬_G^ φ(x,y)ψ(x,y)η^(x,y)ρ^(x)μ(y) +∫_⟨*|1/2∫_|φ|(x,y)ψ(x,y)η^(x,y)μ(y)ρ^(x) + . We want to show that vanishes as → 0. To this end, we introduce another mollification at scale α∈(0,1) and define ρ^ρ^^dν_^α∗ρ^^d, where ν is a mollifier and ν_^α(x) 1/^α dν⟨*|x/^α. Introducing ξ̅^(x)1/2∫_|φ|(x,y)ψ(x,y)η^(x,y)μ(y), we can write = ∫_ξ̅^⟨*|ρ^-ρ^ + ∫_ξ̅^ρ^_1 + _2. Due to Lemma <ref> applied to the 1-Lipschitz functions f_1 = · and f_2 =, we have (ξ̅^)_>0⊂ C_0() and ξ̅^ converges strongly in C() to some ξ̅^0∈ C_0() as → 0. Hence, we infer _1 converges to 0 as → 0 by weak-strong convergence (see e.g. <cit.>), since ρ^ and ρ^ weakly-^∗ converge to the same limit ρ. For _2 we symmetrize and estimate _2 = 1/4∬_G^ []φ(x,y) ψ(x,y) η^(x,y)⟨[|]ρ^(x)μ(y)-μ(x)ρ^(y) ≤1/4∬_G^ []φ(x,y) []ψ(x,y) η^(x,y) []ρ^(x)μ(y)-μ(x)ρ^(y) x y ≤1/4∇φ_L^∞∇ψ_L^∞∬_G^ |x-y|^2 η^(x,y) []ρ^(x)μ(y)-μ(x)ρ^(y) x y ≤1/4∇φ_L^∞∇ψ_L^∞/^d∬_G^ ρ^(x) []μ(y)-μ(x) x y +1/4∇φ_L^∞∇ψ_L^∞∬_G^ |x-y|^2 η^(x,y) μ(x) []ρ^(x)-ρ^(y) x y 1/4∇φ_L^∞∇ψ_L^∞⟨*|_2,1 + _2,2 . For the first term (<ref>), (<ref>) and (<ref>) yield the bound _2,1≤ω_μ() . Since ν_^α(·-z)⊂ B_^α(z) and the Lebesgue measure is translation invariant, Lemma <ref> provides us the estimate G^ ∩ν_^α(·-z)≤ 2B_1^d(1+α). With these preliminary considerations we apply the mean value theorem to the mollifier ν together with the bounds (<ref>) and (<ref>) to obtain _2,2 ≤1/^α d∫_∬_G^ x-y^2η^(x,y)μ(x)*ν⟨*|x-z/^α-ν⟨*|y-z/^α x yρ^(z) ≤1/^d(1+α) C_μ∫_∬_G^ ∩ν_^α(·-z)*∇ν⟨*|ξ·⟨*|x-y/^α x yρ^(z) ≤^1-α-d(1+α)∇ν_L^∞ C_μ∫_G^ ∩ν_^α(·-z)ρ^(z) ≤ 2^1-α∇ν_L^∞B_1 C_μ . As α < 1 this concludes the proof. The next step consists in the identification of the tensor from the remaining symmetric part of the graph structure. For all φ,ψ∈ C^1_c() and all x∈, we have 1/2∫_φ(x,y)ψ(x,y)η^(x,y)μ(y) = ∇φ(x)·^(x)∇ψ(x)+o(1) , where ^(x) 1/2∫_ (x-y)⊗(x-y) η^(x,y)μ(y). By means of Taylor's theorem with Peano remainder and (<ref>), we obtain for any x∈ the identity 1/2∫_φ(x,y)ψ(x,y)η^(x,y)μ(y) =1/2∫_∇φ(x)·(x-y)⊗(x-y)∇ψ(x)η^(x,y)μ(y)+(ω()) =∇φ(x)·⟨*|1/2∫_ (x-y)⊗(x-y) η^(x,y)μ(y)∇ψ(x)+(ω()), where the modulus ω:[0,∞)→ [0,∞) is defined by ω(δ) = ∇φ_L^∞ω_∇ψ(δ)+∇ψ_L^∞ω_∇φ(δ)+ω_∇φ(δ)ω_∇ψ(δ), and where we used the fact that x-y≤ by Assumption (<ref>). Next, we show that the tensors ^ converge to a unique limiting tensor as → 0. There exists a unique ∈ C(^d; ^d× d) such that for any compact K⊂^d and any _0>0 the sequence ⟨*|^__0 ≥>0 with ^ defined as in (<ref>) converges strongly to in C(K;^d× d) as → 0. The limiting tensor, given by (x) 1/2μ(x)∫_ w⊗ w ϑ(x,w) w, is bounded and uniformly continuous. Furthermore, the tensor is uniformly elliptic, i.e. there exist c,C>0 such that for any x,ξ∈ we have cξ^2 ≤ξ·(x)ξ≤ Cξ^2. Finally, for any x∈ the matrix (x) is symmetric. 
By a change of variables and arguing analogously to (<ref>) and (<ref>), we obtain ^(x)-^(y) = *1/2∫_ w⊗ w [η^(x,x+w)μ(x+w)-η^(y,y+w)μ(y+w)] w ≤1/2C_𝗌𝗎𝗉𝗉^d+2B_1C_μω_ϑ(x-y) + 1/2ω_μ(x-y). Similar to this, using Assumption (<ref>) instead of Assumptions (<ref>) and (<ref>), we obtain ^_L^∞≤1/2. Hence, the family (^)__0≥>0 is equicontinuous and equibounded. In particular, for every compact K⊂^d and every vanishing sequence (_n)_n∈, there exists a (not relabeled) subsequence and a tensor ∈ C(;^d× d) with _L^∞≤1/2 and (x)-(y)≤1/2ω_μ(x-y) such that ^_n→ strongly in C(K;^d× d) as n→∞. To identify the limit, we calculate 2^(x) = ∫_ w⊗ w 1/^d+2ϑ⟨*|x+w/2,w/μ(x+w) w = ∫_ w⊗ w ϑ(x+ w/2,w) μ(x+ w) w = μ(x)∫_ w⊗ w ϑ(x,w) w + (ω_ϑ()+ω_μ()). This shows the explicit form (<ref>) of the tensor as well as its independene of the particular subsequence (_n)_n∈ and the particular compact set K⊂^d. Having shown (<ref>), the uniform ellipticity of is an easy consequence of (<ref>), (<ref>) and (<ref>), while the symmetry of the matrix (x) for any x∈ follows from (<ref>). Finally, we combine the previous results to derive the structure of the continuity equation (<ref>) from the Finsler-type product. The tangent-to-cotangent mapping l^_ρ:T^_ρ_2()→⟨[|]T^_ρ_2()^∗ defined in (<ref>) satisfies lim_→ 0l^_ρ^(φ)[ψ] = ∫_∇φ·∇ψρ, ∀φ,ψ∈ C^2_c(), with the tensor ∈ C(;^d× d) obtained as limit of (^)__0 ≥>0 defined as in (<ref>) from Proposition <ref>. By Proposition <ref>, there exists a limiting tensor ∈ C(^d;^d× d) such that for any compact K⊂^d we have ^→ strongly in C(K;^d× d) as → 0. In particular, for every φ,ψ∈ C^1_c(), we have that ∇φ·^∇ψ→∇φ·∇ψ strongly in C_0() as n→∞. As narrow convergence of measures implies weak-^∗ convergence of measures, this allows us to employ usual weak-strong convergence argument (cf. e.g. <cit.> for the strategy) to obtain lim_→ 0∫_∇φ(x)·^(x)∇ψ(x)ρ^ = ∫_∇φ(x)·(x)∇ψ(x)ρ. The proof is then concluded by employing Proposition <ref>, for which the C^2-regularity of the test functions is necessary. § GRAPH-TO-LOCAL LIMIT FOR CURVES OF MAXIMAL SLOPE The graph-to-local limit for (<ref>) is proven by exploiting the variational formulation of the equations as curves of maximal slopes, as explained in Sections <ref>, <ref>. The localising graph provided by (μ,η^), for η^ as in (<ref>), identifies a sequence of weak solutions of (<ref>), that is, for ρ^≪μ, μ-a.e. x and a.e. t, ∂_tρ_t^(x)+∫_ (K*ρ^_t)(x,y)_- η^(x,y) ρ_t^(x) μ(y) - ∫_(K*ρ^_t)(x,y)_+ η^(x,y) ρ_t^(y)=0. 𝖭𝖫^2𝖨𝖤_ In view of <cit.>, weak solutions of (<ref>) are curves of maximal slope in the Finslerian quasi-metric space (_2(), 𝒯_μ,η^), that is zero-level sets of the graph De Giorgi functional _(ρ^)=(ρ_T^)-(ρ_0^)+1/2∫_0^T⟨[|]_(ρ_τ^) + |ρ_τ'|_^2τ, with the metric slope _(ρ^)= ∬_[]⟨[|] K*ρ^_-(x,y)^2η^(x,y)ρ^(x)μ(y). We show that in the → 0 limit we obtain a zero-level set of the local De Giorgi functional _(ρ)=(ρ_T)-(ρ_0)+1/2∫_0^T⟨[|]_ (ρ_τ) + |ρ_τ'|^2_τ, where the metric slope is _ (ρ)=∫_[]∇δ/δρ,∇δ/δρρ, thereby proving Theorem <ref>. As byproduct, the graph-to-local limit provides a proof of Theorem <ref>, the existence of weak solutions of (<ref>). Indeed, we find that weak solutions are curves of maximal slopes in local setting, according to Definitions <ref> and <ref> with respect to the Wasserstein gradient flow structure of (<ref>) in the metric space (_2(^d_),W_). Throughout this section we fix the tensor as in (<ref>) for μ,ϑ satsifying (<ref>), (<ref>), and (<ref>) – (<ref>), respectively, and we consider η^ given by (<ref>) as before. 
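It is instructive to record what this setting amounts to in the isotropic example \(\mu=\mathcal{L}^d\), \(\vartheta(z,w)=C_d\,\mathbf{1}_{B_1}(w)\) from Section <ref>; this special case is stated only for illustration. With the normalization chosen there the limiting tensor is the identity, the weighted distance reduces to the Euclidean one, \(W_{\mathbb{T}}=W_2\), and the zero level set of the local De Giorgi functional characterizes the classical aggregation equation
\[
\partial_t\rho_t \;=\; \nabla\cdot\big(\rho_t\,\nabla K*\rho_t\big)
\]
as a gradient flow of the interaction energy in the ordinary Wasserstein space \((\mathcal{P}_2(\mathbb{R}^d),W_2)\). The graph-to-local limit established below therefore contains the Euclidean case as a particular instance.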
§.§ Lower limit for the metric derivative and metric slope In this section we derive the lower limits for the metric derivatives and the metric slopes, respectively. This will then be combined with a chain rule (cf. Proposition <ref>) in order to obtain the convergence of the zero level sets of the graph De Giorgi functionals to the zero level set of the local De Giorgi functional as → 0. Consider a family (ρ^)_>0⊂^2([0,T];(𝒫_2(),𝒯_)), with (ρ_0^)_⊂_2() such that sup_>0 M_2(ρ_0^) < ∞. Then there exists ρ∈^2([0,T];(_2(^d_),W_)) such that, for a.e. t∈[0,T], ρ_t^⇀ρ_t and it holds lim inf_→ 01/2∫_0^T []⟨ρ^_t|'_^2 t≥∫_0^T|ρ'_t|_^2 t. <cit.> characterises absolutely continuous curves, and, in particular, for every >0 it allows us to infer that there exists a unique j^ such that (ρ^, j^)∈_T and for a.e. t∈[0,T] it holds ⟨ρ^_τ|'_^2 = (μ,η^,ρ^_t,j^_t)<∞. We employ Proposition <ref> to obtain (ρ^,^)∈_T; Proposition <ref> implies convergence to a limit (ρ,)∈_T such that ρ⊂_2() (by Lemma <ref> and lower semicontinuity). Next, we note that for any φ∈ C_c^∞() the one-sided Cauchy-Schwarz inequality <cit.> and Young's inequality yield φ, j^_η^ = 1/2∬_φ(x,y)η^(x,y) j^(x,y) = 1/2∬_φ(x,y)η^(x,y)(v^_+(x,y)ρ^(x)μ(y)-v^_-(x,y)μ(x)ρ^(y)) = l^_ρ^(v^)[φ] ≤√(l^_ρ^(v^)[v^] l^_ρ^(φ)[φ]) = √((μ,η^;ρ^,j^) (μ,η^;ρ^,φ)) ≤1/2(μ,η^;ρ^,j^) + 1/2(μ,η^;ρ^,φ). For completeness we mention the second equality holds since we have finite action, whence upwind flux, cf. <cit.>. Therefore, for φ∈ C_c^∞((0,T);C_c^∞()) we obtain lim inf_→ 0 1/2∫_0^T ⟨μ,η^;ρ^_t,j^_t| t ≥lim inf_→ 0⟨*|∫_0^T[]φ_t,j^_t_η^ t - 1/2∫_0^T ⟨μ,η^;ρ^_t,φ_t| t ≥lim_→ 0∫_0^T[]φ_t,j^_t_η^ t - lim sup_→ 01/2∫_0^T ⟨μ,η^;ρ^_t,φ_t)| t Thm. <ref> =∫_0^T[]∇φ_t,_t t - 1/2∫_0^T ∫_⟨∇φ_t,∇φ_t⟩ρ_t t. With this, arguing analogously to the last step in the proof of <cit.>, we set V{^1/2∇φ:φ∈ C_c^∞((0,T);C_c^∞())}, being ^1/2(x) the square root of the positive-definite symmetric matrix (x). Fenchel–Moreau duality theorem implies 1/2∫_0^T_t/ρ_t_L^2(ρ;_^d)^2 t=1/2∫_0^T∫_⟨^-1_t/ρ_t, _t/ρ_t⟩ρ_t t =sup_φ∈ V∫_0^T*^1/2∇φ_t,^-1/2_t t - 1/2∫_0^T ∫_⟨^1/2∇φ_t,^1/2∇φ_t⟩ρ t ≤lim inf_→ 01/2∫_0^T ⟨μ,η^;ρ^_t,j^_t| t = lim inf_→ 01/2∫_0^T []⟨ρ^_τ|'_^2 t, where the last equality follows from <cit.> as mentioned at the beginning of the proof. The above inequality and Proposition <ref> imply ρ∈^2([0,T];(_2(^d_),W_)) since (ρ,)∈_T and, for 0≤ s≤ t≤ T, it holds due to the Cauchy-Schwarz inequality W_^2(ρ_s,ρ_t)≤(t-s)∫_s^t_τ/ρ_τ_L^2(ρ_τ;^d_)^2τ<∞, and Lebesgue differentiation theorem gives, for a.e. τ the claimed result ρ'_τ_≤*_t/ρ_t_L^2(ρ_t;^d_) . Let us turn to the lower limit for the slopes. Let (μ,ϑ) satisfy (<ref>), (<ref>) and (<ref>) – (<ref>). Let η^ be given by (<ref>) and let K satisfy (<ref>) – (<ref>). Let (ρ^_n)_n∈⊂_2() be such that ρ^_n⇀ρ∈_2() narrowly as n→∞. Then, up to passing to a subsequence, we have lim inf_n→∞__n(ρ^_n) ≥_ (ρ). We intend to apply Theorem <ref>, thus we shall take multiple steps to replace K∗ρ^_n by a compactly supported, sufficiently smooth test function, which is independent of n. Throughout the proof, without loss of generality we assume that _n≤ 1/ for all n∈, so that by (<ref>) ∀ n∈, ∀(x,y)∈ G^_n: x-yx-y^2 = x-y. Step 1: Truncation Let χ_R∈ C^∞_c(;[0,1]) such that χ_R⊂B̅_2R, χ_R|_B_R≡ 1, ∇χ_R≤ 2/R. First, we observe that for any (x,y)∈ G it holds ⟨[|] K∗ρ^_n_- ≥⟨[|] K∗χ_Rρ^_n_- - [] K∗(1-χ_R)ρ^_n ≥⟨[|] K∗χ_Rρ^_n_- - L_K*x-ysup_n∈ρ^_n(B_R^c). 
Thus, due to the tightness of (ρ^_n)_n, there exists ω∈ C([0,∞);[0,∞)) with ω(R)→ 0 as R→∞, such that __n(ρ^_n) = (μ,η^_n;ρ^_n,- K∗ρ^_n) ≥(μ,η^_n;ρ^_n,- K∗χ_Rρ^_n)-ω(R) ≥(χ_Rμ,η^_n;χ_Rρ^_n,- K∗χ_Rρ^_n)-ω(R) (χ_Rμ,η^_n;χ_Rρ^_n,φ_R^_n)-ω(R), where the last inequality holds by the monotonicity of the integral. Step 2: Mollification In order to apply Theorem <ref> later, we need further regularity. Given a mollifier (ν_δ)_δ>0, for every n∈ we define φ^_n_R,δν_δ∗φ_R^_n∈ C_c^∞(). Then, using (<ref>) and (<ref>) we calculate [](χ_Rμ,η^_n;χ_Rρ^_n,φ_R^_n) - (χ_Rμ,η^_n;χ_Rρ^_n,φ^_n_R,δ) = []∬_[][]⟨[|]φ_R^_n_+^2-[]⟨[|]φ^_n_R,δ_+^2(χ_R⊗χ_R)η^_n⟨*|ρ^_n⊗μ ≤ 2 L_K∬_[]⟨[|]φ_R^_n-φ^_n_R,δ(x,y)x-yχ_R(x)χ_R(y)η^_n(x,y)ρ^_n(x)μ(y) ≤ 2 L_K∇φ_R^_n-∇φ^_n_R,δ_L^∞(B_2R). Next, we show that ∇φ_R^_n-∇φ^_n_R,δ_L^∞(B_2R)→ 0 as δ→ 0, uniformly in n. Indeed, for any x∈ B_2R it holds []∇φ_R^_n(x)-∇φ^_n_R,δ(x) =*∫_⟨*|∇φ_R^_n(x)-∇φ_R^_n(y)ν_δ(x-y) y ≤sup_y∈ B_δ(x)*∇φ_R^_n(x)-∇φ_R^_n(y) = sup_y∈ B_δ(x)*∫_⟨*|∇_1 K(y,z)-∇_1 K(x,z)χ_R(z)ρ^_n(z) ≤sup_z∈ B_2Rsup_y∈ B_δ(x)*∇_1 K(y,z)-∇_1 K(x,z) ω_R(δ)δ→ 0⟶0, where we used that the continuous function ∇_1 K is uniformly continuous on the compact set B_2R. Step 3: n-independent test function Next, we show that up to a negligible error, vanishing as n→∞, we can replace φ_R,δ^_n by φ_R,δ -ν_δ∗ K∗χ_Rρ. Indeed, we observe *⟨*|φ_R,δ^_n(x,y)_+ - ⟨*|φ_R,δ(x,y)_+χ_R(x)χ_R(y) ≤*⟨*|φ_R,δ^_n - φ_R,δ(x,y)χ_R(x)χ_R(y)≤∇φ_R,δ^_n - ∇φ_R,δ_L^∞(B_2R)x-y. Hence, due to (<ref>), keeping in mind (<ref>), we obtain *(χ_Rμ,η^_n;χ_Rρ^_n,φ_R,δ^_n) - (χ_Rμ,η^_n;χ_Rρ^_n,φ_R,δ) ≤ 2L_K∬_*⟨*|(φ_R,δ^_n(x,y))_+-⟨*|(φ_R,δ(x,y))_+ χ_R(x)χ_R(y)x-yη^_n(x,y)ρ^_n(x)μ(y) ≤2L_K∇φ_R,δ^_n-∇φ_R,δ_L^∞(B_2R). We conclude this step by showing that for any δ,R>0, it holds lim sup_n→∞∇φ_R,δ^_n-∇φ_R,δ_L^∞(B_2R) = 0. To see this, we first observe that by the triangle inequality for any N∈ and a,a_k∈, k=1,…,N, we have a≤∑_k=1^N a_k +min_ka-a_k. On the other hand, for any σ>0 there exists N_σ∈ and points x^(k), k=1,…,N_σ, such that min_kx-x^(k)≤σ for any x∈ B_2R. We define K_δν_δ∗ K and denote by ω_K_δ,R the modulus of continuity of ∇ K_δ on B_2R× B_2R. Recalling χ_R=B_2R, we obtain ∇φ_R,δ^_n-∇φ_R,δ_L^∞(B_2R) = max_1≤ i≤ dsup_x∈ B_2R*∫_∂_x_i K_δ(x,z)χ_R(z)⟨*|ρ^_n-ρ(z) ≤max_1≤ i≤ d∑_k=1^N_σ*∫_∂_x_i K_δ(x^(k),z)χ_R(z)⟨*|ρ^_n-ρ(z) + max_1≤ i≤ dsup_x∈ B_2Rmin_k*∫_⟨*|∂_x_i K_δ(x,z)-∂_x_i K_δ(x^(k),z)χ_R(z)⟨*|ρ^_n-ρ(z) ≤max_1≤ i≤ d∑_k=1^N_σ*∫_∂_x_i K_δ(x^(k),z)χ_R(z)⟨*|ρ^_n-ρ(z) + 2ω_K_δ,R(σ). Since N_σ is finite, the first sum vanishes as n→∞ due to the narrow convergence of ρ^_n towards ρ. Hence, since σ>0 is arbitrary, the claim is proved. Step 4: Conclusion Applying Theorem <ref> with χ_Rμ and χ_Rρ^_n, keeping in mind Proposition <ref> on the uniqueness of the limit, we have lim_n→∞(χ_Rμ,η^_n;χ_Rρ^_n,φ_R,δ) = ∫_∇φ_R,δ·_R∇φ_R,δχ_Rρ, where, recalling η^_n(x,y) = 1/_n^d+2ϑ⟨*|x+y/2,y-x/_n, it holds _R(x) = lim_n→∞1/2∫_ (x-y)⊗(x-y) η^_n(x,y)χ_R(y)μ(y) = lim_n→∞1/2_n^d∫_⟨*|y-x/_n⊗⟨*|y-x/_n ϑ⟨*|x+_n/2y-x/_n,y-x/_nχ_R(y)μ(y) y = 1/2χ_R(x)μ(x)∫_ w⊗ w ϑ(x,w) w, which, as R→∞, converges pointwise to (x) = 1/2μ(x)∫_ w⊗ w ϑ(x,w) w. Regarding φ_R,δ, we define φ_R - K∗χ_Rρ and observe that for every δ_0>0 the family (φ_R,δ)_0<δ≤δ_0 is supported on a compact set, so that we find ∇φ_R,δ-∇φ_R_L^∞(B_2R)δ→ 0⟶ 0. Since χ_R→ 1 as R→∞ pointwise on , (<ref>) and the dominated convergence theorem give us for for every x∈ ∇φ_R(x) = -(∇_1 K)∗ρχ_R(x)R→∞⟶-∇ K∗ρ(x). 
Thus, another application of the dominated convergence theorem yields lim inf_n→∞__n(ρ^_n) ≥lim inf_n→∞(χ_Rμ,η^_n;χ_Rρ^_n,φ_R^_n)-ω(R) ≥lim inf_n→∞(χ_Rμ,η^_n;χ_Rρ^_n,φ_R,δ^_n)-ω_R(δ) -ω(R) ≥lim inf_n→∞(χ_Rμ,η^_n;χ_Rρ^_n,φ_R,δ)-ω_R(δ) -ω(R) = lim_n→∞(χ_Rμ,η^_n;χ_Rρ^_n,φ_R,δ) -ω_R(δ) -ω(R) = ∫_∇φ_R,δ·_R∇φ_R,δχ_Rρ -ω_R(δ)-ω(R) δ→0⟶∫_∇φ_R·_R∇φ_Rχ_Rρ -ω(R) R→∞⟶_ (ρ). §.§ Wasserstein gradient flow structure for (NLIE) In order to prove Theorem <ref>, it remains to show that the local De Giorgi functional _ is non-negative. This is a consequence of a chain rule inequality proven in Proposition <ref>. Afterwards we prove Theorem <ref> by establishing convergence of gradient flows of in the Finslerian spaces (_2(^d),_) towards gradient flows of the same energy, , in the Riemannian space (_2(^d_),W_)) as → 0, using the concept of curves of maximal slope in the corresponding spaces, following <cit.>. (Chain-rule inequality) Assume K satisfies (<ref>) – (<ref>). For any ρ∈^2([0,T];(_2(^d_),W_)) the following chain rule inequality holds _(ρ)=(ρ_T)-(ρ_0)+1/2∫_0^T(_ (ρ_t)+|ρ'_t|^2_) t≥0. Let us remind the reader the tensor is continuous, symmetric, and uniformly elliptic, cf. Proposition <ref>. According to <cit.>, if ρ∈^2([0,T];(_2(^d_),W_), there exists a unique vector field u:[0,T]×→ such that (ρ,ρ u)∈_T and ρ_t'_=u_t_L^2(ρ_t;^d_) t∈[0,T]. The uniform ellipticity of (Proposition <ref>) implies 1/C∫_0^T∫_*u_t^2ρ_t t≤∫_0^T∫_^-1u_t,u_tρ_t t=∫_0^Tρ_t'_^2 t<∞, by assumption. Since (ρ,ρ u)∈_T such that u_L^2(ρ;)∈ L^1(0,T), due to the previous inequality, <cit.> implies W_2-absolutely continuity of the curve ρ = (ρ_t)_t∈[0,T]⊂_2(). Due to Proposition <ref> and the fact that is symmetric and uniformly elliptic, for any 0≤ s≤ t≤ T we obtain the result from (ρ_t)-(ρ_s)=∫_s^t∫_∇ K*ρ_τ u_τρ_ττ =∫_s^t∫_^1/2∇ K*ρ_τ,^-1/2u_τρ_ττ ≥ -1/2∫_s^t *∫_∇ K*ρ_τ,∇ K*ρ_τρ_τ∫_^-1 u_τ,u_τρ_ττ =-1/2∫_s^t(_ (ρ_τ)+|ρ'_τ|^2_)τ . We now combine the lower limits of Section <ref> with the chain-rule inequality from Proposition <ref> to prove Theorem <ref>. Continuity of the energy with respect to narrow convergence, <cit.>, and the lower limits for the metric derivatives and the slopes, Propositions <ref> and <ref>, imply 0=lim inf_→0_(ρ^)≥_(ρ). With this, the chain-rule inequality proven in Proposition <ref> ensures _(ρ)=0. Next, we want to establish the connection between the zero level sets of _ and weak solutions of (<ref>). To this end, we first show that √(_ ) is a strong upper-gradient for . Assume K satisfies (<ref>) – (<ref>). For any ρ∈^2([0,T];(_2(^d_),W_)) it holds √(_ ) is a strong upper gradient for in the sense of Definition <ref>, i.e. |(ρ_t)-(ρ_s))|≤∫_s^t √(_ (ρ_τ))ρ_τ'τ, ∀ 0<s≤ t<T. Arguing as in Propositon <ref>, we infer there exists a unique vector field u:[0,T]×→ such that (ρ,ρ u)∈_T with u_L^2(ρ;)∈ L^1(0,T), hence the curve (ρ_t)_t∈[0,T]⊂_2() is W_2-absolutely continuous. Therefore, by applying Proposition <ref>, we infer what is claimed, i.e. |(ρ_t)-(ρ_s)|=|∫_s^t∫_∇ K*ρ_τ u_τρ_ττ| ≤∫_s^t∫_[][]^1/2∇ K*ρ_τ,^-1/2u_τρ_ττ ≤∫_s^t∫_[][]∇ K*ρ_τ,∇ K*ρ_τ^1/2[][] u_τ,^-1u_τ^1/2ρ_ττ ≤∫_s^t√(_ (ρ_τ)) ρ_τ'τ. We are now able to identify weak solutions of (<ref>) as curves of maximal slope of with respect to the strong upper gradient √(_), thus gradient flows in (_2(^d_),W_). Assume K satisfies (<ref>) – (<ref>). A curve ρ⊂_2() is a weak solution of (<ref>) if and only if it is a curve of maximal slope for with respect to its strong upper gradient √(_), i.e. ρ∈^2([0,T];(_2(^d_),W_)) and _(ρ)=0. 
According to Definition <ref>, for a weak solution of (<ref>) we have j_t(x)=-∇δ/δρ(x)ρ_t(x). By again setting u_t-∇δ/δρ_t the density of j_t with respect to ρ_t, we notice u_t_L^2(ρ_t;^d_)=_ (ρ_t)=∫_[]∇δ/δρ_t,∇δ/δρ_tρ_t<∞. The metric slope is uniformly bounded as follows as consequence of the uniformly ellipticity of and Assumption (<ref>): _ (ρ_t) =∫_[]∇δ/δρ_t,∇δ/δρ_tρ_t ≤ C∬_^2d|∇ K(x,y)|^2ρ_t(y)ρ_t(x) ≤ C∬_^2d(1+|x|+|y|)^2ρ_t(y)ρ_t(x) ≤C̃(1+M_2(ρ_t))≤C̅(ρ_0,T). The uniform bound on the second order moments of ρ_t can be proven by a standard procedure, which we include for completeness. Upon considering a smooth cut-off function, by Remark <ref> we have / t∫_⟨[|]1+ |x|^2ρ_t(x) =-2∫_[]x,∇δ/δρ_tρ_t ≤2(∫_⟨^-1 x,x⟩ρ_t)^1/2(∫_[]∇δ/δρ_t,∇δ/δρ_tρ_t)^1/2 ≤ C ∫_|x|^2ρ_t(x)+_ (ρ_t) ≤C̃∫_⟨[|]1 + |x|^2ρ_t(x). Gronwall's inequality provides propagation of second order moments for weak solutions of (<ref>). We have ρ∈^2([0,T];(_2(^d_),W_)) by the uniform bound on the metric slope _. Since (0,T)∋ t ↦u_t_L^2(ρ_t;^d_)∈ L^2(0,T) and by arguing as in the proof of Proposition <ref>, we obtain |ρ'_t|_≤u_t_L^2(ρ_t;^d_). In turn, uniform ellipticity of gives u_t_L^2(ρ_t;)∈ L^2(0,T), thus W_2-absolutely continuity of ρ, due to <cit.>. Exploiting the chain rule from Proposition <ref>, we obtain, for any 0≤ s≤ t≤ T, (ρ_t)-(ρ_s)=-∫_s^t _(ρ_τ)τ≤-∫_s^t √(_ (ρ_τ))ρ_τ'τ. Being √(_ ) a strong upper gradient by Corollary <ref>, we infer _(ρ)=0. Let ρ be a curve of maximal slope for with respect to its strong upper gradient, √(_ ), cf. Definition <ref>. More precisely, ρ∈^2([0,T];(_2(^d_),W_)) such that _(ρ)=0. Arguing as in previous proofs, there exists a unique vector field u:[0,T]×→ such that (ρ,ρ u)∈_T and ρ_t'_=u_t_L^2(ρ_t;^d_) for a.e. t∈[0,T]. Furthermore, uniform ellipticity of implies W_2-absolutely continuity. Proposition <ref> gives, for any 0≤ s ≤ t≤ T, (ρ_t)-(ρ_s)=∫_s^t∫_∇ K*ρ_τ u_τρ_ττ =∫_s^t∫_⟨^1/2∇ K*ρ_τ,^-1/2u_τ⟩ρ_ττ ≥-∫_s^t∫_|⟨∇ K*ρ_τ,∇ K*ρ_τ⟩|^1/2|⟨ u_τ,^-1u_τ⟩|^1/2ρ_ττ ≥ -1/2∫_s^t∫_⟨∇ K*ρ_τ,∇ K*ρ_τ⟩ρ_ττ -1/2∫_s^t∫_⟨ u_τ,^-1u_τ⟩ρ_ττ =-1/2∫_s^t(_ (ρ_τ)+|ρ'_τ|^2_)τ. Since _(ρ)=0, the inequalities above are equalities, which is true if and only if u_t(x)=-∇ K*ρ_t(x) for ρ_t-a.e. x and a.e. t∈[0,T]. In particular, (ρ, j)∈_T for j=-ρ∇*ρ. Finally, we conclude with the proof of Theorem  <ref>. The proof is a graph-approximation of (<ref>). Consider (μ,ϑ) satisfying (<ref>), (<ref>) and (<ref>) – (<ref>), and let η^ be given by (<ref>). Theorem <ref> provides a sequence (ρ^)_ of weak solutions to (<ref>), which are curves of maximal slope for in (_2(),_μ,η^), hence such that _(ρ^)=0, for any >0, cf. <cit.>. The graph-to-local limit proven previous to this subsection, Theorem <ref>, implies existence of a limiting curve ρ∈^2([0,T];(_2(^d_),W_)) which is ρ is a gradient flow of in (_2(^d_),W_)), i.e. _(ρ) = 0. In particular, Theorem <ref> asserts this is a weak solution of (<ref>). § CHAIN-RULE FOR THE NONLOCAL INTERACTION ENERGY Below we provide a general chain-rule result for the energy (<ref>) using , as alternative to <cit.>, where convexity is required. Let K satisfy assumption (<ref>) – (<ref>) and ρ∈^2([0,T];(_2(), W_2)). Then, for any 0≤ s≤ t≤ T, the following chain-rule holds (ρ_t)-(ρ_s)=∫_s^t∫_∇ K*ρ_τ(x) j_τ(x), being (j_t)_t∈[0,T]⊂(;) such that (ρ, j)∈_T. Assumption (<ref>) ensures (ρ_t)<∞ for all t∈[0,T], since ρ⊂_2(). Having ρ∈^2([0,T];(_2(), W_2)) implies there exists a unique Borel vector field u∈ L^2([0,T];L^2(ρ_t;)) such that (ρ,ρ u)∈_T as well as ρ_t' = u_t_L^2(ρ_t;) for ^1-a.e. t∈[0,T], cf. 
<cit.>. For consistency with our notation we denote jρ u and u_t j_t/ρ_t for a.e. t∈[0,T]. We follow the procedure in, e.g., <cit.>, applying two regularization arguments. First, for all (x,y)∈× we define K^(x,y)=K*m_(x,y)=∬_× K(z,z')m_(x-z,y-z') z z', where m_(z)=1/^2dm(z/) for all z∈^2d and >0, being m∈ C_c^∞(^2d) a standard mollifier. We also introduce a smooth cut-off function φ_R∈ C_c^∞(^2d), which is such that φ_R(z)=1 on B_R, φ_R(z)=0 on ^2d∖ B_2R and ∇φ_R≤2/R. We set K_R^φ_R K^ and note that it is a C_c^∞(^2d) function. We now introduce the approximate energies, indexed by and R, _R^(ν)=1/2∫_∫_ K_R^(x,y)ν(y)ν(x) . Next, we extend ρ and j to [-T,2 T] periodically in time by setting ρ_-sρ_T-s and ρ_T+sρ_s for all s∈ (0,T] and likewise for j. We regularize ρ and j in time by using a standard mollifier n∈ C_c^∞(), supported on [-1,1], by setting n_σ(t)=1/σn(t/σ) and ρ_t^σ(A)=n_σ*ρ_t(A)=∫_-σ^σ n_σ(t-s)ρ_s(A) s, ∀ A⊆, j_t^σ(A)=n_σ*j_t(A)=∫_-σ^σ n_σ(t-s)j_s(A) s, ∀ A⊆, for any σ∈(0,T) and any t∈[0,T]; whence ρ_t^σ∈_2() for all t∈[0,T] and it is straightforward to check that (ρ^σ, j^σ)∈_T. Furthermore, we observe ∫_0^T∫_| j_t/ρ_t|^2ρ_t t=∫_0^T∫_α(ρ_t/|λ|, j_t/|λ|) |λ| t, where |λ|∈^+() is such that ρ_t,j_t≪|λ| for a.e. t∈[0,T], where α(a,b)|b|^2/a a>0, 0 a=0 b 0, +∞ a=0 b 0, is lower semicontinuous, jointly convex, and one-homogeneous, see <cit.>. Arguing as in <cit.>, Jensen's inequality and Fubini's Theorem ensure ∫_0^T∫_| j_t^σ/ρ_t^σ|^2ρ_t^σ t ≤ c ∫_0^T∫_| j_t/ρ_t|^2ρ_t t, for a constant c>0. In view of the regularity we have for >0 and σ>0, we compute / t_R^(ρ_t^σ)=∫_∇ K_R^*ρ_t^σ(x) j_t^σ(x), whence, by integrating in time for 0≤ s≤ t≤ T, it holds _R^(ρ_t^σ)-_R^(ρ_s^σ)=∫_s^t∫_∇ K_R^*ρ_τ^σ(x) j_τ^σ(x). The proof will be completed once we let and σ to 0 and R to ∞ in the equality above. In this regard, we first note that <cit.> gives ρ_t^σ⇀ρ_t weakly-^∗ in _loc^+(), for any t∈[0,T], and j^σ⇀ in _loc^+(×[0,T];), being (ρ,)∈_T. On the other hand, since n_σ⇀δ_0 weakly-^∗, as σ→0, we have ρ_t^σ⇀ρ_t weakly-^∗ in _loc^+() and j_t^σ⇀ j_t weakly-^∗ in _loc^+(;), so that ρ̃=ρ and = j by uniqueness of the limit. Since K∈ C^1(×), it is well known that K^→ K uniformly on compact sets as →0. In particular, on both sides of (<ref>) we can send σ→0 and →0. By further using Lebesgue dominated convergence theorem on the R→∞ limit — exploiting the moment control established in Remark <ref> in conjunction with Assumption (<ref>) — we obtain the desired result, i.e. (<ref>). § EXTENSION TO REGULAR Σ-FINITE BASE MEASURES Here we extend the results from <cit.>, shown for finite base measures μ, to σ-finite base measures μ satisfying (<ref>), (<ref>). For the readers convenience, we remind the relevant assumptions in <cit.> for the pair (μ,η): η is continuous on the set η>0, η(x,y)=η(y,x) for all x,y∈; the moment bounds sup_(x,y)∈|x-y|^2|x-y|^4η(x,y) ≤ C_η, sup_x∈∫_|x-y|^2|x-y|^4η(x,y)μ(y) ≤ C_η^μ, and the local blow-up control lim_δ→ 0sup_x∈∫_B_δ(x)∖*xx-y^2η(x,y)μ(y) = 0. Assumption (<ref>) is required in <cit.> to show existence of weak solutions to (<ref>). Instead of μ()<∞, we assume (<ref>), (<ref>) and that G_ x= y∈∖*x:η(x,y)>0⊂ is a bounded set. Note that by Lemma <ref> and Remark <ref> all the above conditions are satisfied under the assumptions (<ref>), (<ref>) and (<ref>) – (<ref>). 
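The prototypical base measure covered by this extension, but excluded by the finiteness assumption of <cit.>, is the Lebesgue measure itself; we record it as an illustration. For
\[
\mu=\mathcal{L}^d, \qquad \eta=\eta^\varepsilon \ \text{as in (<ref>) for some fixed } \varepsilon>0,
\]
the density is constant, so (<ref>) and (<ref>) hold trivially, while the set \(G_x\) is contained in \(B_{C_{\mathrm{supp}}\varepsilon}(x)\) and hence bounded; the remaining moment and blow-up conditions follow as noted above.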
In order to reprove the results from <cit.> under the altered assumption on μ, we need to ensure that the lower semicontinuity of the De Giorgi functional <cit.> holds true when the sequence (μ^n)_n⊂^+(G) converges weakly-^∗ to μ∈^+(), instead of narrowly, and it satisfies the above assumptions uniformly in n. We notice this is already pointed out in <cit.>, namely the action (μ,η;ρ,j) is in fact weakly-^∗ lower semicontinuous with respect to the base measure μ (see e.g. <cit.> or <cit.>). Due to the identity ρ'_μ,η^2 = (μ,η;ρ,j), for a specific j∈() (see <cit.> or <cit.>), weak-^∗ lower semicontinuity is inherited by metric derivatives. Moreover, for the sake of completeness we also mention compactness for weak solutions of the continuity equation still holds true as weak-^∗ convergence is only needed when showing lower semicontinuity of the total action, see <cit.>. It remains to show lower semicontinuity of the metric slope with respect to weak-^∗ convergence for the base measure μ. Let K satisfy (<ref>) – (<ref>) and η:→[0,∞) satisfy (<ref>). Assume that (μ_n)_n∈ℕ⊂^+() is such that (<ref>) and (<ref>) hold uniformly in n and μ^n∗⇀μ, as n→∞. Let (ρ_n)_n∈⊂_2() such that ρ^n⇀ρ, as n→∞ for some ρ∈_2(). Then, the metric slope is weak-^∗ lower semicontinuous lim inf_n→∞(μ^n,η;ρ^n)≥(μ,η;ρ). We argue similarly to the proof of Proposition <ref>. Let R>0. We set G_R*(x,y)∈: x-y≥1/R and choose χ̅_R∈ C_c^∞(;[0,1]) such that χ̅_R|_G_R∩(B_R× B_R)≡ 1 and χ̅_R⊂G_2R∩(B_2R× B_2R). Further, we choose χ_R∈ C_c^∞(;[0,1]) such that χ_R⊂B_2R, χ_R|_B_R≡ 1 and ∇χ_R≤ 2/R. Arguing analogously to the derivation of (<ref>), we obtain (μ^n,η;ρ^n) ≥∬_⟨*|⟨*|φ_R^n_+^2χ̅_Rη⟨*|ρ^n⊗μ^n-ω(R) 𝒜(χ̅_R(μ^n⊗ρ^n),φ_R^n)-ω(R), where φ_R^n -K∗χ_Rρ^n. Setting φ_R -K∗χ_Rρ, for any R>0, we have []𝒜(χ̅_R(μ^n⊗ρ^n),φ_R^n)-𝒜(χ̅_R(μ^n⊗ρ^n),φ_R)≤ 8R L_K C_η^μφ^n_R-φ_R_L^∞(B_2R), where we used that x-y>1/2R for any (x,y)∈χ̅_R and (<ref>). Arguing as in the proof of (<ref>) we find φ^n_R-φ_R_L^∞(B_2R) vanishes as n→∞. Due to the cut-off χ̅_R, the weak-^∗ convergence of ρ^n⊗μ^n towards ρ⊗μ yields lim_n→∞𝒜(χ̅_R(μ^n⊗ρ^n),φ_R) = 𝒜(χ̅_R(μ⊗ρ),φ_R). Furthermore, denoting φ -K∗ρ, employing again (<ref>) as well as (<ref>) and arguing similar to the estimate (<ref>), for every R>0 we find *𝒜(χ̅_R(μ⊗ρ),φ_R)-𝒜(χ̅_R(μ⊗ρ),φ) ≤ 2 L_K^2 C_η^μρ(B_R^c). For the outer cut-off, we employ again (<ref>) and (<ref>) []𝒜(χ̅_R(μ⊗ρ),φ)-𝒜(μ;ρ,φ) ≤[]∬_G_R^c⟨*|⟨*|φ_+^2η⟨*|ρ⊗μ + []∬_B_R^c× B_R^c⟨*|⟨*|φ_+^2η⟨*|ρ⊗μ ≤ L_K^2 sup_x∈∫_B_R^-1(x)∖x⟨*|x-y^2x-y^4η(x,y)μ(y) + L_K^2 C_η^μρ(B_R^c), which all vanish as R→∞ due to the tightness of ρ and (<ref>). Combining all the previous estimates, we obtain lim inf_n→∞(μ^n,η;ρ^n) ≥lim inf_n→∞𝒜(χ̅_R(μ^n⊗ρ^n),φ_R^n)-ω(R) ≥lim inf_n→∞𝒜(χ̅_R(μ^n⊗ρ^n),φ_R)-ω(R) = 𝒜(χ̅_R(μ⊗ρ),φ_R)-ω(R) R→∞⟶𝒜(μ;ρ,φ) = (μ,η;ρ). The final step is adapting <cit.> to the altered framework. In order to obtain an n-uniform bound similar to <cit.>, we will employ the following lemma: Let μ∈^+() such that (<ref>), (<ref>) hold. There exists a sequence (μ^n)_n∈ of finite counting measures μ^n = ∑_k=1^N_nμ^n_k δ_x^n_k, for suitable N_n∈, μ_k^n∈ [0,∞), x_k^n∈, and a constant C̃_η^μ>0 such that sup_n∈sup_x∈μ^n(G_ x) =sup_n∈sup_x∈μ^n(y∈∖*x:η(x,y)>0) ≤C̃_μ. A possible counting measure approximation is the d-dimensional midpoint Riemann sum, which is obtained by choosing the evaluation points x_k^n:1≤ k ≤ N_n = ^d/2^n∩ B_n and the weights μ^n_k = μ̃(x_k^n)/2^dn, where we recall that μ̃ is the density of μ with respect to ^d. 
Regarding the inequality (<ref>), we recall that the set G_0 = y∈:η(0,y)>0⊂ is a bounded set, hence so is G_0^δ G_0+B_δ for any δ>0. Furthermore, the measures μ^n are almost translation invariant in the sense that there is δ>0 such that for any x∈ and any n∈ we have μ^n(G_ x) ≤μ^n(G_0^δ). Here δ>0 is introduced to compensate for the fact that more points x_k^n might lie inside G_x than inside G_0 and also to compensate for fluctuations in μ, keeping in mind the uniform bounds c_μ≤μ̃≤ C_μ. Now we are in the position to show existence of solutions to (<ref>) in the adapted setting. Let K satisfy (<ref>) – (<ref>), μ∈^+() be σ-finite, and assume that (μ,η) satisfy (<ref>) – (<ref>). Consider ϱ_0∈_2() with ϱ_0≪μ. Then there exists a narrowly continuous curve ρ:[0,T]→_2() such that ρ_t≪μ for all t∈[0,T], which is a weak solution to (<ref>) with initial datum ρ_0=ϱ_0. We employ Lemma <ref> to obtain a sequence of finite counting measures (μ^n)_n∈⊂^+() satisfying the n-uniform bound (<ref>) and μ^n∗⇀μ as n→∞. Furthermore, for n≥ n_0 where ρ_0(B_n_0)>0, let ρ̅^n_0 ρ_0|_B_n/ρ_0(B_n). By construction, we have ρ̅^n_0() = ρ_0() = 1 for all n≥ n_0 and ρ̅^n_0∗⇀ρ_0 as n→∞, which together imply ρ̅^n_0⇀ρ_0 as n→∞. Then, we follow the construction from the proof of <cit.> replacing in the argument μ by μ̅^n μ|_B_n, which satisfy μ̅^n()<∞, ρ̅^n_0⊂μ̅ for any n∈, and μ̅^n∗⇀μ as n≥ n_0. Hence, upon using a diagonal argument we obtain for any n≥ n_0 a counting measure ρ_0^n∈(), which satisfies ρ_0^n≪μ^n and ρ_0^n⇀ρ_0 as n→∞. Combining the moment bound (<ref>) with the n-uniform bound (<ref>) to replace <cit.> the result is then obtained by arguing as in the remainder of the proof of <cit.>. § RELATION TO EDP CONVERGENCE FOR GRADIENT STRUCTURES In this part of the appendix, we translate the gradient flow formulation in terms of curves of maximal slope for the quasi-metric provided in Section <ref> into the recent notion of gradient flows in continuity equation format <cit.>. The starting point is an abstract continuity equation including the conservation laws of the systems under consideration; both the nonlocal (<ref>) and local continuity equation (<ref>) represent indeed examples of that formalism. The flux formulation has the advantage that the kinetic relations (<ref>) and (<ref>) can be encoded as the subdifferential of a suitable convex functional. For (<ref>) those functionals can be formally defined by (see Definition <ref> for how undefined cases are handled) (ρ,j) = ∬_G 1/4*⟨*|jρ⊗μ_+^2 η(ρ⊗μ) + ∬_G 1/4*⟨*|jμ⊗ρ_-^2 η(μ⊗ρ), ^*(ρ,v) = ∬_G* v_+ ^2/4η(ρ⊗μ) + ∬_G * v_- ^2/4η(μ⊗ρ) . Hereby, the formal duality is understood in the η-weighted dual product *v,j_η = 1/2∬_G v η j and it is easy to verify (see Remark <ref>) that ^*(ρ,v) = sup_j∈(G)⟨*|v,j_ η - (ρ,j) . Hence, we have always the bounds v,j_ η≤(ρ,j) + ^*(ρ,v). Given r,s∈ [0,∞], define the function f:→[0,∞] by f(j) 1/2*α(j,r) + α(-j,s), where α:×[0,∞]→[0,∞] α(j,r) (j_+)^2/r, r>0, 0, j = r=0, ∞, j r=0. For any r,s∈[0,∞) the function f is proper, lsc. and convex, so that by the Fenchel-Moreau theorem f is its own convex biconjugate. The convex conjugate of f is given by f^∗(v) = 1/2*r(v_+)^2+s(v_-)^2. To see this, at first we assume that r,s > 0 and calculate f^∗(v) = sup_j∈*v· j - f(j)= sup_j∈*v· j - (j_+)^2/2r-(j_-)^2/2s = 1/2*r(v_+)^2+s(v_-)^2. By the definition of α, the cases r=0 and s=0 satisfy the same equality, since the supremum is then achieved at j_+=0 or j_-=0 respectively. 
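The closed form for f^* can also be verified by brute force. In the sketch below (the parameter triples and the search grid are arbitrary illustrations) the supremum defining the conjugate is evaluated numerically, including the degenerate cases r = 0 and s = 0, and compared with (r (v_+)^2 + s (v_-)^2)/2.

```python
import numpy as np

def f(j, r, s):
    """f(j) = (alpha(j, r) + alpha(-j, s)) / 2 with alpha(j, r) = (j_+)^2 / r for r > 0,
    extended by lower semicontinuity at r = 0 (finite iff j_+ = 0)."""
    def alpha(a, b):
        if b > 0:
            return max(a, 0.0) ** 2 / b
        return 0.0 if max(a, 0.0) == 0.0 else np.inf
    return 0.5 * (alpha(j, r) + alpha(-j, s))

def f_star_numeric(v, r, s):
    """Brute-force evaluation of sup_j [ v j - f(j) ] on a grid."""
    j_grid = np.linspace(-50.0, 50.0, 20001)
    return max(v * j - f(j, r, s) for j in j_grid)

def f_star_closed(v, r, s):
    return 0.5 * (r * max(v, 0.0) ** 2 + s * max(-v, 0.0) ** 2)

for r, s, v in [(2.0, 3.0, 1.5), (2.0, 3.0, -0.7), (0.0, 3.0, -1.2), (4.0, 0.0, 0.9)]:
    print(f"r={r}, s={s}, v={v}: numeric {f_star_numeric(v, r, s):.4f}"
          f"  vs  closed form {f_star_closed(v, r, s):.4f}")
```

The numerical maximiser sits at j = r v_+ - s v_-; this is only a sanity check of the computation above and plays no role in the argument.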
This provides a robust formulation thanks to the characterization of the subdifferential through Legendre-Fenchel duality: any pair (v,j)∈ C_0(G)×(G) satisfies the identity (<ref>) if and only if j ∈∂_2 ^*(ρ,v) ⇔ v ∈∂_2 (ρ,j) ⇔*v,j_η = (ρ,j) + ^*(ρ,v), where ∂_2 and ∂_2 denote the subdifferential for the second argument, which for (<ref>) and (<ref>) are single-valued. We note that (ρ,j) = 1/2(μ,η;ρ,j) from Definition <ref> and we have thanks to Corollary <ref> the improved integrability of the flux whenever (ρ,j)<∞ of the type 1/2∬x-yη(x,y) j(x,y) ≤√(2 (ρ,j)) <∞. In particular, this allows to extend the test-function class in (<ref>) to nonlocal gradients of the type φ with φ∈ C^1_b(), since we can estimate *φ, j_η ≤1/2∬_G φ(y)-φ(x)/x-yx-yη(x,y) j(x,y) ≤φ_C^1()1/2∬x-yη(x,y) j(x,y) < ∞. With this observation, we find in equation (<ref>) another duality structure between the nonlocal divergence and the nonlocal gradient . Both satisfy, for any test function φ∈ C_b^1(^d), the identity []φ, j = -[]φ, j_η. Let us suppose that the variational derivative of the energy '(ρ) ∈ C^1_b(), which is usually not the case. Then, we can connect the continuity equation (<ref>) with the specific choice for the velocity in (<ref>) as gradient of the derivative v_t = -'(ρ_t) via the identity (ρ_t)t = '(ρ_t), ∂_t ρ_t = -'(ρ_t), j_t = - -'(ρ_t),j_t_η ≥ - (ρ_t,j_t) - ^*(ρ_t,-'(ρ_t)), where equality holds if and only if j_t and v_t are related through (<ref>) and hence equivalently by (<ref>). By integrating the estimate (<ref>), we obtain an energy-dissipation functional defined for solutions (ρ, j)∈_T to (<ref>) by (ρ, j) = (ρ_T) - (ρ_0) + ∫_0^T ⟨*|(ρ_t,j_t) + ^*(ρ_t,-'(ρ_t)) t . This is connected to the De Giorgi functional by the relation (ρ) = inf[](ρ, j) : j such that (ρ, j)∈_T . The previous considerations lead to the following definition of a nonlocal gradient system in continuity equation format in our setting. A nonlocal gradient structure in continuity equation format has the building blocks: * A graph structure induced by (μ,η) defining the nonlocal divergence (implicity depending on η) and providing a notion of solutions for (ρ, j)∈_T given in (<ref>) through the duality product C_b^1(^d)×(^d) ∋ (φ,j)↦φ,j_η. * An energy functional :(^d)→ [0,∞). * A dissipation functional : (^d)×(G)→: For any ρ∈(^d), the map (ρ,·) is convex and lower semicontinuous with min(ρ,·) = (ρ,0)=0. The gradient structure ((μ,η),,,) is called good, provided that for any (ρ, j)∈_T, one has (ρ, j)≥ 0. A curve (ρ, j)∈_T is an EDP solution of the good gradient structure ((μ,η),,,) provided that (ρ, j)= 0. Since, in general the variational derivative of the driving energy is not in the class of admissible test functions, in our case '(ρ)∉ C_b^1(), the goodness of the energy-dissipation functional _T has to be proven. This is typically done with the help of establishing the chain rule inequality (ρ_t)- (ρ_0) ≥∫_0^t []'(ρ_s),j_s_ηs a.e. t∈(0,T). One of the main results of <cit.> can be restated as follows: Any measure-valued solution of (<ref>) is an EDP solution for the gradient structure ((μ,η),,,), with elements given by (<ref>), (<ref>) and (<ref>). Having established the variational formulation for (<ref>), we can ask about the variational convergence for the -rescaled version, where now η is replaced by η^ from (<ref>) and hence, we study the gradient structure ((μ,η^),_,,_) and ask about its limit as → 0. 
From the heuristics in the introduction, we expect a limit of the form (^d,÷, ,_), where the graph structure is replaced by a -weighted Euclidean structure with respect to the limiting tensor in (<ref>) and the nonlocal divergence by the standard one ÷ j = ∑_i=1^d ∂_i j_i. The limiting dissipation potential defines a dynamic dissipation after Otto and co-authors <cit.> giving rise to the Wasserstein distance <cit.>, which in its -weighted form was studied in <cit.> and is given by _(ρ,j) = ∫_1/2[]jρ(x)^2_(x)ρ(x), where the weighted Euclidean norm is given by *ξ_^2 = ξ·^-1ξ for ξ∈^d. The limiting gradient structure (^d,÷, ,_) is good thanks to the chain rule proven in Proposition <ref>, implying that for (ρ, j)∈_T the ED functional _(ρ, j) given by _(ρ, j) = (ρ_T) - (ρ_0) + ∫_0^T ⟨*|_(ρ_t,j_t) + _^*(ρ_t, - ∇'(ρ_t)) t is non-negative, i.e. _(ρ, j)≥ 0, with the dual dissipation functional _^*(ρ,ξ) = ∫1/2ξ(x), (x)ξρ(x). Hence, any curve (ρ, j)∈_T with _(ρ, j)=0 is a measure-valued solution to (<ref>). For the EDP convergence statement, we need a common notion of curves for >0 and the limit. This is possible thanks to the reconstruction of the flux in Proposition <ref>, where we showed that any (ρ^, j^)∈_T can be associated with a curve (ρ^,^)∈_T. In this way, we arrive at a notion of convergence, which we call τ-convergence _T ∋ (ρ^, j^) τ (ρ, j) ∈_T provided that ρ^_t ρ_t narrowly in (^d) for a.e. t∈[0, T] and ∫_·^_tt∗∫_·t weakly-^∗ in ((0,T)×;). With this preliminary considerations, the main result of this work contained in Theorem <ref> can be recast in terms of EDP-convergence, or also called evolutionary Γ-convergence <cit.>. The EDP convergence statement is formulated in terms of the total dissipation functional of a curve (ρ^, j^)∈_T defined by _(ρ^, j^) = ∫_0^T ⟨*|_(ρ^_t,j^_t) + _^*(ρ^_t,-'(ρ^_t)) t . Now, we can restate the Theorem <ref> in the language of EDP convergence. Typically, the energy is dependent of , and the EDP convergence statement contains a Γ-limit statement of the type __0. In our case this is not necessary, since the driving energy does not depend on and is continuous. Let (μ,ϑ) satisfy (<ref>), (<ref>) and (<ref>) – (<ref>). Let η^ be given by (<ref>) and assume K satisfies (<ref>) – (<ref>). Then, the sequence of gradient structures ((μ,η^),_,,_) for (<ref>) EDP converges to the limiting gradient structure (^d,÷, ,_) for (<ref>) as → 0. Let (ρ^, j^)∈_T with sup_0<≤_0_(ρ^, j^) < ∞, then there exists a subsequence such that _T ∋ (ρ^, j^) τ (ρ ,) ∈_T and lim inf_→ 0_(ρ^, j^) ≥_(ρ,), where the limiting total dissipation function is defined by _(ρ,) = ∫_0^T [ _(ρ_t,_t)+^*_(ρ_t,-∇'(ρ_t))] t , provided (ρ ,) ∈_T. The compactness and convergence of solutions to the continuity equation is contained in Proposition <ref>. The lower semicontinuity of _ is a consequence of the individual lower semicontinuity statements for _ and _^*, which follow from Proposition <ref> and Proposition <ref>, respectively. §.§ Acknowledgements The authors are grateful to Dejan Slepčev and Francesco Patacchini for enlightening discussions on the original question, posed in a previous manuscript. Furthermore, the authors would like to thank José Antonio Carrillo for his valuable suggestions on the content of the current manuscript and for hosting GH in Oxford. 
AE was supported by the Advanced Grant Nonlocal-CPD (Nonlocal PDEs for Complex Particle Dynamics: Phase Transitions, Patterns and Synchronization) of the European Research Council Executive Agency (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 883363). GH acknowledges support of the German National Academic Foundation (Studienstiftung des deutschen Volkes) and the Free State of Saxony in the form of PhD scholarships. AS is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044 – 390685587, Mathematics Münster: Dynamics–Geometry–Structure.
http://arxiv.org/abs/2306.10459v1
20230618030539
Succulent rings and images of hairy Schwarzschild black holes
[ "Yuan Meng", "Xiao-Mei Kuang", "Xi-Jing Wang", "Bin Wang", "Jian-Pin Wu" ]
gr-qc
[ "gr-qc" ]
=0.4 cm [email protected] Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou University, Yangzhou, 225009, China [email protected] (corresponding author) Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou University, Yangzhou, 225009, China [email protected] Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou University, Yangzhou, 225009, China [email protected] Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou University, Yangzhou, 225009, China Shanghai Frontier Science Center for Gravitational Wave Detection, Shanghai Jiao Tong University, Shanghai 200240, China [email protected] Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou University, Yangzhou, 225009, China =0.5 cm A hairy Schwarzschild black hole describes the deformation of Schwarzschild black hole due to including additional sources. It is found that depending on the hairy parameters, the photons' configurations around this black hole can be classified into two cases, corresponding to the hairy Schwarzschild black hole with single photon sphere and double photon spheres, respectively. We focus on the shadows and images of the hairy Schwarzschild black hole under two types of static thin illuminations conditions: disk accretion and spherical accretion, respectively. Under both illuminations, the two hairy parameters (α and l_o) have competitive affects on the shadow and optical appearance image of the hairy Schwarzschild black hole with single photon sphere. This means that even the parameters have significant influences on the rings and shadows, but its images with certain groups of α and l_o could be indistinguishable to that of Schwarzschild black hole, namely, the images degeneracy exists between the hairy Schwarzschild black hole and Schwarzschild black hole. Moreover, the optical appearance image of the hairy Schwarzschild black hole with double photon spheres will exhibit new additional rings and succulent features, which are not present in the images of (hairy) Schwarzschild black hole with single photon sphere. Our theoretical studies on the rings and shadows provide a potential tool to differentiate the hairy Schwarzschild black hole with double photon spheres from Schwarzschild black hole, but they are not always helpful for the cases with single photon sphere due to the degeneracy. Succulent rings and images of hairy Schwarzschild black holes Jian-Pin Wu July 31, 2023 ============================================================== § INTRODUCTION Recent breakthroughs in the observation of black holes have triggered a new era to approach strong gravity field regime to further testify the essence of gravity. One of the most important achievements is that the Event Horizon Telescope (EHT) collaboration has released images of the supermassive black holes in M87* <cit.>, and further in Sgr A* at the center of the Milky Way system <cit.>. Those images show a black central region surrounded by a bright ring-shaped construction, which is the resultant product of light rays in the gravitational field of an object having a photon sphere (or a critical unstable curve) when illuminated by an accretion flow. The central silhouette which is bounded by the critical curve is usually known as the black hole shadow or photon capture region <cit.>. 
Technically, the critical curve is defined as the light ray received by the observer that would have approached asymptotically a bound photon orbit in ray traced backwards methods. In fact, early on, Synge and Luminet proposed the expression of the angular radius of the photon capture region for Schwarzschild black holes, which is determined by a critical impact parameter <cit.>. Subsequently, Bardeen presented the shadow of the rotating Kerr black hole for the first time, indicating that the spin can deform the black hole shadow <cit.>. Due to the fact that the black hole shadow only depends on the background geometry, various numerical simulations of shadows in general relativity (GR) have been extensively discussed <cit.>. In addition, the black hole shadows in various modified theories of gravity (MoG) and in high-dimensional space-time have been extensively studied <cit.>. Moreover, though the discovery from EHT are mainly based on black holes in GR, it allows plenty of room for alternative compact objects or black holes in theories beyond GR. So, using the EHT observations on the shadow to testify fundamental physics and constrain parameters in MoG is far reaching, see for examples <cit.> and references therein. The bright ring-shaped construction in the EHT images is radiated from the accretion matters surrounding the real astrophysical black holes, of which the geometry and physical properties could significantly determine the optical appearance of the black hole. In theoretical aspect, it is difficult to mimic the realistic accretion flow in astrophysical environment, but it is useful to consider some simplified accretion conditions to investigate the major features of black hole image and capture prospective signal of new physics. The first image of black hole with a thin accretion disk was calculated analytically in <cit.>, which shows that there are primary and secondary images appeared outside black hole shadow. Then the author of <cit.> pointed out that it is relatively easy to distinguish a Schwarzschild black hole from the static wormhole according to shadow images. As another kind of accretion, the spherical accretion has been applied to analyze the image of a Schwarzschild black hole <cit.> in which the shadow is found to be a robust feature and that its size and shape are primarily influenced by rather the spacetime geometry than the details of the accretion. More recent investigations of a Schwarzschild black hole with thin and thick accretion disks <cit.> show that the lensed ring together with photon ring contribute additional observed flux to the image, but the main contribution of the total observed specific intensities comes from direct emissions, while the contribution of the lensed ring emissions is small, and the contribution of the photon ring emissions is negligible. Nowadays, the photon ring and observational appearances of black holes is an exciting area of research <cit.>, since they allow one to differentiate GR black holes from alternative compact objects or black holes in MoG via illumination, just as their respective images can be used to distinguish GR and beyond. On the other hand, due to the additional surrounding sources, the black holes in our Universe could obtain an extra global charge dubbed `hair’ and the spacetime may deviate from the black hole metric in GR. 
Recently, a hairy Schwarzschild black hole was constructed with the use of the gravitational decoupling (GD) approach <cit.>, which is designed for describing deformations of known solutions of GR due to the inclusion of additional sources. The GD approach and the metric will be reviewed soon in next section. The hairy Schwarzschild black hole and its rotating counterpart have attracted quite a lot of attentions. Plenty of theoretical and observational investigations have been studied, for examples, thermodynamics <cit.>, quasinormal modes and (in)stability <cit.>, strong gravitational lensing, parameter constraint from EHT observations on black hole shadow <cit.>, Precession and Lense-Thirring effect <cit.> and gravitational waves from extreme mass ratio inspirals <cit.>. Those investigations promote the possible test of the no-hair theorem and could provide powerful probes of alternative theories of gravity with additional fields. The main aim of this work is to study the rings and optical appearances of the hairy Schwarzschild black hole proposed in GD approach. The interest mainly stems from two aspects: (i) the hairy black hole in this scenario has great generality because there is no certain matter fields in the GD approach, so this hairy metric allows us to study the light rays and shadow effected by arbitrary type of hair (e.g. scalar hair, tensor hair, fluid-like dark matter, and so on) and compare them to that of Schwarzschild black hole. (ii) The hairy Schwarzschild black hole was addressed in <cit.> to possess two unstable photon spheres outside the event horizon in certain parameters region, therefore, it is natural to expect that the second photon sphere will bring in rich structures in the observed appearance. Thus, we shall firstly analyze the effective potential of the photons in the hairy parameters space, and then study the light rays around the hairy Schwarzschild black hole with the use of ray tracing method, which is significantly affected by the hairy parameters. We find that the photons' configurations in essence can be classified into two cases, corresponding to hairy Schwarzschild black hole with single photon sphere and double photon spheres, respectively. Then by illuminating the hairy black hole with various static thin accretions, we analyze the effects of the hairy parameters on the rings and shadows in the optical appearance images, and also differentiate the image of hairy Schwarzschild black hole with double photon spheres from that with single photon sphere. The paper is organized as follows. In Section <ref>, we briefly review the hairy Schwarzschild black hole constructed by GD approach and then analyze its photon spheres. In Section <ref>, with the use of ray tracing method, we study the light rays distributions around the hairy black hole with various parameters. We then explore the optical appearances images of the hairy Schwarzschild black hole with both single photon sphere and double photon spheres, when it is under the illumination of static thin accretion disk (section <ref>) and spherical accretion (section <ref>), respectively. Finally, section <ref> is our closing remarks. § PHOTON SPHERE OF THE HAIRY SCHWARZSCHILD BLACK HOLES In this section, we will show a brief review on the idea of GD approach and the hairy Schwarzschild black holes constructed from GD approach by Ovalle <cit.>. Then we investigate the nature of motions of photon in the vicinity of the hairy black hole. 
The no-hair theorem in classical GR states that black holes are only described by mass, electric charge and spin <cit.>. But it is possible that the interaction between black hole spacetime and matters brings in other charge, such that the black hole could carry hairs. The physical effect of these hairs can modify the spacetime of the background of black hole, namely hairy black holes may form. Recently, Ovalle et.al used the GD approach to obtain a spherically symmetric metric with hair <cit.>, in which the corresponding Einstein equation is expressed by R_μν-1/2Rg_μν=8πT_μν. Here T_μν is the total energy momentum tensor written as T_μν=T_μν+ϑ_μν where T_μν and ϑ_μν are energy momentum tensor in GR and the one introduced by matter fields or others, respectively. ∇^μT_μν=0 is satisfied because of the Bianchi indentity. It is direct to prove that when ϑ_μν=0, the solution to (<ref>) degenerates into Schwarzschild metric. The hairy solution with proper treatment (strong energy condition) of ϑ_μν was constructed and the detailed algebra calculations were shown in <cit.>. Here we will omit their steps, and directly write down the formula of the metric for the hairy Schwarszchild black hole ds^2=-f(r)dt^2+dr^2/f(r)+r^2(dθ^2+sin^2θ dϕ^2)   with   f(r)=1-2M/r+α e^-r/(M-l_o/2). This metric describes certain deformation of the Schwarszchild solution due to the introduction of additional material sources, which can be scalar hair, tensor hair, fluid-like dark matter, and so on. When α=0, the metric reduces to the Schwarzchild solution in GR, namely with the absence of the matters. In this solution, M is the black hole mass, α is the deformation parameter due to the introduction of surrounding matters and it describes the physics related with the strength of hairs, and l_o=α l with l a parameter with length dimension corresponds to the charge of primary hair which should satisfy l_o≤ 2M to guarantee the asymptotic flatness. The event horizon is determined by the largest root to the equation r_h+α r_h e^-r_h/(M-l_o/2)=2M, which can only be solved with numeric. Next, we will build the equations for the null geodesic motion of the hairy Schwarzschild black holes. The motions of photons are described by Euler-Lagrange equation, d/dλ(∂ℒ/∂ẋ^μ)-∂ℒ/∂ x^μ=0, where ẋ^μ=dx^μ/dλ represents the four-velocity of photon with λ the affine parameter, and for the metric (<ref>) the Lagrangian of the photon is ℒ=1/2g_μνẋ^μẋ^ν=1/2[-f(r)ṫ^2+1/f(r)ṙ^2+r^2(θ̇^2+sin^2θϕ̇^2)]. Since ∂_t and ∂_ϕ are Killing vector fields in the hairy Schwarzschild spacetime, we can obtain two conservation constants E and L, E≡-∂ℒ/∂ṫ=f(r)ṫ,          L=∂ℒ/∂ϕ̇=r^2 sin^2 θϕ̇, which indicate the energy and angular momentum of the photon. Moreover, due to the spherical symmetry of the spacetime, we can consider the motions of photon on the equatorial plane (θ=π/2) for convenience. Then, considering ℒ=0 for photon and defining the impact parameter b≡ L/E, we can extract three equations of motion for the photons around the hairy Schwarzschild black hole from (<ref>) as ṫ=1/bf(r),    ϕ̇=±1/r^2, ṙ^2=1/b^2-V_eff(r), where the effective potential takes the form V_eff(r)=1/r^2f(r). The radial geodesic equation (<ref>) determines the fate of a given photon depending on its impact parameter. In particular, when b satisfies 1/b^2=V_eff(r_0) at certain r=r_0, we have ṙ=0 which means that the photon is deflected at the minimum distance r_0 from the central black hole. 
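For readers who wish to reproduce these quantities, the metric function, the effective potential and the horizon condition r_h + α r_h e^{-r_h/(M - l_o/2)} = 2M can be evaluated directly. The sketch below is ours, not part of the original computation; in particular the root-finding bracket (0, 2M] and the assumption of a single horizon root inside it are our own choices, motivated by the fact that the extra exponential term keeps r_h below the Schwarzschild value 2M.

```python
import numpy as np
from scipy.optimize import brentq

M = 1.0                                     # all quantities in units of M, as in the text

def f(r, alpha, l_o):
    """Metric function of the hairy Schwarzschild black hole."""
    return 1.0 - 2.0 * M / r + alpha * np.exp(-r / (M - l_o / 2.0))

def V_eff(r, alpha, l_o):
    """Effective potential for null geodesics, V_eff = f(r) / r^2."""
    return f(r, alpha, l_o) / r ** 2

def horizon_radius(alpha, l_o):
    """Numerical root of r + alpha r exp(-r/(M - l_o/2)) = 2M.

    Assumes a single root in the bracket (0, 2M]; the exponential term is positive,
    so the event horizon always lies below the Schwarzschild radius 2M.
    """
    g = lambda r: r + alpha * r * np.exp(-r / (M - l_o / 2.0)) - 2.0 * M
    return brentq(g, 1e-3, 2.0 * M)

for alpha, l_o in [(2.0, 0.2), (2.0, 0.6), (3.0, 0.2), (6.6, 0.9)]:
    print(f"alpha = {alpha}, l_o = {l_o}:  r_h/M ≈ {horizon_radius(alpha, l_o):.4f}")
```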
The critical impact parameter, b_ph, is determined by the vanishing of ṙ at the maximum of the potential, b_ph=1/√(V_eff(r_ph)), V_eff'(r_ph)=0, V_eff”(r_ph)<0. Thus, the geodesic trajectories with b_ph are usually known as photon spheres with radius r_ph, and the orbits of these light rays are unstable, meaning that a small radial perturbation will make it either run (b>b_ph) to infinity or to be captured (b<b_ph) into the event horizon of the black hole. For static black hole, the photon sphere generates the boundary of black hole shadows, and plays a key role in the image of black hole. The existence of single photon sphere outside the event horizon of black hole is commonly considered because it physically connects with the respect of dominant energy condition (DEC) and strong energy condition (SEC) <cit.>. More recently, physicists have great interests in the black holes with two photon spheres, inspired by the work <cit.> in which the authors found the existence of two photon spheres outside the event horizon of dyonic black holes when the SEC is violated. It was found that double photon spheres could lead to echo signal in dyonic black holes <cit.> and profound observed appearances with various illuminations around scalarized RN black hole <cit.>. In particular, very recently, many black holes were found to have two photon spheres even though DEC and SEC are satisfied <cit.>. The hairy Schwarzschild black hole (<ref>) is the one they considered, and double photon spheres can exist when both α and l_o are large enough of which the parameter region were present in the figure 6 of <cit.>. Here, with the help of (<ref>), we shall explicitly show the photon spheres and its related critical impact parameters of the hairy Schwarzschild black hole. It is noted that in this sector, we have the quantities (α,l_o,r,b,M), which will be rescaled by M to be dimensionless (α,l_o/M,r/M,b/M,1). Therefore, from now on, all the physical quantities will be evaluated as the dimensionless ones, and so in the calculation we are safe to set M=1. For the sake of the discussion, we shall fix l_o=0.9 and tune α to include different configurations. The radius of photon sphere(s) as a function of α is shown in FIG.<ref>. For most values of α (yellow and pink regions), only one photon sphere with radius r_ph exists corresponding to the single maximum of the potential function, while their exists a region of α (blue region), in which two photon spheres appear corresponding to two maximums of the potential function, and we denotes the radius r_ph1<r_ph2. It is obvious that the radius of photon sphere decreases due to the additional fields. Then in FIG.<ref>, we depict the critical impact parameter(s). To explicitly distinguish the configurations, we only show the related results of b_c for the parameters in the central part of FIG.<ref>. Similarly, b_ph also decreases as the α increases. In particular, in the case with double photon spheres, as α increases, b_ph1 for the inner photon sphere decreases faster and will then be smaller than b_ph2 for the outer photon sphere. From (<ref>), this means that for some parameters, the maximum value of the potential function for inner photon sphere will become larger than that for outer photon sphere. Therefore, in terms of the behavior of b_ph, we shall classify the configurations into three types: 1 single b_ph (yellow and pink regions), 2 b_ph1>b_ph2 (dark blue region) and 3 b_ph1<b_ph2 (light blue region). 
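The classification can be reproduced from V_eff alone by scanning for its local maxima and evaluating b_ph = 1/sqrt(V_eff(r_ph)) at each of them. In the sketch below the search window and grid resolution are ad hoc choices on our part; maxima with V_eff ≤ 0 (in particular the region inside the event horizon) are discarded, and one or two photon spheres are reported accordingly.

```python
import numpy as np

M = 1.0

def V_eff(r, alpha, l_o):
    return (1.0 - 2.0 * M / r + alpha * np.exp(-r / (M - l_o / 2.0))) / r ** 2

def photon_spheres(alpha, l_o, r_min=0.8, r_max=10.0, n=100001):
    """Local maxima of V_eff on a fine grid and the associated b_ph = V_eff(r_ph)^(-1/2).

    The window (r_min, r_max) is chosen by hand for the parameter values used below;
    grid points where V_eff <= 0 (including radii inside the horizon) are discarded.
    """
    r = np.linspace(r_min, r_max, n)
    V = V_eff(r, alpha, l_o)
    idx = np.where((V[1:-1] > V[:-2]) & (V[1:-1] > V[2:]))[0] + 1
    return [(r[i], 1.0 / np.sqrt(V[i])) for i in idx if V[i] > 0]

for alpha, l_o in [(2.0, 0.2), (6.4, 0.9), (6.6, 0.9)]:
    out = ",  ".join(f"r_ph/M ≈ {r:.3f}, b_ph/M ≈ {b:.3f}" for r, b in photon_spheres(alpha, l_o))
    print(f"alpha = {alpha}, l_o = {l_o}:  {out}")
```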
The typical behavior of the potentials for the three types of configurations is presented in FIG.<ref>-Fig.<ref>. * For the configuration 1 shown in FIG.<ref>, as the photon moves closer to the black hole, the effective potential first increases to the maximum value at r_ph which corresponds to b_ph, and then decreases. For the photon with b<b_ph, it will be bounded by the effective potential and falls into the black hole. On the contrary, for the photon with b>b_ph, it will spread to infinity. * For the double photon spheres case, the configuration 2 with b_ph1>b_ph2 means that the maximum of the effective potential for the inner photon sphere is smaller than that for the outer photon sphere as shown in Fig.<ref>. Thus, the inner photon sphere indeed cannot escape from the binding of gravity of black hole to infinity. This implies that the fate of photon with the configuration 2 is in essence similar to that with configuration 1. * Fig. <ref> shows the configuration 3 with the effective potential of the inner photon sphere larger than that of the outer photon sphere. It is obvious that in this case the light rays of the inner photon sphere may escape away from the binding of the black hole and reach the observer at infinity. Moreover, in all cases, the effective potential vanishes at the event horizon as expected. The above analysis on the fate of various photons will be checked by solving the geodesic equations with the use of ray tracing method in next section. § RAY TRACING AND PHOTON TRAJECTORY We will solve the geodesic equations and figure out the photon trajectory around the hairy Schwarzschild black hole, which is to pave the way to study the optical appearance of the black hole. The trajectory of light ray is determined by the deformed geodesic equation dr/dϕ=ṙ/ϕ̇=±1/r^2√(1/b^2-V_eff(r)). To proceed, we use the backtracked ray-tracing method in which the photon trajectory arriving to the observer's screen is backtracked by employing the above equation to determine the point of the sky where it is omitted. Thus, for the photon with impact parameter b>b_ph, it may turn around the central hairy black hole certain times, subsequently, we can divide the impact parameter region in terms of the total number of photon orbits n=ϕ/2π which accounts the number of its intersection with the equatorial plane. In details, for the observer located at the north pole, the photons are classified into three classes: the first class is defined as the direct emission with 1/4<n≤3/4, where the light ray intersects the equatorial plane only once (m=1). The second class with 3/4<n≤5/4, where light ray intersects the equatorial plane twice (m=2), corresponds to the lensed ring emission. The final class is photon ring emission where the light ray with n>5/4 crosses the accretion disk at least three times (m=3). Then, we shall explore the total number of orbit as a function of impact parameter, and then show the photon trajectory seen by the aforementioned observer. We intend to study the effect of the hairy parameters on the classification of the photon emission, and also check the difference between the cases with single photon sphere and double photon spheres. §.§ Single photon sphere For the configuration with single photon sphere, the typical results of the total number of photon orbits and the photon trajectory in the observer's sky are shown in Fig.<ref> where we choose α=2 and l_o=0.2 as a sample. 
At b=b_ph=4.27181, the total number of orbits n is divergent, and the photon travels around the black hole many times. For b<b_ph, as b increases, the total number n of orbits increases, while for b>b_ph, n decreases as b increases. The red, gold, and black curves in the left plot correspond to the photon ring emissions with b∈(4.24664,4.37160) , lensed ring emissions with b ∈(3.97754,4.24664)∪ (4.37160,5.87177), and direct emissions with b∉(3.97754,5.87177), which are denoted by the trajectories with the same colors in the right plot where their intersections with the equatorial plane are clear. To study the effects of hairy parameters on this phenomena, we do the parallel calculations for various values of parameters. The data of event horizon, photon sphere radius, critical impact parameters and regions for direct emissions, lensed ring emissions and photon ring emissions for fixed α=2 with samples of l_o, are listed in Table <ref>. We see that as the primary hair parameter l_o increases, the critical impact parameter b_ph increases, and all border values of b for direct, lensed ring and photon ring emissions gradually increases. But the width of photon ring and lensed ring emissions decrease, which implies that the stronger hairy charge of the black hole may make the rings more difficult to be detected. Moreover, those results for fixed l_o=0.2 with different derivation α are listed in Table <ref>. We see that in contract to the effect of l_0, larger α corresponds to smaller event horizon, photon sphere and various impact parameter borders of emissions. Moreover, with the increasing of α, the width of the photon ring emissions and the lensed rings emissions in the region b<b_ph first increases and then decreases, while the width of the lensed ring emissions with b>b_ph increases, indicating that the light deflection to the black hole becomes more intense for larger α. §.§ Double photon spheres We move on to discuss the direct emissions, lensed ring emissions and photon ring emissions of hairy Schwarzschild black holes with double photon spheres. The total number of photon orbits and the photon trajectory for the configuration 2 with α=6.4 and l_o=0.9 are shown in FIG.<ref>, in which the behaviors are similar to those in FIG.<ref> , only with wider photon ring emissions and lensed ring emissions. This is indeed expected because as we aforementioned in last section, that the double photon spheres with configuration 2 is in essence similar to the single photon sphere, and the inner photon sphere cannot escape to the distance observer. Thus, only one peak is observed in the total number of photon orbit, which is located at b=b_ph2, while the inner photon sphere is bounded by the black hole and cannot be captured by the observer. Meanwhile, we show the results for the configuration 3 with α=6.6 and l_o=0.9 in FIG. <ref>. Obviously, the behavior of n is significantly different from that for either single photon sphere or double photon spheres with configuration 2. Two peaks in n=ϕ/2π are observed, which corresponds to two photon spheres captured by the observer at the related critical impact parameters. The photon ring emission in this case is significantly enhanced, which will be more pronounced when the further apart the two critical impact parameters are. In addition, the ranges of direct, lensed ring, and photon ring emissions with respect to the impact parameter b are solved and the results are listed in Table <ref>. 
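Numbers of this kind can be reproduced with a bare-bones backward ray tracer that integrates the orbit equation implied by Eq. (<ref>) and records the total azimuthal sweep n = φ/2π. The sketch below is an illustration rather than the ray tracer used for the figures and tables; the observer radius, the hard-coded horizon value and the sample impact parameters are our own choices for α = 2 and l_o = 0.2.

```python
import numpy as np
from scipy.integrate import solve_ivp

M, alpha, l_o = 1.0, 2.0, 0.2
c = M - l_o / 2.0
f  = lambda r: 1.0 - 2.0 * M / r + alpha * np.exp(-r / c)
fp = lambda r: 2.0 * M / r ** 2 - (alpha / c) * np.exp(-r / c)   # df/dr
R_H = 1.41            # event horizon for these parameters (from the horizon equation)

def total_orbits(b, r_obs=1.0e4):
    """n = Delta(phi)/(2 pi) for a ray traced backwards from a distant face-on observer.

    Uses u = 1/r and the orbit equation u'' = -u f(1/u) + f'(1/u)/2, obtained by
    differentiating (du/dphi)^2 = 1/b^2 - u^2 f(1/u) with respect to phi.
    """
    u0 = 1.0 / r_obs
    up0 = np.sqrt(max(1.0 / b ** 2 - u0 ** 2 * f(1.0 / u0), 0.0))   # ingoing start

    def rhs(phi, y):
        u, up = y
        r = 1.0 / max(u, 1e-12)          # clamp to avoid evaluating the metric at r <= 0
        return [up, -u * f(r) + 0.5 * fp(r)]

    captured = lambda phi, y: y[0] - 1.0 / R_H        # ray falls through the horizon
    escaped = lambda phi, y: y[0] - 0.5 * u0          # ray escapes past twice the observer radius
    captured.terminal = escaped.terminal = True
    sol = solve_ivp(rhs, [0.0, 20.0 * np.pi], [u0, up0],
                    events=[captured, escaped], max_step=1e-2, rtol=1e-9, atol=1e-12)
    return sol.t[-1] / (2.0 * np.pi)

for b in (3.5, 4.1, 4.3, 5.0, 7.0):
    n = total_orbits(b)
    kind = "direct" if n <= 0.75 else ("lensed ring" if n <= 1.25 else "photon ring")
    print(f"b/M = {b}:  n ≈ {n:.3f}  ->  {kind} emission")
```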
According to the above analysis, in view of the observer in the north pole, the light rays around the black hole with two photon spheres with configuration 2 are essentially the same as that with single photon sphere. Only the double photon spheres with configuration 3 will bring additional observation features, as also addressed in <cit.>. So in the following study on optical appearance, we will only consider the cases with single photon sphere and double photon spheres with configuration 3, which for convenience will be denoted by single photon sphere and double photon spheres, respectively. So far, the analysis concentrates on the central depression of the image of the hairy Schwarzschild black hole seen by a far-away observer and the shapes of the light rings, which are idealized observables. Considering that the realistic astrophysical images mainly consist of the physics of the accretions around the central objects, it is of interest to explore the images of hairy Schwarzschild black hole illuminated by accretions. For convenience, we will focus on static and thin (geometrically and optically) accretions, which are disk and spherically symmetric, respectively. Besides the effects of hairy parameters on the optical appearances, we are especially interested in differentiating the optical appearances of the hairy Schwarzschild black hole with double photon spheres from that with single photon sphere. To this end, we choose three couples of hairy parameters, i.e, α=2 & l_o=0.2, α=2 & l_o=0.6 and α=3  &  l_o=0.2 to discuss the optical properties of hairy Schwarzschild black hole with single photon sphere, while α=6.6 & l_o=0.9 for double photon spheres. § SHADOWS AND RINGS WITH STATIC THIN ACCRETION DISK In this section we will explore the images of hairy Schwarzschild black hole illuminated by the optically and geometrically thin accretion disk, which is located at rest on the equatorial plane around the black hole, viewed face-on. Since the light ray will extract energy from the thin accretion disk each time when passing through it, so different types of emissions will contribute differently to the observed light intensity. The analysis in previous section indicates that the hair has a significant effect on widths of various emissions. So it is interesting to further study the observed intensities and see the hairy Schwarzschild black hole's observational appearance. §.§ Observed specific intensities and transfer functions Considering that the thin accretion disk emits isotropically in the rest frame of static worldlines, the specific intensity received by the observer with emission frequency ν_e is I_o(r, ν_o)=g^3 I_e(r,ν_e), where g=ν_o/ν_e=√(f(r)) is the redshift factor, and I_e(r,ν_e) is the specific intensity of the accretion disk. The total observed intensity I_obs(r) can be obtained by integrating all observed frequencies of I_o(r, ν_o) written as I_obs(r)=∫ I_o(r, ν_o) dν_o=∫ g^4 I_e(r,ν_e) dν_e=f(r)^2 I_em(r), where we denote I_em(r)=∫ I_e(r,ν_e) dν_e as the total emitted intensity. We note that if the trajectory of photon followed backward from the observer intersects the disk, the photon from accretion disk emission will contribute the brightness to the observer <cit.>. Thus, ignoring the absorption, the total observed intensities are determined by each intersection, yielding I_obs(b)=∑_mf(r)^2I_em(r)|_r=r_m (b), where r_m(b) denotes for the coordinate of the m-th intersecting position between the light ray emitted with impact parameter b and the accretion disk. 
r_m (b) is also known as the transfer function because it describes the mapping from the impact parameter of the photon to the m-th hitting position on the disk, and its slope dr/db describes the demagnification factor at each b <cit.>. So, before studying the total observed intensities, we have to evaluate the transfer functions. We shall focus on the first three transfer functions since the higher cases contribute much less to the total luminosity. As illustrated in <cit.>, the first transfer function corresponds to the direct image originating from direct, lensed and photon rings emission; the second transfer function can origin from lensed ring and photon ring emission; while the third transfer function can only origin from photon ring emission. In FIG.<ref>, we depict the transfer functions for the hairy Schwarzschild black hole with single photon sphere (α=2, l_o=0.2) and that with double photon spheres (α=6.6, l_o=0.9). We can read off the following properties. (i) In both cases, the slope of the first transfer function is almost 1, which means that this direct image can be seen as the source profile after redshift. (ii) For the hairy Schwarzschild black hole with single photon sphere, the slopes of the second and third transfer functions are much more than that of the first one, and r_3(b) is steeper than r_2(b). This implies that in this case the first transfer function will give the largest contribution to the total luminosity and the second and third ones are highly demagnified. Additionally, we also check the effects of hairy parameters on the transfer functions. They have some effects on the width of the second and three transfer functions and their slopes, but comparing to the contribution from the first transfer function, those from others are still insignificant. (iii) For the hairy Schwarzschild black hole with double photon spheres, the widths of second and third transfer functions have been significantly widened. In particular, for b≲4.6 their slopes are even smaller than that for first transfer function, indicating that they are not demagnified. These imply that due to the existence of double photon spheres, the second and third transfer functions could make important contributions to the total luminosity. With the transfer function in hands, Eq.(<ref>) indicates that we can then evaluate the observed intensities from each transfer function and so the total observed intensity, once the emission function is given. It is natural to regard the brightness contributed from the first, second and third transfer function as the direct, lensed ring and photon ring intensity, respectively. Next, to testify the key properties read from the transfer functions, we will consider some specific emission profiles of accretion disk, and then evaluate the total observed intensity in terms of each kind of intensity. §.§ Optical appearances We shall consider the following two toy-models emission functions <cit.> to figure out the optical appearances of the hairy Schwarzschild black hole. §.§.§ Model I In Model I, we consider that the emission of the accretion disk starts from the innermost stable circular orbit r_isco, and the emission specific intensity is attenuated by the second-order function of the radial coordinate Here I_0 is the maximum intensity (the same below) and the r_isco of the hairy Schwarzschild (Kerr) black hole was calculated by some of us in <cit.>. 
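Before turning to the results, note that once the transfer functions have been tabulated (for instance with a ray tracer of the kind sketched in the previous section), Eq. (<ref>) can be evaluated mechanically. The helper below assumes the transfer functions are supplied as callables that return NaN when the m-th disk crossing does not exist; the emission profile and the stand-in first transfer function are crude placeholders for illustration only, not the Model I or Model II expressions.

```python
import numpy as np

M, alpha, l_o = 1.0, 2.0, 0.2
f = lambda r: 1.0 - 2.0 * M / r + alpha * np.exp(-r / (M - l_o / 2.0))

def observed_intensity(b, transfer_fns, I_em):
    """I_obs(b) = sum_m f(r_m(b))^2 I_em(r_m(b)) for a face-on thin disk.

    transfer_fns : callables r_m(b); return np.nan if the ray has no m-th disk crossing.
    I_em         : emitted intensity profile of the disk.
    The factor f(r)^2 is the frequency-integrated redshift g^4 with g = sqrt(f(r)).
    """
    total = 0.0
    for r_m in transfer_fns:
        r = r_m(b)
        if np.isfinite(r) and r > 0:
            total += f(r) ** 2 * I_em(r)
    return total

# placeholder emission profile with an assumed inner edge (for illustration only)
r_in = 5.0
I_em = lambda r: 1.0 / (1.0 + (r - r_in) ** 2) if r >= r_in else 0.0

# crude stand-in for the first transfer function (slope close to 1 far from the shadow)
r_1 = lambda b: b if b > 4.3 else np.nan
print([round(observed_intensity(b, [r_1], I_em), 4) for b in (5.0, 6.0, 8.0)])
```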
The observed intensities and images of the hairy Schwarzschild black hole illuminated by the above thin accretion model I are shown in FIG. <ref> (single photon sphere) and FIG. <ref> (double photon spheres) . In the figures, the leftmost column shows the different observed intensities originated from the direct (black), lensed ring (gold) and photon ring (red) intensity respectively; then we present the total observed intensities which are translated into the optical appearances in their right sides. The same layout will also be used in the other model. Let us firstly analyze the rings and images of the hairy Schwarzschild black hole with single photon sphere, and check the effects of the hairy parameters. The results depicted in FIG.<ref> show significantly different behaviors for different hairy parameters. We discuss the main properties. (i) The central dark region, i.e, the shadow, is smaller for larger deviation parameter α, but it becomes larger for stronger hairy charge l_o. This means that in certain cases, l_o may balance the deviation of hairy black hole image from Schwarzschild black hole. These properties for shadow are held for other accretion model, because the critical impact parameter or the critical curve is mainly determined by the geometry itself, independent of the surroundings. (ii) The optical appearance image with α=2 & l_o=0.6 is similar as that for Schwarzschild black hole <cit.>, namely, the direct intensity dominates the total luminosity under a bright ring (originated from direct intensity) of radiation, which encloses a thinner and dimmer ring contributed by the lensed ring intensity and ends in an even thinner ring originated from the photon ring intensity. When we decrease (increase) l_o (α), the three intensities could mix with each other such that the brightness origination of the total observed intensity at each b could be completely different from the Schwarzschile (-like) cases. For α=2 & l_o=0.2 due to the mixture of the direct intensity and lensed right intensity, inside the bright ring (originated from direct intensity and lensed right intensity) of radiation, there exists a wide bright region enclosing a ring contributed from the direct intensity and then ending in a thinner ring originated from the photon ring intensity. For α=3 & l_o=0.2, the direct intensity runs into the left side of photon ring intensity, so it contributes to the inner-most ring in the image. In a word, the radius and brightness contributions of the light rings both closely depend on the hairy parameters. However, the optical appearance for hairy Schwarzschild black hole for certain hairy parameters could be the same as that for the Schwarzschild black hole, implying the potential degeneracy in the images, because α and l_o may counteract each other's effects. We move on to diagnose the image of hairy Schwarzschild black hole with double photon spheres by the choice of α=6.6 & l_o=0.9. The presence of a second photon sphere induces new ray sources in the region b_ph1≤ b ≤ b_ph2, so it should introduce new ingredients in the optical appearances. The rich features can be seen in FIG.<ref>, where the inner edge of the disk in the emitted luminosity locates at r_isco/M=1.0985. From the left plot, we see that in contract to the single photon sphere case, a new pole appears in each intensity, which leads to the presence of additional peaks in the total observed intensity (in the middle plot). 
Thus, we see additional new light rings appearing in the inner region of the optical appearance image besides the usual three ones found in single photon sphere case. §.§.§ Model II In Model II, the emission is assumed to start from the photon sphere r_ph, and decay suppressed by the third power The results are depicted in FIG.<ref> (single photon sphere) and FIG.<ref> (double photon spheres). It is obvious from the left plots that this accretion construction allows the direct intensity to cross the critical impact parameter region due to the gravitational redshift, and becomes the dominant contribution there. As the impact parameter increases, we will see the combinations among/beween the photon ring, lensed ring, and direct intensities into the total observed intensity, and the direct intensity with radiation dominates again as we further increase the impact parameter. Therefore, as shown in the right plot of FIG.<ref>, the optical appearance of hairy Schwarzschild black hole with single photon sphere illuminated by Model II is very similar to that for Schwarzschild black hole <cit.>. The general feature is that a dark shadow is surrounded by a wide region of luminosity enclosing two bright rings, and the intensity of the inner ring is contributed by the direct intensity while the outer ring is the joint result of all three intensities. In addition, the decreasing (increasing) of l_o (α) will broaden the wide region of luminosity but reduce the brightness of the light rings. For the hairy Schwarzschild black hole with double photon spheres depicted in FIG. <ref>, the observed intensities and their contributions' origination at small or large impact parameters are similar to that in single photon sphere case. However, in the intermediate impact parameter region, a second photon sphere again causes a new spike in each intensity and so additional peaks appear in the total observed intensity. Therefore, the wide region of luminosity in the optical appearance could enclose more than two rings, comparing to that in single photon region case. § SHADOWS AND RINGS WITH STATIC THIN SPHERICAL ACCRETION In this section, we will investigate the rings and images of hairy Schwarzschild black hole surrounded by an optically and geometrically thin static accretion with spherically symmetric. Thus, the specific intensity I(ν_o) observed by a distant observer at r=∞ (measured in erg s^-1 cm^-2 str^-1 Hz^-1 ) radiated by the static accretion can be obtained by integrating the specific emissivity along the photon path γ <cit.> I(ν_o)=∫_γ g^3 j_e(ν_e)dl_prop, where g=ν_o/ν_e=f(r)^1/2 is the redshift factor, ν_o and ν_e are the observed photon frequency and the emitted photon frequency respectively. j_e(ν_e) is the emissivity per unit volume in the rest frame and we will set j_e(ν_e)∝δ(ν_r-ν_e)/r^2 as usual <cit.>, where ν_r is the emitter’s rest-frame frequency. dl_prop is the infinitesimal proper length, dl_prop=√(1/f(r)dr^2+r^2dϕ^2)=√(1/f(r)+r^2(dϕ/dr)^2)dr, where the formula of dϕ/dr can be read off from (<ref>). Then, integrating Eq.(<ref>) over all the observed frequencies, we get the total intensity observed by a distant observer I_obs=∫_ν_o I(ν_o)dν_o=∫_ν_e∫_γ g^4 j_e(ν_e)dl_prop dν_e=∫_γf(r)^2/r^2√(1/f(r)+r^2(dϕ/dr)^2)dr. It is obvious that the observed intensity depends on the radial distance r and the impact parameter b. 
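The integral above can be evaluated directly once (dφ/dr)^2 = 1/(r^4(1/b^2 - V_eff)) is substituted from the geodesic equation. The sketch below distinguishes plunging rays (b < b_ph, a single radial branch from just outside the horizon outwards) from deflected rays (b > b_ph, two branches joined at the turning point r_0 where 1/b^2 = V_eff(r_0)); the hard-coded horizon, photon-sphere and critical-impact-parameter values, the turning-point bracket and the finite outer cutoff r_out are our own assumptions for α = 2 and l_o = 0.2, the cutoff being harmless because the emissivity decays like 1/r^2.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

M, alpha, l_o = 1.0, 2.0, 0.2
c = M - l_o / 2.0
f = lambda r: 1.0 - 2.0 * M / r + alpha * np.exp(-r / c)
V = lambda r: f(r) / r ** 2
R_H, R_PH, B_PH = 1.41, 2.13, 4.27        # horizon, photon sphere, critical b (precomputed)

def integrand(r, b):
    """f^2/r^2 * sqrt(1/f + r^2 (dphi/dr)^2), with (dphi/dr)^2 = 1 / (r^4 (1/b^2 - V))."""
    return f(r) ** 2 / r ** 2 * np.sqrt(1.0 / f(r) + 1.0 / (r ** 2 * (1.0 / b ** 2 - V(r))))

def spherical_intensity(b, r_out=50.0):
    if b > B_PH:
        # deflected ray: turning point r_0 > r_ph, traversed once inwards and once outwards
        r0 = brentq(lambda r: 1.0 / b ** 2 - V(r), R_PH, r_out)
        return 2.0 * quad(integrand, r0 * (1.0 + 1e-6), r_out, args=(b,), limit=200)[0]
    # plunging ray: a single branch from just outside the horizon out to the cutoff r_out
    return quad(integrand, R_H * (1.0 + 1e-6), r_out, args=(b,), limit=200)[0]

for b in (3.0, 4.0, 4.5, 5.5, 7.0):
    print(f"b/M = {b}:  I_obs ≈ {spherical_intensity(b):.4f}")
```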
The observed intensity for the hairy Schwarzschild black hole with single photon sphere is depicted in FIG.<ref>, which shows that in each case, there is a peak indicating a bright ring in the image as found in Schwarzschild black hole. As l_o (α) decreases (increases), the observed intensity including its peak will be enhanced, which means that the corresponding luminosity of the black hole becomes brighter. Besides, for smaller (larger) l_o (α), the central region inside the bright ring shrinks while the bright luminosity region with radiation region becomes wider. These observations again imply the competition of l_o and α in their effects on the optical appearances of hairy Schwarzschild black hole. The picture described above is clearer seen in the images of the black hole, shown in FIG. <ref>-FIG.<ref> which present the distribution of total observed intensity in two-dimensional plane. We move to check the image of the hairy Schwarzschild black hole with double photon spheres, of which the observed intensity is depicted in FIG. <ref>. The key feature in this case is the appearing of two peaks in the total observed intensity (the left plot), indicating that two bright rings will be observed in the optical appearance image (the right plot). The additional bright ring is again introduced by the second photon sphere, which allows the related black hole to be distinguished from the one with single photon sphere. § CLOSING REMARKS Studying the validity or violations of the no-hair theorem is one of the powerful ways to test gravity, and this subject becomes more fundamental when we refer to the existence of additional matter fields. The remarkable images of black holes published by the EHT collaborators open a new epoch of testing gravity in the strong field regime, including the probe of the validity or violations of no-hair theorem. In this paper, we firstly analyzed the light rays around hairy Schwarzschild black hole, which is a solution to the extended theory of general relativity including surrounding matter sources. Based on this, we then employed the ray tracing method to explore the shadows, rings and optical images of the hairy black hole when it is illuminated by static and (optically and geometrically) thin accretions. We found that comparing to the Schwarzschild black hole, the hairy parameters introduce significant influences into the distributions of light rays around the hairy black hole, such that the rings and images exhibit richer features. Depending on the hairy parameters (α and l_o), the radial effective potential of the photon could have one or two maximum values. Though the configurations of the effective potential are diversified, we pointed out that for a far distance observer at north pole, the observational features of hairy Schwarzschild black hole could be well reflected by two types of configurations, re-denoted by the hairy Schwarzschild black hole with single photon sphere and double photon spheres, respectively. In addition, the hairy charge l_o enhances the event horizon, photon sphere, critical impact parameter and the borders of impact parameter for direct, lensed ring and photon ring emissions (see Table <ref>), while, increasing the deviation parameter α suppresses all those values (see Table <ref>). This competitive effect could interpret the potential degeneracy in the images between the hairy Schwarzschild black hole and Schwarzschild black hole, as they are illuminated by accretions. 
After the impact parameter region was determined, we considered two toy models of light sources as static thin accretions disk, of which the inner edge locates at the ISCO for timelike object and the photon sphere, respectively. We firstly analyzed the first three transfer functions which connect m-hitting points on the disk with the impact parameter of the light ray. We found that for hairy Schwarzschild black hole with single photon sphere, the brightness contributions from the second and third transfer functions are puny comparing to the first transfer function because they will demagnify sharply (FIG.<ref>a). While for the hairy Schwarzschild black hole with double photon spheres, the widths of the second and third transfer functions have been significantly widened, and in certain region, their demagnification even smaller than that for first transfer function (FIG.<ref>b). By collecting the transfer functions, we have evaluated the corresponding observed intensities respectively originated from the direct, lensed ring and photon ring intensity, and also their resultant total observed intensity for both accretion models, from which we transfer to the optical appearance image in the observer's plane. Our observations can be summarized as follows. * The images of the hairy Schwarzschild black hole are very different depending on the emission model of the accretion disk, which also happens for Schwarzschild black hole. However, the central dark region, i.e, the shadow, is smaller (larger) for larger l_o (α). These properties for shadow are independent of the disk models, because the critical impact parameter or the critical curve is mainly determined by the geometry itself rather than the surroundings. * For the hairy Schwarzschild black hole with single photon sphere, we can always observe three light rings in the images in model I (FIG.<ref>), however, the hairy parameters seriously affect the position and width of the light rings. So we may observe completely different optical appearances from that of Schwarzschild black hole, but due to the competitive affects of the two parameters, we can also obtain the degeneracy of images between hairy Schwarzschild black hole and Schwarzschild black hole. While under the illumination of Modle II, we found that the optical appearance of hairy Schwarzschild black hole with single photon sphere was very similar to that for Schwarzschild black hole. Namely, a dark shadow is surrounded by a wide region of luminosity enclosing two bright rings (FIG.<ref>). However, it was observed that decreasing (increasing) l_o (α) could broaden the wide region of luminosity but obfuscate the brightness of the light rings. * For the hairy Schwarzschild black hole with double photon spheres, it is obvious that a second photon sphere could introduce additional peaks in the total observed intensity in both accretion disk models (FIG.<ref> and FIG.<ref>). Thus, when compared to the case of a single photon sphere, additional new light rings were observed in the images of those black holes, making them distinguishable. Finally, we investigated the rings and images of the hairy Schwarzschild black hole under the illumination of a static, spherically symmetric thin accretion disk. We also found that l_o and α have competitive effect on shadow size, brightness of ring in the optical appearances image (FIG.<ref>). This again could lead to the potential degeneracy between the black holes with and without hair. 
Moreover, under the spherical accretion, we observed two bright rings in the optical appearance of the hairy Schwarzschild black hole with double photon spheres (FIG.<ref>). This also provides an alternative tool to identify black holes with double photon spheres. This work is partly supported by Natural Science Foundation of Jiangsu Province under Grant No.BK20211601, Fok Ying Tung Education Foundation under Grant No.171006, the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant No. KYCX22_3452 and KYCX21_3192, and Top Talent Support Program from Yangzhou University.
http://arxiv.org/abs/2306.08938v2
20230615082141
Scalable Resource Management for Dynamic MEC: An Unsupervised Link-Output Graph Neural Network Approach
[ "Xiucheng Wang", "Nan Cheng", "Lianhao Fu", "Wei Quan", "Ruijin Sun", "Yilong Hui", "Tom Luan", "Xuemin Shen" ]
eess.SY
[ "eess.SY", "cs.LG", "cs.SY" ]
Scalable Resource Management for Dynamic MEC: An Unsupervised Link-Output Graph Neural Network Approach Xiucheng Wang1, Nan Cheng1, Lianhao Fu1, Wei Quan2, Ruijin Sun1, Yilong Hui1, Tom Luan3, Xuemin (Sherman) Shen4 1School of Telecommunications Engineering, Xidian University, Xi'an, 710071, China 2School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China 3School of Cyber Engineering, Xidian University, Xi'an, 710071, China 4 Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada Email: {xcwang_1, lhfu}@stu.xidian.edu.cn, dr.nan.cheng, [email protected], {sunruijin, ylhui, tom.luan}@xidian.edu.cn, [email protected] July 31, 2023 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Deep learning has been successfully adopted in mobile edge computing (MEC) to optimize task offloading and resource allocation. However, the dynamics of edge networks raise two challenges in neural network (NN)-based optimization methods: low scalability and high training costs. Although conventional node-output graph neural networks (GNN) can extract features of edge nodes when the network scales, they fail to handle a new scalability issue whereas the dimension of the decision space may change as the network scales. To address the issue, in this paper, a novel link-output GNN (LOGNN)-based resource management approach is proposed to flexibly optimize the resource allocation in MEC for an arbitrary number of edge nodes with extremely low algorithm inference delay. Moreover, a label-free unsupervised method is applied to train the LOGNN efficiently, where the gradient of edge tasks processing delay with respect to the LOGNN parameters is derived explicitly. In addition, a theoretical analysis of the scalability of the node-output GNN and link-output GNN is performed. Simulation results show that the proposed LOGNN can efficiently optimize the MEC resource allocation problem in a scalable way, with an arbitrary number of servers and users. In addition, the proposed unsupervised training method has better convergence performance and speed than supervised learning and reinforcement learning-based training methods. The code is available at <https://github.com/UNIC-Lab/LOGNN>. edge computing, link-output graph neural network, scalability, unsupervised learning § INTRODUCTION In recent years, mobile edge computing (MEC) has garnered widespread attention for its ability to reduce task processing latency by providing edge computing resources to users with limited computing capabilities <cit.>. The conventional approach involves transmitting user tasks to the server via a wireless network, with several variables optimized to minimize task computing delays, such as offloading proportion, user transmission power, and server computing resource allocation, among others. 
Given the critical role of MEC in enhancing user experience, numerous researchers have focused their efforts on designing resource management methods that can further optimize MEC performance <cit.>. Drawing inspiration from the remarkable achievements of deep learning, several studies have employed neural networks (NN) to address the optimization problem in MEC, outperforming traditional optimization methods both in terms of performance and algorithm execution time <cit.>. The dynamics of MEC systems, which are changes in numbers and locations of the edge servers (e.g., flying drones as MEC servers) and users (e.g., vehicular users), raise two challenges in NN-based methods: low scalability and high training costs <cit.>. Scalability plays a crucial role in determining whether a new NN architecture needs to be manually designed and retrained when the number of edge servers and users undergoes changes. Additionally, training costs are a key factor in determining the ability of an NN-based method to be rapidly deployed in the edge network. Some recent works explore using graph neural network (GNN) to deal with the scalability issue in input space, which includes usually the features of network nodes. This is because the inference of GNN relies on the message passing method which is an input dimension-independent method <cit.>. However, in MEC, the dimension of offloading and resource allocation decisions can also change as the network scales. For instance, a task can be offloaded to a varying number of servers, thereby leading to a unique scalability issue, i.e., scalability in the decision space. Unfortunately, the conventional node-output GNN, which allocates resources from an edge node to others through the dimension-fixed node features vector, fails to address the dimension change of the decision space caused by the changing size of the edge network. Thus, the development of a novel approach that can efficiently tackle the scalability issues in both input and decision space remains a crucial area of research in optimizing MEC. This paper presents a novel resource management scheme for dynamic MEC that leverages the capabilities of a link-output Graph Neural Network (LOGNN). Despite the fixed dimension of output link features, the number of graph links increases as the number of edge servers and users increases. Therefore, by defining the proportion of resources allocated from the servers/users to each user/server as the link feature from server/user nodes to user/server node, the proposed LOGNN-based resource management scheme can efficiently deal with the scalability issue in decision space. In addition to scalability, high training costs are another essential drawback of NN-based methods in dynamic MEC. The high training cost comprises two main parts: the extreme costs of obtaining optimal solutions as training labels and the low convergence speed. Although the reinforcement learning (RL)-based training method alleviates the dependency on training labels <cit.>, its convergence speed and performance cannot be guaranteed. To overcome these challenges, this paper exploits a label-free unsupervised method to train the LOGNN. Parameters in the LOGNN are updated by the gradient of edge task processing delays with respect to the LOGNN parameters, where the task processing delays are derived explicitly from the output of the LOGNN. This approach not only reduces the training cost but also improves the convergence speed of the LOGNN. 
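To see concretely why link-level outputs sidestep the decision-space issue, consider the following minimal sketch (our illustration in PyTorch, not the paper's implementation): a per-link head maps each (user, server) embedding pair to a fixed-size decision vector, so the total number of decisions grows automatically with the number of links rather than being capped by the width of an output layer. All module and variable names are illustrative.

```python
import torch
import torch.nn as nn

class LinkOutputHead(nn.Module):
    """Maps pairs of node embeddings to per-link decisions.

    The output dimension per link is fixed, but the number of links
    (and hence of decisions) grows with the number of users/servers.
    """
    def __init__(self, dim: int, out_per_link: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, out_per_link))

    def forward(self, user_emb, server_emb):
        # user_emb: (N, dim), server_emb: (M, dim)
        N, M = user_emb.size(0), server_emb.size(0)
        u = user_emb.unsqueeze(1).expand(N, M, -1)    # (N, M, dim)
        s = server_emb.unsqueeze(0).expand(N, M, -1)  # (N, M, dim)
        return self.mlp(torch.cat([u, s], dim=-1))    # (N, M, out_per_link)

# The same head handles any (N, M) without re-designing the network:
head = LinkOutputHead(dim=16)
for N, M in [(4, 2), (10, 5)]:
    out = head(torch.randn(N, 16), torch.randn(M, 16))
    print(out.shape)  # (N, M, 2): e.g. offloading proportion and power per link
```

The same head can therefore be reused unchanged when servers or users join or leave the network.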
The main contributions of the paper are summarized as follows. * An innovative resource management scheme based on LOGNN is presented that efficiently addresses the scalability issues in both the input and decision space as the number of edge servers and users in dynamic MEC systems changes. The LOGNN is theoretically analyzed and compared to conventional node-output GNNs, demonstrating its superior adaptability. * A label-free unsupervised training method for LOGNN is introduced, which uses the gradient of the task processing delay with respect to the GNN parameters as the loss signal, accelerating convergence and enhancing overall performance. * Simulation results show that combining the proposed LOGNN with the unsupervised training method can flexibly and efficiently optimize task offloading and resource allocation in MEC. Furthermore, it outperforms supervised learning and RL-based training methods in terms of convergence speed and performance, showcasing its potential for practical implementation in dynamic MEC networks. § GRAPH MODELING OF MEC RESOURCE MANAGEMENT §.§ System Model and Problem Formulation In this paper, we consider a scenario with N users randomly located in the plane, where each user i, i∈{1,2,⋯,N}, has a task of size d_i to be processed. Meanwhile, M servers are also randomly located in the plane, with computing resources f_j^s for server j; since the computing resources of users are limited, all tasks are transmitted to servers to be processed. To balance the workload among servers and reduce task computing latency, similar to <cit.>, each task can be divided into multiple parts (referred to as subtasks in this paper) with different proportions and transmitted to different servers. Each server uses an orthogonal frequency to communicate with users. Thus, the transmission rate between user i and server j is r_i,j = b log_2(1 + p_i,j h_i,j / (∑_k≠ i^N p_k,j h_k,j + σ^2)), where b is the bandwidth, p_i,j is the transmit power for user i to transmit a subtask to server j, σ^2 is the noise power, and h_i,j is the channel gain between user i and server j. Since different users transmit subtasks with various sizes and transmission rates, to fully exploit the computing resources of edge servers and reduce the computing delay, each server needs to determine the amount of computing resources used to process a specific subtask, and the server computing latency can be calculated as T_i,j^com = x_i,j d_i c / f_i,j, where x_i,j is the offloading proportion of the subtask from user i to server j, c is the constant computing factor that determines the number of central processing unit cycles required to compute one bit of data, and f_i,j is the computing resource allocated to process the subtask offloaded from user i to server j. Therefore, the task computing delay minimization problem is formulated as min_𝐱,𝐩,𝐟 ∑_i=1^N ∑_j=1^M d_i x_i,j / ( b log_2(1 + p_i,j h_i,j / (∑_k≠ i^N p_k,j h_k,j + σ^2)) ) + x_i,j d_i c / f_i,j, (3) s.t. x_i,j ≥ 0, ∀ i∈{1,⋯,N}, ∀ j∈{1,⋯,M}, (3a) p_i,j ≥ 0, ∀ i∈{1,⋯,N}, ∀ j∈{1,⋯,M}, (3b) f_i,j ≥ 0, ∀ i∈{1,⋯,N}, ∀ j∈{1,⋯,M}, (3c) ∑_j=1^M x_i,j = 1, ∀ i∈{1,⋯,N}, (3d) ∑_j=1^M p_i,j ≤ p_max, ∀ i∈{1,⋯,N}, (3e) ∑_i=1^N f_i,j ≤ f_j^s, ∀ j∈{1,⋯,M}, (3f) where the objective function (3) minimizes the sum of transmission delay and server computing latency over all user tasks. Constraints (3a)-(3c) require the task offloading proportions, transmit powers, and computing resources allocated to users to be non-negative. 
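Before turning to the remaining constraints, a quick numerical sanity check of objective (3) may be useful. The following NumPy sketch (ours, not part of the paper) evaluates the total delay for feasible random allocations; the array names and toy parameter values are our own.

```python
import numpy as np

def total_delay(x, p, f, h, d, b=1.0, sigma2=1e-3, c=1.0):
    """Objective of problem (3): total transmission plus computing delay.

    x, p, f, h : (N, M) arrays -- offloading proportions, transmit powers,
                 allocated computing resources, channel gains.
    d          : (N,) task sizes.
    """
    # Interference seen at server j by user i: sum_{k != i} p_{k,j} h_{k,j}.
    interference = (p * h).sum(axis=0, keepdims=True) - p * h
    rate = b * np.log2(1.0 + p * h / (interference + sigma2))  # r_{i,j}
    t_tx = x * d[:, None] / rate                               # transmission delay
    t_cp = x * d[:, None] * c / f                              # computing delay
    return float((t_tx + t_cp).sum())

# Feasible random allocations for 3 users and 2 servers (p_max = f_j^s = 1).
rng = np.random.default_rng(0)
N, M = 3, 2
h, d = rng.uniform(0.1, 1.0, (N, M)), rng.uniform(0.1, 1.0, N)
x = rng.dirichlet(np.ones(M), size=N)     # rows sum to 1, satisfying (3d)
p = rng.dirichlet(np.ones(M), size=N)     # rows sum to p_max = 1, satisfying (3e)
f = rng.dirichlet(np.ones(N), size=M).T   # columns sum to f_j^s = 1, satisfying (3f)
print(total_delay(x, p, f, h, d))
```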
Constraint (3d) guarantees that each task is fully transmitted to the servers for processing, and (3e) ensures that a user may split its transmission power across all servers as long as the total does not exceed the maximum power p_max. Constraint (3f) guarantees that, for each server, the sum of computing resources allocated to all users cannot exceed its total resources. §.§ Graph Modeling of MEC System To effectively reduce the total task processing delay, Problem (3) is modeled as a graph link-weight regression problem, where users and servers are modeled as graph nodes and the wireless channels between users and servers are modeled as graph links. The node feature matrix 𝐙={𝐙^u,𝐙^s} consists of the user node feature matrix 𝐙^u∈ℝ^N×1, given by 𝐙^u_(i,:)=[d_i], and the server node feature matrix 𝐙^s∈ℝ^M×1, given by 𝐙^s_(j,:)=[f_j^s]. Since in Problem (3) users need to allocate subtask sizes and transmit power to servers while servers need to allocate computing resources to users, the resource allocation is bidirectional for each user-server pair. As a consequence, the problem graph is modeled as a bidirectional graph whose adjacency feature matrix 𝐀∈ℝ^(N+M)×(N+M) is given by 𝐀_(i,j) = h_i,j if (i∈𝒱_u∧ j∈𝒱_s)∨(i∈𝒱_s∧ j∈𝒱_u), and 0 otherwise, where 𝒱_u and 𝒱_s are the node sets of users and servers. As illustrated in Fig. <ref>, the optimization variables in Problem (3) are modeled as the link weights ℰ={ℰ^u,ℰ^s}, where ℰ^u∈ℝ^N×M×2 is given by ℰ^u_(i,:,:)=[[x_i,1,p_i,1],[x_i,2,p_i,2],⋯,[x_i,M,p_i,M]] and ℰ^s∈ℝ^M×N is given by ℰ^s_(j,:)=[f_1,j,f_2,j,⋯,f_N,j]. With the above notation, Problem (3) can be rewritten as min_ℰ ∑_i=1^N ∑_j=1^M d_i ℰ^u_(i,j,0) / ( b log_2(1 + ℰ^u_(i,j,1)𝐀_(i,j) / (∑_k≠ i^N ℰ^u_(k,j,1)𝐀_(k,j) + σ^2)) ) + d_i ℰ^u_(i,j,0) c / ℰ^s_(j,i), (4) s.t. ℰ ≽ 0, (4a) ∑_j=1^M ℰ^u_(i,j,0) = 1, ∀ i∈{1,⋯,N}, (4b) ∑_j=1^M ℰ^u_(i,j,1) ≤ p_max, ∀ i∈{1,⋯,N}, (4c) ∑_i=1^N ℰ^s_(j,i) ≤ f_j^s, ∀ j∈{1,⋯,M}, (4d) where (4a) constrains x, p, f to be non-negative, which is equivalent to constraints (3a)-(3c), and (4b), (4c), (4d) correspond to (3d), (3e), (3f) in Problem (3), imposing the same conditions. § GNN BASED LINK WEIGHT REGRESSION FOR MEC RESOURCE ALLOCATION §.§ Structure of Proposed GNN To effectively extract the features of the MEC system, the proposed LOGNN 𝒢 is based on the graph attention network (GAT) architecture, in which the influence of each neighbor on a given node is computed as an attention value <cit.>. The details of the proposed LOGNN are m_i,j = AGG(𝐙_i,𝐙_j), 𝐀_i,j≠0, ς_i = ϕ(𝐙_i, ρ({α_i,j m_i,j : j∈𝒩(i)})), α_i,j = exp(LeakyReLU([W_1ς_i || W_2ς_j])) / ∑_k∈𝒩(i) exp(LeakyReLU([W_1ς_i || W_2ς_k])), [p_i,j, x_i,j] = W_3[ς_i, ς_j], i∈𝒱^u ∧ j∈𝒱^s, f_i,j = W_4[ς_j, ς_i], i∈𝒱^u ∧ j∈𝒱^s, where 𝒱^u and 𝒱^s are the node sets of users and servers, 𝒩(i) is the set of adjacent nodes of node i, and LeakyReLU(x) equals x when x is larger than 0 and ϵx otherwise, with ϵ a factor between 0 and 1. m_i,j is the message passed from neighbor node j to node i, and AGG(·) is a trainable aggregation network that extracts the features of neighboring nodes. ρ(·) is an aggregation function that compresses the messages from all adjacent nodes into a vector, usually set to max(·) or mean(·), α_i,j is the attention value that evaluates the influence of adjacent node j on the current node i, and ϕ is a trainable neural network that updates the node features from the initial input features 𝐙_i to the output features ς_i. 
W_1, W_2, W_3, and W_4 are matrices of trainable parameters. W_1 and W_2 are used to calculate the attention value α_i,j, W_3 determines the link weights p_i,j and x_i,j from user i to server j, and W_4 determines the link weight f_i,j from server j to user i. §.§ Unsupervised Learning-Based Efficient Training Method Since the objective function (4) is differentiable, we adopt an unsupervised learning-based training method for LOGNN, where the gradient of the objective G(ℰ|𝐙^u,𝐙^s,𝐀) with respect to the LOGNN parameters θ is obtained by the chain rule, and the parameters θ are then updated as θ ← θ - lr ∇_θ G(ℰ|𝐙^u,𝐙^s,𝐀), where lr is the learning rate. Through this unsupervised training method, the LOGNN is trained without optimal solutions as labels, which are challenging and costly to obtain for such a non-convex problem. Moreover, the proposed unsupervised training method outperforms the commonly used actor-critic reinforcement learning training method in both convergence speed and performance <cit.>. §.§ Theoretical Analysis of Proposed LOGNN Method Algorithmic scalability is paramount in determining whether a trained GNN can directly optimize resource allocation for dynamic MEC systems with fluctuating numbers of edge servers and users. However, employing differently sized GNNs and retraining them is impractical due to exorbitant retraining costs. The proposed LOGNN-based resource management method demonstrates superior scalability compared to the conventional node-output GNN, as LOGNN retains adaptability in the decision-space dimension, making it well suited for dynamic MEC systems with varying numbers of edge servers and users. In essence, the number of users served by each server dictates the dimension of the feasible decision space for allocating computing resources. Given a server allocating f computing resources to n users, the feasible decision space resembles an n-dimensional cube with side length f, resulting in a space of size f^n. Conversely, the GNN aggregation network (the function ϕ(·) above) constrains each node's output feature dimension to a fixed number m, which corresponds to the number of neurons in the output layer <cit.>. A Softmax function is usually employed to limit the sum of resources allocated to users, ensuring it does not exceed f. Thus, the node-output GNN's output space forms an m-dimensional cube with sides of length f, amounting to a space of size f^m. When the user count exceeds m, the node-output GNN fails to identify the optimal solution, as its decision-space dimension falls short of the feasible decision space. Likewise, the node-output GNN struggles to manage users' offloading and transmit power allocation when the number of servers surpasses the number of output neurons in the aggregation network. As for LOGNN, the output link features also have a fixed size; however, the number of links increases alongside the numbers of servers and users. By connecting each server to all users, the link-output GNN's output dimension matches the user count, allowing it to address resource allocation for an arbitrary number of users. Similarly, LOGNN enables users to make offloading and power allocation decisions with an arbitrary number of servers. § SIMULATION RESULTS AND DISCUSSION §.§ Simulation Settings In this part, simulation experiments are conducted to evaluate the performance of the proposed LOGNN-based MEC resource management method. 
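Before turning to the simulation settings, the label-free update above can be made concrete with a short sketch (ours, not the authors' released code): the differentiable delay objective serves directly as the training loss, so gradient descent on it requires no labels. For simplicity the sketch optimizes raw allocation logits projected onto the constraints by softmax; with LOGNN, the same loss would instead be backpropagated into the network parameters θ. All names and values are illustrative.

```python
import torch

def delay_loss(x, p, f, h, d, b=1.0, sigma2=1e-3, c=1.0, eps=1e-9):
    """Differentiable total-delay objective (4), used directly as the training loss."""
    interference = (p * h).sum(dim=0, keepdim=True) - p * h
    rate = b * torch.log2(1.0 + p * h / (interference + sigma2))
    return (x * d[:, None] / (rate + eps) + x * d[:, None] * c / (f + eps)).sum()

# Toy demonstration: optimize raw allocation logits by gradient descent on the
# delay itself (no labels needed).
torch.manual_seed(0)
N, M = 3, 2
h, d = torch.rand(N, M) + 0.1, torch.rand(N) + 0.1
logits = [torch.zeros(N, M, requires_grad=True) for _ in range(3)]
opt = torch.optim.SGD(logits, lr=0.1)
for step in range(200):
    x = torch.softmax(logits[0], dim=1)   # offloading proportions, rows sum to 1
    p = torch.softmax(logits[1], dim=1)   # transmit power, rows sum to p_max = 1
    f = torch.softmax(logits[2], dim=0)   # computing resources, columns sum to f_j^s = 1
    loss = delay_loss(x, p, f, h, d)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))  # total delay decreases over the unsupervised updates
```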
Similarly to <cit.>, the channel gains are generated randomly with h_i,j∼U(0,1), the package size of users' tasks are randomly generated with d_i∼U(0,1), and the computing resources of servers are randomly generated with f_j^s∼U(0,1), where U(·,·) is the uniform distribution, and p_max is set to 1. We consider the following benchmarks for comparison. ∙MLP(DI): A multi-layer perceptron (MLP) proposed in <cit.> is used to extract features of MEC, which is pre-trained on a specific number of edge servers and users since the dimension of input and output for MLP is a fixed number. When the number of nodes in MEC exceeds the number of pre-trained nodes, the MLP can only optimize the performance of a subset of the nodes and randomly allocates 𝐱,𝐩,𝐟 to other nodes. ∙MLP(TR): Whenever the number of nodes changes, a new MLP is trained and deployed to cope with the change in the dimension of the input-output space. ∙ GA: Genetic algorithm is an algorithm that uses the mechanism of biological evolution and is widely used to solve non-convex optimization problems <cit.>. In the simulation, the proposed LOGNN is pre-trained using numerous data of different numbers and locations of edge servers and users. In the inference procedure, the pre-trained LOGNN is directly used to optimize the MEC task offloading and resource allocation without any re-training. All algorithms are run on the graphic processing unit of the Tesla A100. More hyper-parameters of LOGNN and other compared algorithms are shown in Table <ref>. §.§ Comparison on Convergence of Different Training Methods To effectively evaluate the proposed unsupervised training method, we employ three different methods to train the LOGNN and MLP models. These methods include the default unsupervised learning method, the supervised learning method, and the actor-critic-based RL training method <cit.>. The labels in supervised learning for LOGNN(Sup) and MLP(Sup) are obtained by the GA method. In contrast, the RL training method employs an actor net to optimize resource allocation, while a critic net provides the update gradient for the actor net. As label generation latency is considered as part of the training delay, the supervised training method has the largest training delay for each training epoch, as illustrated in Table <ref>. Additionally, the supervised training method has worse convergence performance than the proposed unsupervised training method on the training set, as shown in Fig. <ref>. This can be attributed to the fact that the performance of an NN trained using supervised learning depends heavily on the quality of the training labels. Since the GA cannot precisely provide optimal solutions for supervised labels, the performance of supervised training cannot be guaranteed. Moreover, the supervised training method has a larger performance gap than the unsupervised counterpart on the test set due to the fact that the supervised training method is easy to overfit the training set. Despite its faster training speed in one epoch, the RL-based method fails to converge. This is due to that the RL-based method requires the joint training of two NNs, with the performance of the critic network determining the accuracy of the updated direction of the actor net. Moreover, the output of the actor net determines the input distribution of the critic net. Therefore, even a slight perturbation can lead to non-convergence of the RL training, as evidenced by the results in Table <ref> and Fig. <ref>. 
The proposed unsupervised training method outperforms supervised and RL training methods both in training speed and convergence speed, which suggests that when the environment changes dramatically, the unsupervised method can be leveraged to fine-tune the NN with low training costs, even in devices with limited computing resources. §.§ Performance Evaluation on Scalability Table <ref> shows the performance of diverse methods across varying numbers of edge servers and users, with user count N consistently double that of server count M. Owing to MLP's lack of scalability, directly employing a fixed-size MLP with unaltered parameters only optimizes resource allocation for a subset of edge servers and users, with the remainder, allocated randomly. Consequently, MLP(DI)'s performance proves unstable and consistently inferior to alternative methods. In contrast, LOGNN consistently delivers optimal results, barring instances where M=2,3, in which GA outperforms LOGNN. As a near-brute search optimization method, GA can locate the optimal solution through an exhaustive evolutionary search when optimization problem complexity remains low with small M values. However, as the problem size expands, GA struggles to pinpoint the objective function's optimal value, whereas LOGNN more effectively extracts MEC features to inform better decision-making. It is important to note that LOGNN's performance dips when M values are low but stabilizes when M > 15, exhibiting minimal fluctuations. GNN is better suited for extracting graph structure features rather than individual node features. For smaller M values, edge node features like user task size d_i and server computational resources f_j^s substantially influence the optimal solution. Yet, as M increases, cooperation and interference between graph nodes bear greater weight on optimization problem solutions. Given GNN's proficiency in extracting graph structural features, it achieves superior performance. Furthermore, LOGNN's output solutions maintain stable and similar objective values, as graph structures exert a more significant influence on problems than specific node features. Notably, LOGNN consistently outperforms MLP-based approaches, even when MLP(TR) employs an architecture tailored for specific server and user counts and is trained for that particular MEC scale. This highlights that, for a controller overseeing a dynamic MEC, utilizing the proposed LOGNN directly addresses changes in user and server numbers, ensuring high performance without resorting to time-consuming training or fine-tuning. §.§ Performance Evaluation on Computational Efficiency Previous research works on MEC usually divide the processing delay of edge tasks into two main components, i.e., transmission delay and computation delay. However, task processing delay more generally refers to the time between task generation and completion. Thus, the inference delay of the resource allocation algorithm should be considered as a third component of task processing delay. To provide a comprehensive evaluation of different algorithms, we compare the task processing delays comprising algorithm inference delay, transmission delay, and computing delay in Table <ref>. Although the sum of task transmission and computing delay optimized by GA is smaller than LOGNN when M is less than 4, the inference delay of GA is remarkably high. As shown in Table <ref>, the summing delay of task processing and algorithm inference of GA is consistently much larger than LOGNN. 
Therefore, owing to its high performance and small inference delay, the proposed LOGNN achieves the best overall performance among all methods. This highlights the potential of LOGNN for optimizing real-time resource allocation in MEC and offers valuable insights into its efficacy. § CONCLUSION In this paper, we have proposed a LOGNN-based resource management method for MEC systems to address the scalability challenges in both the input-space and decision-space dimensions caused by varying numbers of servers and users. Furthermore, we have exploited a label-free unsupervised training method to reduce the training cost of LOGNN. Simulation results have demonstrated that the unsupervised LOGNN can efficiently and flexibly optimize task offloading and resource allocation in MEC with an arbitrary number of servers and users, with high performance and fast convergence. By implementing this scheme in MEC, we can effectively handle dynamic changes in the number of users and servers while significantly reducing the edge task processing delay. For future research, we will study how to use the pre-trained GNN to optimize MEC in general without fine-tuning. § ACKNOWLEDGMENT This work was supported by the National Key Research and Development Program of China (2020YFB1807700), the National Natural Science Foundation of China (NSFC) under Grants No. 62071356 and No. 62201414, and the Fundamental Research Funds for the Central Universities under Grant ZYTS23175.
http://arxiv.org/abs/2306.02808v1
20230605120012
Deep Active Learning with Structured Neural Depth Search
[ "Xiaoyun Zhang", "Xieyi Ping", "Jianwei Zhang" ]
cs.LG
[ "cs.LG", "cs.AI" ]
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS Shell et al.: Deep Active Learning with a Structured Neural Depth Search Deep Active Learning with Structured Neural Depth Search Xiaoyun Zhang, Yiping Xie, Jianwei Zhang Xiaoyun Zhang and Yiping Xie contributed equally to this work. (Corresponding author: Jianwei Zhang.) Xiaoyun Zhang, Yiping Xie, and Jianwei Zhang are with the College of Computer Science, Sichuan University, Chengdu 610065, China (e-mail: [email protected]; [email protected]; [email protected]). ================================================================================================================================================================================================================================================================================================================================================================================================ Previous work optimizes traditional active learning (AL) processes with incremental neural network architecture search (Active-iNAS) based on data complexity change, which improves the accuracy and learning efficiency. However, Active-iNAS trains several models and selects the model with the best generalization performance for querying the subsequent samples after each active learning cycle. The independent training processes lead to an insufferable computational budget, which is significantly inefficient and limits search flexibility and final performance. To address this issue, we propose a novel active strategy with the method called structured variational inference (SVI) or structured neural depth search (SNDS) whereby we could use the gradient descent method in neural network depth search during AL processes. At the same time, we theoretically demonstrate that the current VI-based methods based on the mean-field assumption could lead to poor performance. We apply our strategy using three querying techniques and three datasets and show that our strategy outperforms current methods. Neural architecture search, deep active learning, variational inference. § INTRODUCTION Deep learning (DL) has recently received promising results on various tasks due to the powerful learning capacities of deep neural networks (DNNs). DNNs always have complex structures and their performances highly rely on a large number of annotated samples. However, collecting a large labeled dataset is rather challenging in practice, especially for the medical domains where the annotations are expensive and slow to produce <cit.>. Active learning (AL) focuses on alleviating the reliance on expensive tremendous annotated datasets <cit.>. It assumes that different samples have different contributions to the DNN performance, and tries to select the most deterministic samples to construct the training set. Traditional AL methods mostly leverage a fixed model to query samples, which is ineffective to fit the increasing scale of the labeled training set. At the early stage of an AL method, an overparameterized model is unlikely to achieve a good generalization performance with a small-scale training set. In contrast to previous AL methods with fixed architectures, Geifman et al. <cit.> proposed an active learning method with incremental neural architecture search (Active-iNAS) to dynamically increase the neural capacity during the AL cycles. As shown in Fig. <ref>, Active-iNAS starts with a small neural capacity at the early stages. 
Then, an incremental neural architecture search (NAS) <cit.> is performed to monotonically non-decrease the neural capacity. It maintains a well-generalized model to ensure query effectiveness. Active-iNAS can overwhelmingly outperform traditional AL methods with fixed architectures. However, Active-iNAS trains several models and selects the model with the best generalization performance for querying the subsequent samples after each AL cycle. The independent training processes of Active-iNAS lead to an insufferable computational budget, which is significantly inefficient and limits search flexibility and final performance. This paper summarizes the above approaches into a new problem setting, called Neural Depth Search (NDS) in AL context, automatically searching an optimal neural architecture depth for the given task during active learning processes. NDS is a crucial complement to NAS for its ability to adjust neural capacity automatically. In this problem setting, we aim to answer the following key question: How to design an NDS method in AL processes to improve search efficiency and promote model performance? To address the aforementioned issue, Variational Inference (VI) is utilized to learn the neural architecture depth uncertainty aiming at achieving better search flexibility and final performance, which approximates a parameterized posterior over the architectural depth <cit.>. The variational parameters <cit.> are optimized by minimizing the Kullback-Liebler divergence (KL-divergence) between the approximated posterior and the true one. At last, the approximated posterior can reflect the probabilistic reasoning over neural architecture depth for different tasks. Furthermore, we propose a novel AL strategy with the method called Structured Variational Inference (SVI) or Structured Neural Depth Search (SNDS) based on VI, which makes the neural weights dependent on the architecture depth and improves the fidelity of the posterior approximation and final search performance. The differences of these methods are shown in Fig.  <ref>. Specifically, we implement SVI using Pseudo-uniform sampling at the depth for training the sharing weights. We call our learning approach Active-SNDS and apply Active-SNDS using three querying techniques (random, uncertainty entropy <cit.> and coreset <cit.>) and three datasets (CIFAR10 <cit.>, CIFAR100 <cit.>, and MNIST <cit.>). The findings are exciting: 1) SVI can search more reasonable neural network depth according to the current dataset scale and complexity than iNAS and current VI-based method; 2) Active-SNDS outperforms Active-iNAS and VI-based stragety using different query methods in active learning processes. Our contributions can be summarized as follows: * We theoretically demonstrate that the mean-field assumption of current VI-based methods can cause the rich-get-richer problem, i.e., the shallow networks would dominate the search and be outputted as the results. * We proposed the SVI method that can restore the mean-field assumption and search for more reasonable neural network depth according to the current dataset scale and complexity than Active-iNAS and VI-based method; * We propose the Active-SNDS method address problems in Active-iNAS. Extensive experiments demonstrate Active-SNDS outperforms Active-iNAS and VI-based stragety in active learning processes. The remainder of this paper is organized as follows. Section II reviews the related works of AL and VI-based NDS and compares them with our method. 
We present preliminaries for AL and VI-based NDS problems in Section III and propose our SVI and Active-SNDS methods in Section IV. Section V demonstrate the experimental results of NDS and AL. Finally, Section VI concludes our paper. § RELATED WORK §.§ Deep Active Learning A standard AL method is initialized with an unlabeled sample pool and a manually designed model <cit.>. The model follows a certain strategy to query the most valuable samples from the pool in cycles. The bundle of samples are labeled by an oracle and merged into the training set. The labeled training set is then used for training the model again. This process repeats until exhausting the label budget reaches the expected model performance. The query strategy, which refers to how to select the samples to be labeled, is key to the performance of an AL method <cit.>. Among them, the most popular strategy is the uncertainty-based method, which ranks all the samples with a metric called uncertainty <cit.>. A great uncertainty indicates a higher selecting priority. The Density-based methods attempt to select the core set that represents the distribution of the entire dataset <cit.>. However, previous AL methods mostly leverage a fixed model to query samples, which is ineffective to fit the increasing scale of the labeled training set. At the early stage of an AL method, an overparameterized model is unlikely to achieve a good generalization performance with a small-scale training set. To mitigate this problematic aspect, the discussion of architecture optimization in active learning was considered within the context of deep neural models. Huang et al. <cit.> demonstrated that active learning performance can be improved using a proper choice of (fixed) hyperparameters in the context of linear models. Geifman et al. <cit.> proposed Active-iNAS to dynamically increase the neural capacity during the AL cycles, which further enhances active learning flexibility and accuracy. Our work differs from the above work by combining VI to learn the neural architecture depth uncertainty instead of increasing the basic block thereby it could achieve better search flexibility and final performance. §.§ The current VI-based methods for NDS There are only a few studies related to NDS. Dikov et al. <cit.> proposed to estimate the neural architecture width and depth through BNN. Antorán et al. <cit.> proposed to search the depth of residual networks in the efficient one-shot NAS framework, where the neural weights and architecture are jointly learned. Antorán et al. <cit.> estimated the depth uncertainty through the probabilistic reasoning over a sequential structure of feed-forward networks. Nazaret et al. <cit.> propose a novel VI method to approximate the posterior over the neural weights and depth of an infinitely deep network. Most of the current methods <cit.> approximate the architecture depth posterior with VI based on the mean-field assumption <cit.>, where the neural weights and depth variables are independent. The mean-field assumption can limit the approximation fidelity and introduce the rich-get-richer problem, i.e., the shallow networks would dominate the search. Different from the previous method, a structured variational inference study is proposed to restore the mean-field assumption, significantly improving the search effectiveness. Same to the proposed method, Nazaret et al. <cit.> also utilize the SVI to impose the dependence between the neural weights and depth. 
The difference is that we theoretically demonstrate how the rich-get-richer problem is caused by the mean-field assumption. And Pseudo-uniform sampling method is proposed to implement the SVI. § PRELIMINARIES In this section, we present the preliminaries including deep active learning, searching the optimal neural depth during active learning, and VI. §.§ Deep Active Learning In active learning setting, a large unlabeled data pool U = {𝒳, 𝒴}^N_u can be obtained, where 𝒳 denotes the sample space, 𝒴 denotes the label space but it is unknown before labeled by oracle. Given an initialized labeled dataset L = {x_i, y_i}^N_init_i=1, we can train a deep model f∈ℱ: 𝒳→𝒴, where ℱ is hypothesis space of the model. Then, we cyclically sample new bunch of samples ℬ^* from U to be labeled: ℬ^*=ℬ⊆𝒰max a(ℬ, f) where a is the query strategy. ℬ^* would be labeled by oracle and added to the L. The query cycle repeats until reaching an ideal model performance or the label budget is exhausted. The goal of AL is to obtain a higher model performance with a lower label budget |L|. From the AL query process Eq. (<ref>), we could find the model's generation performance and its induced query effectiveness are critical to the final performance of AL. However, according to the statistical learning theory <cit.>, with probability at least 1 - δ, the generation gap can be bounded as: R(f) - R̂_L(f) ≤ O( √( d_VClog(N_t/d_VC) - logδ/N_t) ), where R(f) is the expected risk on the unseen sample such as U, R̂_L(f) is the empirical risk on the training set L, d_VC is the VC-Dimension of ℱ, and N_t is the dynamical sample number of L that increases linearly with the AL cycles. Let the term under the root, d_VClog(N_t/d_VC) - logδ/N_t, denoted by G, we have its partial derivative on d_VC: ∂ G/∂ d_V C=1/N_t(log N_t / d_V C-1) When logN_t/d_VC=1, we can always get the minimum of the generation gap bound. This means that d_VC should grow linearly with the increasing N_t. As neural capacity is positively related to d_VC <cit.>, Geifman et al. <cit.> proposed to monotonically non-decrease the neural capacity during the AL process. §.§ Searching the optimal neural depth during active learning A truncated Poisson distribution has been used to model the neural depth <cit.>, denoted as 𝒫̅(λ, d_min, d_max), where λ is the mean and also the variational parameter, and d_min and d_max denote the minimum and maximum of the available neural depths. We can obtain the probability at each depth d : 𝒫̅(X=d)=𝒫(X=d)/∑_j=d_min^d_max𝒫(X=j) . We set d_min to be 1 and d_max is calculated by the following formula: d_max = m(q^0.95(λ)), where {q^δ(λ) | q ∈𝒫} represents the distribution Poisson(λ) truncated to its δ-quantile and m(q) := max{ℓ|q(ℓ) > 0}. §.§ Variational Inference Neural depth search is challenging for two factors. On the one hand, deep neural networks are always over-parameterized and would like to overfit the training data <cit.>. The model's performance on the training set can not determine the optimal architecture depth. On the other hand, navigating the depth search through the generalization performance is impractical because it requires a large validation set for stable evaluation, which is not feasible for the annotation-expensive domains like medical images <cit.>. The current mean-field VI-based studies <cit.> inherit the advantages of BNN such as modeling uncertainty and eliminating overfitting. 
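To make the depth prior concrete, the short sketch below (our own helper functions, assuming d_min = 1 as above) computes the truncated Poisson probabilities 𝒫̄(X = d) and realizes d_max as the 0.95-quantile of the underlying Poisson distribution.

```python
import numpy as np
from scipy.stats import poisson

def d_max_from_quantile(lam, delta=0.95):
    """Adaptive maximum depth: the delta-quantile of Poisson(lam), at least 1."""
    return max(1, int(poisson.ppf(delta, lam)))

def truncated_poisson_probs(lam, d_min=1, d_max=None):
    """Normalized probabilities P(X = d) over the candidate depths d_min..d_max."""
    if d_max is None:
        d_max = d_max_from_quantile(lam)
    depths = np.arange(d_min, d_max + 1)
    pmf = poisson.pmf(depths, lam)
    return depths, pmf / pmf.sum()

depths, probs = truncated_poisson_probs(lam=5.0)
print(depths)                       # candidate depths 1 .. d_max
print(probs.round(3))               # renormalized Poisson mass over those depths
print(int(depths[probs.argmax()]))  # most probable depth, close to lambda
```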
Given an observed data set 𝒟 = {x_i, y_i}^N_i=1 and a L-layer network, we consider two kinds of variables, i.e., the neural weights w and the architecture depth d. We define a prior distribution over the architecture depth p(d, w) and a likelihood p(𝒟|d, w) are defined. The posterior can be computed through exact inference p(d, w|𝒟) = p(d, w)p(𝒟|d, w)/p(𝒟). However, the posterior is usually difficult to calculate because the evidence p(𝒟) is intractable. Variational inference approximates a parameterized surrogate distribution q(d, w) to the posterior p(d, w|𝒟). Previous VI-based methods are all implicitly under the mean-field assumption, where the neural weights w and the architecture depth d are independent. q(d, w) can be factorized as: q(d, w) = q_λ(d)q_μ, σ(w), where λ is the parameter of the architecture depth distribution, and the neural weights are defined as Gaussian distribution: w ∼𝒩(μ, diag(σ^2)), μ, σ∈ℝ^|w|. In the following context, the variational parameters <cit.> λ, μ, and σ are sometimes omitted for easy presentation. Then, VI aims at finding an optimal member from the variational family q^*(·) closest to the exact posterior p(d, w|𝒟) in Kullback-Leibler divergence (KL-divergence) <cit.>. KL(q(d, w)p(d, w|𝒟)) = -ELBO + logp(𝒟) , where ELBO is an acronym for the evidence lower bound, which is a tractable surrogate objective of KL-divergence. As the term logp(𝒟) does not depend on the variational parameters, maximizing the ELBO is equivalent to minimizing the KL-divergence. VI turns the inference problem into an optimization problem. In the NDS problem setting, we focus on the optimization of the architecture depth parameter λ. Following <cit.>, a point estimation is treated over the neural weights w. ELBO is approximated by the Monte Carlo (MC) sampling over q_λ(d) and maximum a posterior (MAP) estimation over w. Following <cit.>, we directly optimize neural weights w instead of their variational parameters μ and σ. As a result, λ and w can be optimized simultaneously using stochastic gradient descent: ℓ(λ, w) = -ELBO = 𝔼_q_λ(d)[ -∑_i=1^Nlogp(y_i;f(x_i, w_1:d)) ]     +KL(q_λ(d)p(d)) + KL(q(w)p(w)), where B is the batch size and f(x_i, w_1:d) denotes the output of the d-depth network that only have the neural weights from 1-th to d-th layer. For the last line of Eq. (<ref>), the first term is the expectation of cross-entropy (CE) loss under the surrogate distribution over the architecture depth, the second term can be calculated through MC sampling over d, and the third term has a closed form solution and can be computed analytically <cit.>. § PROPOSED METHODS In this section, we theoretically demonstrate that the mean-field assumption of previous VI-based methods can cause the rich-get-richer problem and propose our SVI and Active-SNDS methods to improve the search flexibility and final performance. §.§ The Dark Side of Mean-Field Assumption The mean-field assumption makes it easy to capture any marginal density of the variables and estimate the ELBO <cit.>. However, it can limit the fidelity of the posterior approximation and cause the rich-get-richer problem, i.e., the shallow layers' weights have more chance to be trained and the shallow architectures tend to be outputted. ∂ℓ(λ, w)/∂ w =∂/∂ w( ∑_d^q_λ(d)ℓ_ce(D, d, w_1:d) + ℓ_kl), where ℓ_ce(D, d)=∑_i=1^N-logp(y_i;f(x_i, w_1:d)) is the CE loss of the d-depth network on D, and it only relies on the neural weights from 1-th to d-th layer w_1:d. ℓ_kl denotes the sum of the two KL terms and it is independent with d. 
ℓ_kl is usually much smaller than the CE loss that is calculated with the sum mode. We have the gradients of l-th layer: ∂ℓ(λ, w)/∂ w_l = ∂/∂ w_l( ∑_d=l^Lq_λ(d)ℓ_ce(D, d, w_1:d) + ℓ_kl). If we consider that ℓ_ce(D, d) under different depths are equal and ℓ_kl is much smaller compared to the CE loss, the ratio of the last layer's gradients to the first layer's could be roughly estimated as ∂ℓ(λ, w)/∂ w_L / ∂ℓ(λ, w)/∂ w_1 = q_λ(L). In other words, the shallow layers will converge much faster than the latter ones. Then, the CE loss of the shallow networks would be smaller than the deep ones. Moreover, we consider the learning of λ: ∂ℓ(λ, w)/∂λ =∂/∂λ( ∑_d^q_λ(d)ℓ_ce(D, d, w_1:d) + ℓ_kl). As the CE loss of the shallow networks is smaller, the sampling probability q_λ(d) over the shallow network would converge to be larger. This entails the rich-get-richer problem <cit.> and navigates the search toward shallow networks as shown with an intuitive example in Fig. <ref>. §.§ Neural Depth Search with Structured Variational Inference The mean-field assumption in previous VI-based methods limits the fidelity of the posterior approximation and introduces local optima in neural depth search. In this paper, we propose to relax the independence between the neural weights and the architecture depth through structured variational inference.Fig. <ref> gives an intuitive example of the mean-field VI and SVI. As a result, the weights of networks with different depths can be customized, which can greatly improve the performance of variational inference in NDS. Specifically, SVI models the neural weights to be dependent on the architecture depth in Eq. (<ref>): q(d, w) = q_μ, σ(w|d)q_λ(d), The loss function over λ and w in Eq. (<ref>) can be reformulated as: ℓ(λ, w)=𝔼_q(w|d)q(d)[logq(w|d)+logq(d)     -logp(w|d)-logp(d) - logp(D|d, w)] =KL(q_λ(d)p(d)) + 𝔼_q_λ(d)[KL(q(w|d)p(w|d)) -∑_i=1^Nlogp(y_i;f(x_i, w|d))]. It could be found that the main differences between Eq. (<ref>) and Eq. (<ref>) are: 1) the KL term of the neural weights are under the expectation over q_λ(d); 2) the neural weights are conditioned on the depth. Specifically, we propose a Pseudo-uniform sampling method to implement Eq. (<ref>). Motivated by some one-shot NAS methods which have already an explored effective way to eliminate the unfair advantages of early dominant operations by uniformly sampling candidate architectures for training the sharing weights <cit.>, we propose to use early uniform sampling of the networks with each depth to address the drawback of mean-field assumption in NDS. The training of the sharing weights could be optimized as: w^* =*arg min𝔼_d∼Γ[ℓ_ce(D, d)], where Γ denotes the uniform distribution over the depth choice set. In this paper, our model is a growing unbounded depth neural network <cit.>, so we cannot simply use uniform sampling as in the one-shot NAS method. In order to make the whole sampling uniform in the process of model growth, we propose a pseudo-uniform sampling method that uses the inverse of the frequency of sampling in each layer as the probability of sampling in the next training. And then substitute w^*_1:d as w|d in Eq. (<ref>). §.§ Active Learning with Structured Neural Depth Search The Active-SNDS technique is described in Algorithm <ref> and works as follows. Given the unlabeled data U and the initial labeled data L, it is used to train the model and obtain the unlabeled data for annotation in the active learning process. The whole process consists of two nested loops. 
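(As a brief aside before detailing the two loops: the pseudo-uniform sampling rule above can be read as inverse-frequency sampling over the current candidate depths. The sketch below is our own minimal interpretation of that description, not the authors' implementation; newly added depths start with zero counts and therefore receive the largest sampling probability until they catch up.)

```python
import numpy as np

class PseudoUniformDepthSampler:
    """Draws training depths with probability proportional to 1 / (1 + times sampled)."""
    def __init__(self, d_max):
        self.counts = np.zeros(d_max, dtype=np.int64)  # counts[d-1]: times depth d was drawn

    def grow(self, new_d_max):
        """Extend the candidate depth set as the network grows deeper."""
        extra = new_d_max - len(self.counts)
        if extra > 0:
            self.counts = np.concatenate([self.counts, np.zeros(extra, dtype=np.int64)])

    def sample(self, rng=np.random):
        weights = 1.0 / (1.0 + self.counts)            # rarely drawn depths get larger weight
        d = rng.choice(len(self.counts), p=weights / weights.sum()) + 1
        self.counts[d - 1] += 1
        return d

sampler = PseudoUniformDepthSampler(d_max=4)
draws = [sampler.sample() for _ in range(400)]
sampler.grow(6)                                        # the model grew deeper during AL
draws += [sampler.sample() for _ in range(400)]
print(np.bincount(draws)[1:])                          # per-depth counts end up roughly even
```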
The outer loop is the active learning cycle, and the inner loop is the network training cycle. The depth parameter λ and network parameters w_1:d_max are trained using pseudo-random sampling in the first 1/3 of the network training cycles and mean-field VI loss functions in the last 2/3 of the cycles. During the depth update, basic blocks are generated and stacked by the layer generator l to join the current network model <cit.>. When the network training period is over, the acquisition function is used to collect b labeled data from U for annotation and join the labeled data L. When the active learning cycle is complete, exit the outer loop. More detailed algorithm implementation will be given in Section V. § EXPERIMENTS In this section, we will show the performance comparison of our proposed methods Active-SNDS, mean-field VI and Active-iNAS, as well as two fixed networks Resnet-18 and Resnet-34 in depth search and active learning, and make a comprehensive analysis of the corresponding results. §.§ Datasets We used three datasets to evaluate our proposed method, namely MNIST, CIFAR-10 and CIFAR-100. * MNIST: The MNIST database was constructed from NIST's Special Database 3 and Special Database 1 which contain binary images of handwritten digits. It has a training set of 60,000 examples and a test set of 10,000 examples including 0-9 ten handwritten digit classes. And the digit images we used in the MNIST set were originally selected and experimented with by Chris Burges and Corinna Cortes using bounding-box normalization and centering, so the size of each image is 28*28 . * CIFAR-10: The CIFAR-10 is labeled subsets of the 80 million tiny images dataset. They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. This dataset consists of 60000 32*32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. * CIFAR-100: The CIFAR-100 is also labeled subset of the 80 million tiny images dataset. It is just like the CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. In depth search related experiments, we used all training samples in the corresponding dataset; In the active learning related experiments, the total number of training samples we ended up using was only a part of three datasets, with the final number of samples used on MNIST being 2300 and the final sample size of 45000 on CIFAR-10 and CIFAR-100. §.§ Implementation Details Model Details. In the paper, we used two search strategies, mean-filed VI and iNAS, and two fixed network structures, Resnet-18 and Resnet-34, to compare with our proposed method SVI. Among them, the search space of mean-filed VI and SVI is an infinite depth network, while the search space of iNAS is set to A(B, 1, 1) to A(B, 12, 5) (A(B, i, j): the network structure represented in iNAS consists of stacks; B is a fixed neural block; i is the number of blocks in each stack; j is the number of stacks). In the experiments, based on the Resnet architecture <cit.>, we used the basic block as the basic building block of the search strategies and treated it as a layer in the network depth. Specifically, the basic block contains two convolutional layers of size 3*3 followed by a batch normalization and then a ReLU activation. A residual connection is added before the activation of the second convolutional layer. The overall network architectures of these search methods are shown in Fig. <ref>. 
* iNAS: This approach divides the depth of the network into stacks, each formed by a certain number of block stacks. When searching for the best network depth, the method iNAS uses A_0 = {A(B, i, j), A(B, [ij/(j+1)]+ 1, j + 1), A(B, i + 1, j)}, each time from the three candidate network structures, selects the model with the best performance on the validation set as the current best model A(B, i, j), and then continues to use the above formula to generate candidate networks, and then select the optimal network until the training iterations are reached. In the first block of each stack (except the first stack), the generated network halves the feature map size and doubles the number of channels. In the final part of the generated network is one classification block, which contains an average pooling layer that reduces the spatial dimension to 1*1, and a fully connected classification layer followed by softmax. * SVI and mean-filed VI: Both methods use the basic block as a layer of the network, searching for the best depth of the network based on the infinite depth of the search space. The general search strategy is to train the parameter λ_d that determines the depth of the network, making the corresponding network achieve better results. The relationship between the parameter λ_d and the network depth is as follows, with λ_d as the parameter of truncated Poisson distribution, the minimum value of the probability distribution reaching the specified value (the hyperparameter is set to 0.95 in the experiment) is obtained as the current network depth, and the network output is the weighted sum of the predicted output of each layer and their respective probabilities corresponding to the truncated Poisson distribution. Since λ_d corresponds to the largest probability value, it can be approximated that the number of layers represented by the integer near it is the best number of layers found. Compared to backbone networks that use a fixed number of layers, infinite depth avoids the performance impact caused by artificially limiting the search space. Unlike iNAS, the networks corresponding to these two methods halve the feature map size and double the number of channels at layers 4 and 9, while for the output layer of each layer of the network, the kernel size of the mean pooling operation contained in it is (4,4) ((3,3) on MNIST). Experimental Settings. In all experiments we have some basic same hyperparameter settings. For example, we trained all models using stochastic gradient descent (SGD) with a batch size of 128 and momentum of 0.9. For this optimizer, we use an initial learning rate of 0.01, with a technique called Cosine rampdown <cit.> adjusting it per epoch. A weight decay of 1e-4 was used, and two data augmentation was applied containing four pixels translates and horizontal flips. Further detailed experimental parameter settings are given below for depth search and active learning, respectively. * Depth Search: In the depth search related experiments, we used all the training samples of the corresponding dataset for each of the three datasets. Since the complexity of the datasets varies, we adjusted the training epoch accordingly. On MNIST we trained 50 epochs, while on CIFAR-10 and CIFAR-100 we trained 150 epochs and 200 epochs, respectively. 
Since the search method of iNAS is to test a fixed three network architectures per active learning cycle, we performed 10 active learning cycles with every candidate network being trained 10 epochs per cycle to achieve the purpose of depth search using all training samples. * Active Learning: In the active learning related experiments, we used Random Sampling, Coreset and Uncertainty Entropy three querying strategies. And for each active learning experiment, we set the active learning cycle to 12. Then for different datasets and search methods, our parameter settings had some adjustments. Datasets. For MNIST dataset, since our network architecture is relatively complex for this task, the initial number of samples for model training was 100, and the amount of data queried per active learning cycle was 200. For the CIFAR-10 and CIFAR-100 datasets, we used an initial sample size of 2000 and a query number of 2000 each cycle, and when the total number of training samples reaches 10000, we updated the query number to 5000. Search methods. For both types of experiments on fixed network structures and Active-iNAS, we trained for 50 epochs per active learning cycle. We used Resnet-18 and Resnet-34 as our fixed architectures and search space in the range A(Br, 1, 1) to A(Br, 12, 5) was used in the method of Active-iNAS. For experiments on Active-SNDS, we updated the network depth related parameters with a separate optimizer, this depth optimizer also used stochastic gradient descent (SGD) with an initial learning rate 0.05 and when training epochs became 2/3 times the total number of current epochs, the learning rate was adjusted to 0.03. For experiments on mean-field VI, we used the same depth optimizer, but the corresponding learning rate was fixed. §.§ Comparison of depth search experiment results In this subsection, we compare the depth search results of SVI, mean-field VI and iNAS, as well as the performance of these three methods and Resnet-18 and Resnet-34 using all datasets on three datasets: MNIST, CIFAR-10 and CIFAR-100. Taking CIFAR-10 as an example, we can see the results of the CIFAR-10 shown in Fig. <ref> and Table <ref>, the green curve represents the fixed number of layers network around 13, 14 layers presents the highest accuracy, and our proposed method SVI searches the value of parameter λ_d is 12.98, close to the optimal number of traversal layers, and compared with the results of mean-field VI search, it can be seen that its value of λ_d of 10.26 is significantly less than the optimal depth, so we can know that SVI effectively alleviates the rich-get-richer problem. Combining the results of the MNIST and CIFAR-100 in Table <ref>, we can see that our method has a shallower number of search layers on MNIST and a deeper number of search layers on CIFAR-100, which also corresponds to the actual task complexity. On MNIST, the accuracy of SVI is slightly higher than Resnet-18, Resnet-34 and iNAS, but slightly lower than mean-field VI, because the classification task on the MNIST dataset is simple, and all network structures in the experiment are sufficient to obtain better training results. And the reason for the poor performance of the iNAS method on CIFAR-10 and CIFAR-100 is that the method starts from the shallow layer of search, the search range of each time is limited, and the network parameters of the early training cannot participate in the later training process, so the performance is very poor when the training cycle is insufficient. 
§.§ Comparison of active learning experiment results The results of active learning algorithms are usually presented as a curve of model performance against the amount of labeled training data. We use accuracy in our experiments to represent the performance of the model. For example, Fig. <ref> shows the results obtained by the three search methods and the two fixed architectures for classifying MNIST images under the three querying strategies (Random Sampling, Coreset and Uncertainty Entropy). In red, we see the curve for Active-SNDS. The results of Active-iNAS, mean-field VI, Resnet-18 and Resnet-34 appear in light blue, green, dark blue and light purple, respectively. The X-axis corresponds to the labeled points consumed, starting from k = 100 (the initial seed size) and ending with 2300 for MNIST (starting from k = 2000 and ending with 45000 for the CIFAR tasks). We show our experimental results for MNIST, CIFAR-10 and CIFAR-100 in Fig. <ref>, Fig. <ref> and Fig. <ref>. We first analyze the results for CIFAR-10 (Fig. <ref>). For all three query strategies, Active-SNDS performs best throughout the entire range. Moreover, as shown in Tables <ref>, <ref> and <ref>, comparing the network depths (and parameter counts) found by Active-SNDS with those found by Active-iNAS shows that Active-SNDS uses fewer layers, i.e., a smaller value of λ_d; because Active-SNDS continues training deeper networks on top of the previously trained shallower network, it can train the model well and obtain better performance even when the number of training epochs is small. It can also be seen that the actual network depth reached by Active-SNDS can be larger, because we do not limit the search space, so the depth of the backbone network can keep increasing as the task requires. Comparing the search results of Active-SNDS and mean-field VI, Active-SNDS searches for a deeper depth and obtains better results, further confirming that Active-SNDS alleviates the rich-get-richer problem. Since we treat the basic block as one layer, Resnet-18 and Resnet-34 can be regarded as 8-layer and 16-layer networks, respectively. Combining the per-cycle network depths and the performance of the five methods, we see that Active-SNDS tends to generate simple networks with few layers in the early stages, when training data are scarce, and increases network complexity in later stages as the amount of training data grows. We now turn to the MNIST (Fig. <ref>) and CIFAR-100 (Fig. <ref>) experiments. Since the task on MNIST is famously simple, all curves in Fig. <ref> show good performance, but the advantage of Active-SNDS is still visible; another interesting point is that only a small amount of data is needed to reach almost the best performance. CIFAR-100 is the opposite of MNIST: the more complex classification task makes network training harder, so the overall accuracy decreases, and the advantage of our method over the other architectures becomes more pronounced.
Since Resnet-18 and Resnet-34 are trained for the same 50 epochs in each active learning cycle, this limited training budget prevents the more complex Resnet-34 from outperforming the simpler Resnet-18, which explains the results presented in Fig. <ref>. § CONCLUSION In this paper, we propose a novel active learning strategy, Active-SNDS, and an efficient neural depth search method, SVI. We theoretically demonstrate that the mean-field assumption of previous VI-based methods can cause the rich-get-richer problem, and we address it with SVI. Experimental results across numerous datasets and acquisition methods show that SVI achieves better incremental network depth search performance and that Active-SNDS outperforms Active-iNAS and the direct VI-based learning method in active learning processes. In summary, our proposed methods demonstrate significant improvements in deep active learning.
http://arxiv.org/abs/2306.17702v1
20230630142906
Why Deep Models Often cannot Beat Non-deep Counterparts on Molecular Property Prediction?
[ "Jun Xia", "Lecheng Zhang", "Xiao Zhu", "Stan Z. Li" ]
cs.LG
[ "cs.LG", "cs.CE" ]
Why Deep Models Often Cannot Beat Non-deep Counterparts on Molecular Property Prediction? Jun Xia*, Lecheng Zhang*, Xiao Zhu*, Stan Z. Li (Westlake University, Hangzhou, China; *equal contribution). Correspondence: [email protected], [email protected]. Molecular property prediction (MPP) is a crucial task in the drug discovery pipeline, which has recently gained considerable attention thanks to advances in deep neural networks. However, recent research has revealed that deep models struggle to beat traditional non-deep ones on MPP. In this study, we benchmark 12 representative models (3 non-deep models and 9 deep models) on 14 molecule datasets. Through the most comprehensive study to date, we make the following key observations: (1) Deep models are generally unable to outperform non-deep ones; (2) The failure of deep models on MPP cannot be solely attributed to the small size of molecular datasets; what matters is the irregular pattern of molecular data; (3) In particular, tree models using molecular fingerprints as inputs tend to perform better than other competitors. Furthermore, we conduct extensive empirical investigations into the unique patterns of molecule data and the inductive biases of various models underlying these phenomena. § INTRODUCTION Molecular Property Prediction (MPP) is a critical task in drug discovery, aimed at identifying molecules with desirable pharmacological and ADMET (absorption, distribution, metabolism, excretion, and toxicity) properties. Machine learning models have been widely used in this fast-growing field, with two types of models being commonly employed: traditional non-deep models and deep models. In the non-deep group, molecules are fed into traditional machine learning models such as Random Forests and Support Vector Machines in the form of computed or handcrafted molecular fingerprints <cit.>. The other group utilizes deep models to extract expressive representations for molecules in a data-driven manner. Specifically, a Multi-Layer Perceptron (MLP) can be applied to computed or handcrafted molecular fingerprints; sequence-based neural architectures including Recurrent Neural Networks (RNNs) <cit.>, 1D Convolutional Neural Networks (1D CNNs) <cit.>, and Transformers <cit.> are exploited to encode molecules represented as Simplified Molecular-Input Line-Entry System (SMILES) strings <cit.>. Later, it was argued that molecules can be naturally represented as graphs, with atoms as nodes and bonds as edges, which inspired a line of work that leverages this structural inductive bias for better molecular representations <cit.>. The key advancement underlying these approaches is Graph Neural Networks (GNNs), which consider graph structure and attribute features simultaneously by recursively aggregating node features from neighborhoods <cit.>. More recently, researchers have incorporated 3D conformations of molecules into their representations for better performance, although pragmatic considerations such as computational cost, alignment invariance, and uncertainty in conformation generation limit the practical applicability of these models <cit.>. We summarize the widely-used molecular descriptors and their corresponding models in our benchmark, as shown in Figure <ref>. Despite the fruitful progress, previous studies <cit.> have observed that deep models struggled to outperform non-deep ones on molecules.
However, these studies neither consider the emerging powerful deep models (e.g., Transformer <cit.>, SphereNet <cit.>) nor explore various molecular descriptors (e.g., 3D molecular graph). Also, they did not investigate the reasons why deep models often fail on molecules. To narrow this gap, we present the most comprehensive benchmark study on molecular property prediction to date, with a precise methodology for dataset inclusion and hyperparameter tuning. Our empirical results confirm the observations of previous studies, namely that deep models generally cannot outperform traditional non-deep counterparts. Moreover, we observe several interesting phenomena that challenge the prevailing beliefs of the community, which can guide optimal methodology design for future studies. Furthermore, we transform the original molecular data to observe the performance changes of various models, uncovering the unique patterns of molecular data and the differing inductive biases of various models. These in-depth empirical studies shed light on the benchmarking results. § BENCHMARKING RESULTS. In this section, we present a benchmark on 14 molecular datasets with 12 representative models. §.§ Observations Table <ref> documents the benchmark results for various models and datasets, from which we can make the following Observations: Observation 1. Deep models underperform non-deep counterparts in most cases. As can be observed in Table <ref>, non-deep models rank as the top one on 10/14 datasets. On some datasets such as MUV, QM7, and BACE, three non-deep models can even beat any deep models. Observation 2. It is irregular data patterns, NOT solely the small size of molecular datasets to blame for the failure of deep models! Intuitively, many previous works <cit.> pointed out that the small size of molecular datasets could be a bottleneck for deep learning models. Here, we provide a second voice to such pre-dominant beliefs with empirical evidence. As shown in Table <ref>, all the non-deep models can outperform any deep ones on some larger-scale datasets (e.g., MUV and QM 7). However, in some small datasets (e.g., ClinTox and ESOL), some deep models can beat partial non-deep ones. Therefore, what matters is the irregular molecule data pattern, not solely the dataset size. We will provide an in-depth analysis to the unique molecule data pattern in Sec. <ref>. Observation 3. Tree models (XGB and RF) exhibit a particular advantage over other models. In the experiments shown in Table <ref>, we can see that the tree-based models consistently rank among the top three on each dataset. Additionally, tree models rank as the top one on 8/15 datasets. We will explore why tree models are well-suited for molecular fingerprints in Sec. <ref>. § WHY ABOVE PHENOMENA WOULD OCCUR? In this section, we attempt to understand which characteristics of molecular data lead to the failure of powerful deep models. Also, we aim to understand the inductive biases of tree models that make them well-suited for molecules, and how they differ from the inductive biases of deep models. Explanation 1. Unlike image data, molecular data patterns are non-smooth. Deep models struggle to learn non-smooth target functions that map molecules to properties. We design two experiments to verify the above explanation, i.e., increasing or decreasing the level of data smoothing in the molecular datasets. Firstly, we transform the molecular data by smoothing the labels based on similarities between molecules. 
Specifically, let 𝒟 denote the molecular dataset and (x_i, y_i)∈𝒟 be i-th molecule and its label, we smooth the target function as follows, y_i = ∑_x_j∈𝒩_x_is(x_i, x_j)y_j/∑_x_j∈𝒩_x_is(x_i, x_j), where s(·,·) denotes the Tanimoto coefficient of the extended connectivity fingerprints (ECFP) between two molecules that can be considered as their structural similarity. 𝒩_x_i is the k-nearest neighbor set of x_i (including x_i) picked from the whole dataset based on the structural similarities. y_i denotes the label after smoothing. We smooth all the molecules in the dataset in this way and use the smoothed label y_i to train the models. The results are shown in Figure <ref>, where `0-smooth' denotes the original datasets. `10-smooth' and `20-smooth' mean k=10 and k=20, respectively. As can be observed, the performance of deep models improves dramatically as the level of dataset smoothing increases, and many deep models including MLP, GCN, and AFP can even beat non-deep ones after smoothing. These phenomena indicate that deep models are more suitable for the smoothed datasets. Secondly, we decrease the level of data smoothing using the concept of activity cliff <cit.> from chemistry, which means a situation where small changes in the chemical structure of a drug lead to significant changes in its bioactivity. We provide an example activity cliff pairs in Figure <ref>. Apparently, the target function of activity cliffs that map molecules to the activity values is less smooth than normal molecular datasets. We then evaluate the models on the activity cliff datasets <cit.>. The test set contains molecules that are chemically similar to those in the training set but exhibit either a large difference in bioactivity (cliff molecules) or similar bioactivity (non-cliff molecules). As shown in Table <ref>, the non-deep models consistently outperform deep ones on these activity cliff datasets. Furthermore, it is worth noting that deep models exhibit a similar level of performance on both non-cliff and cliff molecules, while non-deep models experience significant changes in performance when transitioning from non-cliff to cliff molecules. These phenomena indicate that deep models are less sensitive to subtle structural changes and struggle to learn non-smooth target functions compared with tree models, especially the activity cliff cases. Our explanation is consistent with the conclusions in deep learning theory <cit.>, i.e., deep models struggle to learn high-frequency components of the target functions. However, tree models can learn piece-wise target functions, and do not exhibit such bias. Our explorations uncover several promising avenues to enhance deep models' performance on molecules: smoothing the target functions or improving deep models' ability to learn the non-smooth target functions. Explanation 2. Deep models mix different dimensions of molecular features, whereas tree models make decisions based on each dimension of the features separately. Typically, features in molecular data carry meanings individually. Each dimension of molecular fingerprints often indicates whether a certain substructure is present in the molecule; each dimension of nodes/edges features in molecular graph data indicates a specific characteristic of the atoms/bonds (e.g., atom/bond type, atom degree). 
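Returning to the label-smoothing transformation of Explanation 1 above, a minimal sketch of the procedure is given below. It assumes RDKit is available for Morgan/ECFP fingerprints and Tanimoto similarities; the fingerprint radius, bit length and the choice k=10 are illustrative rather than the exact settings used in our experiments.

import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def ecfp(smiles, radius=2, n_bits=2048):
    # Extended-connectivity (Morgan) fingerprint of one molecule.
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), radius, nBits=n_bits)

def smooth_labels(smiles_list, labels, k=10):
    # Replace each label by a Tanimoto-weighted average over its k most similar molecules.
    fps = [ecfp(s) for s in smiles_list]
    labels = np.asarray(labels, dtype=float)
    smoothed = np.empty_like(labels)
    for i, fp_i in enumerate(fps):
        sims = np.array(DataStructs.BulkTanimotoSimilarity(fp_i, fps))  # self-similarity = 1
        nbrs = np.argsort(-sims)[:k]   # k nearest neighbors, x_i itself included
        smoothed[i] = np.dot(sims[nbrs], labels[nbrs]) / sims[nbrs].sum()
    return smoothed

Training the models on the output of smooth_labels with k=10 or k=20 corresponds to the `10-smooth' and `20-smooth' settings reported in Figure <ref>.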
To verify the above explanation, we mix the different dimensions of the molecular features x_i ∈ℝ^d using an orthogonal transformation before feeding them into the various models, x̃_i = 𝒬 x_i, where 𝒬∈ℝ^d× d is an orthogonal matrix and x̃_i is the molecular feature after transformation. Kindly note that the meaning of x_i depends on the input molecular descriptors in the experiments. Specifically, for SVM, XGB, RF, and MLP, x_i denotes the molecular fingerprints; for GNN models, x_i can denote the atom features and bond features in the molecular graphs, i.e., we apply orthogonal transformations to both the atom features and bond features. As can be observed in Figure <ref>, the performance of tree models deteriorates dramatically and falls behind most deep models after the orthogonal transformation. This is because each dimension of x̃_i is a linear combination of all the dimensions of x_i according to the matrix-vector product rule. In other words, the molecular features after orthogonal transformation no longer carry meanings individually, accounting for the failure of tree models that make decisions based on each dimension of the features separately. The learning style of tree models is more suitable for molecular data because only a handful of features (e.g., certain substructures) are most indicative of molecular properties. On the other hand, the performance decrease of deep models is less significant, and most deep models can beat tree models after the transformations. We explain this observation as follows. Without loss of generality, we assume that a linear layer of a deep model can map the original molecular feature x_i to the label y_i, y_i = W^⊤ x_i + b, where W and b denote the parameter matrix and the bias term of the linear layer, respectively. We then aim to learn a new linear layer mapping the transformed feature x̃_i to the label y_i, y_i = W̃^⊤ x̃_i + b̂ = W̃^⊤𝒬 x_i + b̂, where W̃ and b̂ denote the parameter matrix and the bias term of the new linear layer, respectively. To achieve the same results as with the original features, we only have to learn W̃ = 𝒬 W and b̂ = b, which is always possible because 𝒬 is invertible (𝒬^-1 = 𝒬^⊤ for an orthogonal matrix). Therefore, applying the orthogonal transformation to molecular features barely impacts the performance of deep models. The empirical results in Figure <ref> confirm this point, although some performance changes are observable due to uncontrollable random factors. This explanation inspires us not to mix the molecular features before feeding them into models. § THE PERFORMANCE OF VARIOUS MODELS ON THE ORTHOGONALLY TRANSFORMED DATASET § EXPERIMENTAL SETUPS Fingerprints ⟼ SVM, XGB, RF, and MLP. Following common practice <cit.>, we feed the concatenation of various molecular fingerprints, including 881 PubChem fingerprints (PubchemFP), 307 substructure fingerprints (SubFP), and 206 MOE 1-D and 2-D descriptors <cit.>, to the SVM, XGB, RF, and MLP models to comprehensively represent molecular structures, with some pre-processing procedures to remove features (1) with missing values; (2) with extremely low variance (variance below 0.05); (3) with a high correlation (Pearson correlation coefficient above 0.95) with another feature. The retained features are normalized to zero mean and unit variance. Additionally, considering that traditional machine learning models (SVM, RF, XGB) cannot be directly applied to multi-task molecular datasets, we split each multi-task dataset into multiple single-task datasets and use each of them to train the models.
Finally, we report the average performance of these single tasks. SMILES strings ⟼ CNN, RNN, and TRSF. We adopt the 1D CNNs from a recent study <cit.>, which include a single 1D convolutional layer with a step size equal to 1, followed by a fully connected layer. As for the RNN, we use a 3-layer bidirectional gated recurrent units (GRUs) <cit.> with 256 hidden vector dimensions. Additionally, we use the pre-trained SMILES transformer <cit.> with 4 basic blocks and each block has 4-head attentions with 256 embedding dimensions and 2 linear layers. The SMILES are split into symbols (e.g., `Br', `C', `=', `(',`2') and then fed into the transformer together with the positional encoding <cit.>. 2D Graphs ⟼ GCN, MPNN, GAT, and AFP. As in previous studies <cit.>, we exhaustively utilized all readily available atom/bond features in our 2D graph-based descriptors. Specifically, we have incorporated 9 atom features, including atom symbol, degree, and formal charge, using a one-hot encoding scheme. In addition, we included 4 bond features, such as type, conjugation, ring, and stereo. The resulting encoded graphs were then fed into GCN, MPNN, GAT, and AFP models. Further details on the graph descriptors used in our experiments can be found in <cit.>. 3D Graphs ⟼ SPN. We employ the recently proposed SphereNet <cit.> for molecules with 3D geometry. Specifically, for quantum mechanics datasets (QM7 and QM8) that contain 3D atomic coordinates calculated with ab initio Density Functional Theory (DFT), we feed them into SphereNet directly. For other datasets without labeled conformations, we used RDKit <cit.>-generated conformations to satisfy the request of SphereNet. Datasets splits, evaluation protocols and metrics, hyper-parameters tuning. Firstly, we randomly split the training, validation, and test sets at a ratio of 8:1:1. And then, we tune the hyper-parameters based on the performance of the validation set. Specifically, we select the optimal hyper-parameters set using the Tree of Parzen Estimators (TPE) algorithm <cit.> in 50 evaluations. Due to the heavy computational overhead, GNNs-based models on the HIV and MUV datasets are in 30 evaluations; all the models on the QM7 and QM8 are in 10 evaluations. And then, we conduct 50 independent runs with different random seeds for dataset splitting to obtain more reliable results, using the optimal hyper-parameters determined before. Similarly, GNNs-based models on the HIV and MUV datasets are in 30 evaluations; all the models on the QM7 and QM8 are in 10 evaluations. Following MoleculeNet benchmark <cit.>, we evaluate the classification tasks using the area under the receiver operating characteristic curve (AUC-ROC), except the area under the precision curve (AUC-PRC) on MUV dataset due to its extreme biased data distribution. The performance on the regression task are reported using root mean square error (RMSE) or mean absolute error (MAE). kindly note that we report the average performance across multi-tasks on some datasets because they contain more than one task. Additionally, to avoid the overfitting issue, all the deep models are trained with an early stopping scheme if no validation performance improvement is observed in successive 50 epochs. We set the maximal epoch as 300 and the batch-size as 128. § RELATED WORK In this section, we elaborate on various molecular descriptors and their respective learning models. 
§.§ Fingerprints-based Molecular Descriptors Molecular fingerprints (FPs) serve as one of the most important descriptors for molecules. Typical examples include Extended-Connectivity Fingerprints (ECFP) <cit.> and PubChemFP <cit.>. These fingerprints encode the neighboring environments of heavy atoms in a molecule into a fixed bit string with a hash function, where each bit indicates whether a certain substructure is present in the molecule. Traditional models (e.g., tree or SVM-based models) and MLPs can take these fingerprints as `raw' input. However, the high-dimensional and sparse nature of FPs introduces additional efforts for feature selection when they are fed into certain models. Additionally, it is difficult to interpret the relationship between properties and structures because the hash functions are non-invertible. §.§ Linear Notation-based Molecular Descriptors Another option for molecules is linear notations, among which SMILES <cit.> is the most frequently-used one owing to its versatility and interpretability. In SMILES, each atom is represented as a respective ASCII symbol; Chemical bonds, branching, and stereochemistry are denoted by specific symbols. However, a significant fraction of SMILES strings does not correspond to chemically valid molecules. As a remedy, a new language named SELF-referencIng Embedded Strings (SELFIES) for molecules was introduced in 2020 <cit.>. Every SELFIES string corresponds to a valid molecule, and SELFIES can represent every molecule. Naturally, RNNs, 1D CNN, and Transformers are powerful deep models for processing such sequences <cit.>. However, the poor scalability of the sequential notations and the loss of spatial information limit the performances of these approaches. §.§ 2D and 3D Graph-based Molecular Descriptors Molecules can be represented with graphs naturally, with nodes as atoms and edges as chemical bonds. Initially, <cit.> first adopted convolutional layers to encode molecular graphs to neural fingerprints. Following this work, <cit.> employs the atom-based message-passing scheme to learn expressive molecular graph representations. To complement the atom's information, <cit.> utilized both the atom's and bonds' attributes, and MPNN <cit.> generalized it to a unified framework. Also, multiple variants of the MPNN framework are developed to avoid unnecessary loops (DMPNN <cit.>), to strengthen the message interactions between nodes and edges (CMPNN <cit.>), to capture the complex inherent quantum interactions of molecules (MGCN <cit.>), or take the longer-range dependencies (Attentive FP <cit.>). More recently, some hybrid architectures <cit.> of GNNs and transformers are emerging to capture the topological structures of molecular graphs. Additionally, given that the available labels for molecules are often expensive or incorrect <cit.>, the emerging self-supervised pre-training strategies <cit.> on graph-structured data are promising for molecular graph data <cit.>, just like the overwhelming success of pre-trained language models in natural language processing community <cit.>. The 3D molecular graph is composed of nodes (atoms), and their positions in 3D space and edges (bonds). The advantage of using 3D geometry is that the conformer information is critical to many molecular properties, especially quantum properties. In addition, it is also possible to directly leverage stereochemistry information such as chirality given the 3D geometries. 
Recently, multiple works <cit.> have developed message-passing mechanisms tailored for 3D geometries, which enable the learned molecular representations to follow certain physical symmetries, such as equivariance to translations and rotations. However, the calculation cost, alignment invariance, uncertainty in conformation generation, and unavailable conformations of target molecules limited the applicability of these models in practice. § DISCUSSION AND CONCLUSION In this paper, we perform a comprehensive benchmark of representative models on molecular property prediction. Our results reveal that traditional machine learning models, especially tree models, can easily outperform well-designed deep models in most cases. These phenomena can be attributed to the unique patterns of molecular data and different inductive biases of various models. Specifically, the target function mapping molecules to properties are non-smooth, and some small changes can incur significant property variance. Deep models struggle to learn such patterns. Additionally, molecular features carry meanings individually and deep models would undesirably mix different dimensions of molecular features. Our study leaves an open question for future research: Can our findings and methods be generalized to other AIDD tasks including drug-target interactions (DTIs) prediction <cit.>, drug-drug interactions (DDIs) prediction <cit.>, and protein representation learning <cit.>?
http://arxiv.org/abs/2306.02009v1
20230603054426
Weight Bank Addition Photonic Accelerator for Artificial Intelligence
[ "Wenwen Zhang", "Hao Zhang" ]
physics.optics
[ "physics.optics", "cs.ET" ]
1Department of Electric Engineering and Computer Science, University of British Columbia, BC, Canada 2Department of Electric Engineering and Computer Science, University of Victoria, BC, Canada § INTRODUCTION § METHODS §.§ Device fabrication and characteristics §.§ Experimental setup and testing § WEIGHT BANK DESIGN § CONCLUSION § ACKNOWLEDGMENTS Fabrication support was provided via the Natural Sciences and Engineering Research Council of Canada (NSERC) Silicon Electronic-Photonic Integrated Circuits (SiEPIC) Program and the Canadian Microelectronics Corporation (CMC). Devices were fabricated at Advanced Micro Foundry (AMF), an A*STAR foundry in Singapore.
http://arxiv.org/abs/2306.01949v1
20230602231331
The disruption index is biased by citation inflation
[ "Alexander M. Petersen", "Felber Arroyave", "Fabio Pammolli" ]
cs.DL
[ "cs.DL", "cs.SI", "econ.GN", "physics.soc-ph", "q-fin.EC" ]
Department of Management of Complex Systems, Ernest and Julio Gallo Management Program, School of Engineering, University of California, Merced, California 95343, USA Politecnico Milano, Department of Management, Economics and Industrial Engineering, Via Lambruschini, 4/B, 20156, Milan, Italy The disruption index is biased by citation inflation Fabio Pammolli July 31, 2023 ===================================================== [1] Send correspondence to: [email protected] A recent analysis of scientific publication and patent citation networks by Park et al. (Nature, 2023) suggests that publications and patents are becoming less disruptive over time. Here we show that the reported decrease in disruptiveness is an artifact of systematic shifts in the structure of citation networks unrelated to innovation system capacity. Instead, the decline is attributable to `citation inflation', an unavoidable characteristic of real citation networks that manifests as a systematic time-dependent bias and renders cross-temporal analysis challenging. One driver of citation inflation is the ever-increasing lengths of reference lists over time, which in turn increases the density of links in citation networks, and causes the disruption index to converge to 0. A second driver is attributable to shifts in the construction of reference lists, which is increasingly impacted by self-citations that increase in the rate of triadic closure in citation networks, and thus confounds efforts to measure disruption, which is itself a measure of triadic closure. Combined, these two systematic shifts render the disruption index temporally biased, and unsuitable for cross-temporal analysis. The impact of this systematic bias further stymies efforts to correlate disruption to other measures that are also time-dependent, such as team size and citation counts. In order to demonstrate this fundamental measurement problem, we present three complementary lines of critique (deductive, empirical and computational modeling), and also make available an ensemble of synthetic citation networks that can be used to test alternative citation-based indices for systematic bias. A measure of disruption was recently developed and applied to empirical citation networks <cit.>. This bibliometric measure, denoted by CD, quantifies the degree to which an intellectual contribution p (e.g. an research publication or invention patent) supersedes the sources cited in its reference list, denoted by the set {r}_p. As defined, CD_p is measured according to the local structure of the subgraph G_p={r}_p∪ p ∪{c}_p comprised of the focal node p, nodes belonging to its reference list {r}_p, and the set of nodes citing either p or any member of {r}_p, denoted by {c}_p. If future intellectual contributions cite p but do not cite members of {r}_p, then it is argued that p plays a disruptive role in the citation network. However, the critical issue we highlight is the following: as the length r_p = |{r}_p| of the reference list increases, so does the likelihood that one of those papers is highly cited. Hence, CD_p is a biased measure because reference lists have increased dramatically over time, and so too have the number of citations that highly-cited papers accrue <cit.> – both phenomena being bi-products of citation inflation <cit.>. Citation inflation (CI) refers to the systematic increase in the number of links introduced to the scientific (or patent) citation network each year. 
CI is analog to monetary inflation <cit.>, whereby as a government prints more money the sticker price of items tends to go up, rendering the impression that the real cost of goods are increasing (to what degree this relationship is valid depends on wage growth and a number of other economic factors). By analogy, it might also be tempting to attribute the increased volume of scientific production to techno-social productivity increases, yet this explanation neglects the persistent growth rate of the inputs (e.g. researchers and research investment) that are fundamental to the downstream production of outputs (e.g. research articles, patents. Indeed, secular growth underlies various quantities relevant to the study of the scientific endeavor, from national expenditures in R&D to the population size of researchers <cit.> and the characteristic number of authors per research publication <cit.> – all quantities that have persistently grown over the last century. Nevertheless, the degree to which such growth affects the quantitative evaluation of research outcomes is under-appreciated, and can manifest in inconsistent measurement frameworks and metrics. Indeed, the number of citations an article receives is not solely attributable to novelty or prominence of the research, but also depends on the the population size and citing norms of a discipline, and quite fundamentally, the nominal production rate of links in the citation network, among other considerations <cit.>. Hence, there is real need to distinguish nominal counts versus real values in scientific evaluation, which in the analysis of citation networks requires accounting for when each citation was produced, and in further extensions, how the credit is shared <cit.>. So what are the main sources of CI in scientific citation networks and what are the real-world magnitudes of their effects? Figure <ref>(a) illustrates how CI arises through the combination of longer reference lists, denoted by r(t), compounded by an increasing production volume, n(t). By way of real-world example, prior calculation of the growth rate of total number of citations produced per year based upon the entire Clarivate Analytics Web of Science citation network estimated that the total volume C(t) ≈ n(t)r(t) of citations generated by the scientific literature grows exponentially with annual rate g_C = g_n + g_r = 0.033 + 0.018 = 0.051 <cit.>. Hence, with the number of links in the citation network growing by roughly 5% annually, the total number of links in the citation doubles every ln(2)/g_C = 13.6 years! While the dominant contributor to CI is the growth of n(t) deriving from increased researchers and investment in science coupled with technological advances increasing the rate of manuscript production, the shift away from print towards online-only journals, and the advent of multidisciplinary megajournals <cit.>, the contribution to CI from growing reference lists alone is nevertheless substantial and varies by discipline <cit.>. By way of example, consider descriptive statistics based upon analysis of millions of research publications comprising the Microsoft Academic Graph (MAG) citation network <cit.>: in the 1960s, the average (± standard deviation) number of references per articles was r_p=9 (±17); by the 2000s, r_p increased to 23 (± 27), a 2.6-fold increase over the 50-year period – see Fig. <ref>(b). 
Meanwhile, as research team sizes – denoted by k_p, and used as a proxy for the production effort associated with a research output – increase in order to address research problems featuring greater topical and methodological breadth, there emerges a non-linear relationship between r_p and k_p showing that the modern research article is fundamentally different from those produced even a decade ago – see Fig. <ref>(c). Thus, not only does the nominal value of a citation vary widely by era, but the implications of secular growth on the topology of the citation network and thus citation-based research evaluation are profound <cit.>. A standard solution to taming variables that are susceptible to inflation is to use a deflator index, which amounts to normalizing the cross-temporal variation by way of a standardized reference point <cit.>. Another more nefarious problem is the accurate measurement of the quantitative relationship between variables that are independently growing over time, which is susceptible to omitted variable bias if the role of time is neglected. In what follows, we demonstrate the implications of CI that render CD unsuitable for cross-temporal analysis, and call into question the empirical analysis and interpretations of trends in scientific and technological advancement based upon CD <cit.>. To establish how the disruption index suffers from citation inflation and is confounded by shifts in scholarly citation practice, we employ three different approaches: deductive analysis based upon the definition of CD_p, empirical analysis of the Microsoft Academic Graph (MAG) citation network, and computational modeling of synthetic citation networks. In the latter approach, we are able to fully control the sources of the systematic bias underlying CD (namely CI), thereby demonstrating that CD follows a stable frequency distribution in the absence of CI. We conclude with research evaluation policy implications. Figure caption: `Citation inflation' attributable to the increasing number and length of reference lists. (a) Schematic illustrating the inflation of the reference supply owing to the fact that the annual publication rate n(t) (comprised of an increasing diversity of article lengths), along with the number of references per publication r(t), have grown exponentially over time t, which implies a non-stationary cross-generational flow of attribution in real citation networks. Such citation inflation cannot be controlled by way of fixed citation windows <cit.>. (b) The probability density function P_y(r_p) of the number of references per article r_p calculated for articles included in the MAG citation network grouped by the decade of publication y. Vertical dashed lines indicate the average value; vertical solid lines indicate the 90th percentile, such that only the 10% largest r_p values are in excess of this value. (c) Conditional relationship between two quantities that systematically grow over time (k_p and r_p). Note the increasing levels and slope of the relationship over the 50-year period. This relationship indicates that regressing CD_p on k_p – while omitting r_p as a covariate and thereby neglecting the negative relationship between CD_p and r_p – may lead to the confounded conclusion that CD_p decreases as k_p increases.
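The magnitudes quoted above are straightforward to verify; a few lines of arithmetic (using the empirical growth rates g_n and g_r cited earlier, and the 1960s baseline of roughly 9 references per article from the MAG statistics above) reproduce both the doubling time of the citation supply and the approximate growth of reference lists:

import numpy as np

g_n, g_r = 0.033, 0.018      # growth rates of publication volume and reference-list length
g_C = g_n + g_r              # growth rate of the total citations produced per year
print(np.log(2) / g_C)       # doubling time of the citation supply: ~13.6 years

r_1960s = 9.0                # mean references per article in the 1960s (MAG)
print(r_1960s * np.exp(g_r * 50))  # projection ~50 years later, roughly the observed ~23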
§ QUANTITATIVE DEFINITION OF CD AND A DEDUCTIVE CRITIQUE The disruption index is a higher-order network metric that incorporates information extending beyond the first-order links connecting to p – those nodes that cite p and are prospective (forward looking or diachronous), and those nodes that are referenced by p, and thus retrospective (backward looking or synchronous) <cit.>. The original definition of CD was formulated as a conditional sum across the adjacency matrix <cit.>, and was subsequently reformulated as a ratio <cit.>. According to the latter conceptualization, calculating CD_p involves first identifying three non-overlapping subsets of citing nodes, {c}_p = {c}_i∪{c}_j∪{c}_k, of sizes N_i, N_j and N_k, respectively – see Fig. <ref>(a) for a schematic illustration. The subset i refers to members of {c}_p that cite the focal p but do not cite any elements of {r}_p, and thus measures the degree to which p disrupts the flow of attribution to foundational members of {r}_p. The subset j refers to members of {c}_p that cite both p and {r}_p, measuring the degree of consolidation that manifests as triadic closure in the subnetwork (i.e., network triangles formed between p, {r}_p, {c}_j). The subset k refers to members of {c}_p that cite {r}_p but do not cite p. As such, the CD index is given by the ratio CD_p = (N_i - N_j)/(N_i + N_j + N_k), which can be rearranged as follows: CD_p = [(N_i - N_j)/(N_i + N_j)] / [1 + N_k/(N_i + N_j)] = CD_p^nok/(1 + R_k). The ratio R_k = N_k/(N_i+N_j) ∈ [0, ∞) is an extensive quantity that measures the rate of extraneous citation, whereas CD_p^nok∈ [-1,1] is an intensive quantity. The polarization measure CD_p^nok is an alternative definition of disruption that simply neglects N_k in the denominator <cit.>; for this reason, characteristic values of CD_p^nok(t) are larger and decay more slowly over time than the respective CD_p(t) values – see ref. <cit.>. Following initial criticism regarding the definition of CD_p <cit.>, other variations on the theme of CD have since been analyzed <cit.> and critiqued according to their advantages and disadvantages <cit.>. To summarize, we argue that a simple deductive explanation trumps the alternative socio-technical explanations offered <cit.> for the decline in CD calculated for publications and patents. Namely, the disruption index CD_p systematically declines, along with similar CD_p variants <cit.>, for the simple reason that CD features a numerator that is bounded and a denominator that is unbounded. More technically, the term R_k is susceptible to CI, which is entirely sufficient to explain why CD converges to 0 over time. § EMPIRICAL CRITIQUE In this section we show empirically that CD_p declines over time due to the runaway growth of R_k(t), and implicitly, r(t). While our results are based upon a single representation of the scientific citation network made openly available by the MAG project <cit.>, the implications are generalizable to citation networks featuring CI, characterized by a non-stationary number of new links introduced by each new cohort of citing items. To be specific, the citation network we analyzed is formed from the roughly 29.5× 10^6 research articles in the MAG dataset that have a digital object identifier (DOI), were published between 1945-2012, and belong to a mixture of research areas. Figure <ref>(a) shows a schematic of the sub-graph used to calculate the CD_p value for each publication.
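To make the bookkeeping behind the ratio defining CD_p explicit, the following sketch computes the index from plain sets of node identifiers; the variable names mirror N_i, N_j and N_k above, and the citing sets are assumed to have already been restricted to the chosen citation window.

def disruption_index(cites_p, cites_refs):
    # cites_p    : set of publications citing the focal paper p
    # cites_refs : set of publications citing at least one element of {r}_p
    n_i = len(cites_p - cites_refs)   # cite p but none of its references (disruptive)
    n_j = len(cites_p & cites_refs)   # cite p and its references (consolidating, triadic closure)
    n_k = len(cites_refs - cites_p)   # cite the references but not p (extraneous)
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# Toy example: two papers cite only p, one cites p and its references,
# and four cite the references while ignoring p, giving CD_p = (2 - 1) / 7.
print(disruption_index({"a", "b", "c"}, {"c", "d", "e", "f", "g"}))

The same bookkeeping makes the deductive point of this section visible: enlarging the reference list can only enlarge cites_refs, and hence n_k, driving the ratio toward 0 regardless of the sign of n_i - n_j.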
To be consistent with <cit.>, we calculate CD_p,CW(t) using a CW=5-year citation window (CW), meaning that only articles published within 5 years of p are included in the subgraph {c}_p ={c}_i∪{c}_j∪{c}_k. As such, Fig. <ref>(b) shows a decline in the average CD_5(t) that is consistent with the overall trend shown in Fig. 2 in ref. <cit.>, where the data are disaggregated by discipline; also note that Fig. 2 and ED Figs. 6 and 9 in <cit.> show that disciplines with higher publication volumes and thus more references produced (life sciences and biomedicine, and physical sciences) tend to have smaller CD_5(t) values in any given year relative to the social sciences (e.g. JSTOR), which is qualitatively consistent with our critique. We also note that while the implementation of a CW may control for right-censoring bias, it does not control CI in any precise way. By way of example, consider the impact of the CW on N_k, the number of extraneous articles that do not cite p but do cite elements of {r}_p. A CW will reduce the number of papers contributing to CD_5(t) via N_k, but it will also reduce N_i+N_j in similar proportions, leaving the ratio R_k(t) unchanged, on average. Consider a more quantitative explanation that starts by positing that N_k increases proportional to n(t)r(t), as the nodes belonging to {c}_k are unconstrained by the first-order citation network {c}_i∪{c}_j∪{r}_p. Following the same logic, N_i+N_j grows proportional to n(t). In both cases, even if the proportionality constant depends weakly on CW, the ratio R_k(t) will grow proportional to r(t). There is likely to be considerable variance in the publication-level relationship between R_k,p and r_p, because if any member of {r}_p is highly cited, then N_k is skewed towards the heavy right tail of the citation distribution. Moreover, the base number of citations associated with extreme values in the citation distribution have increased dramatically over the last half century as a result of CI, such that the number of citations C(Q | t) corresponding to the Q=99th percentile of the citation distribution increased at an annual rate of roughly 2% from roughly 55 citations in 1965 to roughly 125 citations in 2005 – see Fig. 4 in ref. <cit.>. For this reason, the term N_k introduces susceptibility to CI according to two channels. Here we focus on the channel associated with the growth of r(t), which grew at roughly the same rate as C(99 | t), growing from roughly 9 to 23 references per paper over the same period – see Fig. <ref>(c). Consequently, R_k(t) ≫ 1 for nearly the entire period of analysis and that the growth of R_k(t) is largely explained by the growth of r(t) in the empirical data – see Fig. <ref>(d). For this reason, it is more accurate to describe CD as converging to 0 as opposed to decreasing over time. In order to confirm these aggregate-level relationships at the publication level, we applied a linear regression model whereby the unit of analysis is an individual publication. The linear model specification is given by CD_p,5 = b_0 + b_k ln k_p + b_r ln r_p + b_c ln c_p + D_t + ϵ_t which controls for secular growth by way of yearly fixed-effects, denoted by D_t. The results of the ordinary least squares (OLS) estimation using the STATA 13.0 package xtreg are shown in Fig. <ref>(e), and are based upon 3 million publications with 1 ≤ k_p≤ 10 coauthors, 5 ≤ r_p≤ 50 references, and 10 ≤ c_p≤ 1000 citations that were published in the two-decade period 1990-2009. 
The independent variables are modeled using a logarithmic transform because they are each right-skewed: “LogK” corresponds to ln k_p; “LogNRefs” corresponds to ln r_p; and LogNCites corresponds to ln c_p= ln(c_i+c_j), the number of citations received by p in the 5-year window. This sample of MAG articles were used so that results are more closely comparable to Wu et al. who focus on articles with k_p ∈ [1,10] <cit.>. Results indicate a negative relationship between CD_p,5 and the number of references, consistent with our deductive argument. Figure <ref>(f) shows the marginal relationship with ln r_p, holding all covariates at their mean values, and indicates a net shift in CD of roughly -0.06 units as r_p increases by a factor of 10 from 5 to 50 total references. Similarly, Fig. <ref>(g) shows the marginal relationship with ln k_p, indicating a net shift in CD of roughly +0.01 units as k_p increases by a factor of 10 from 1 to 10 coauthors, which is in stark contrast to the relationship with opposite sign reported in ref. <cit.>. § COMPUTATIONAL CRITIQUE §.§ Generative network model featuring citation inflation and redirection We employ computational modeling to explicitly control several fundamental sources of variation, and to also explore complementary mechanisms contributing to shifts in CD over time – namely, shifts in scholarly citation practice. Our identification strategy is to growth synthetic citation networks that are identical in growth trajectory and size, but differ just in the specification of (i) r(t) and/or (ii) the rate of triadic closure denoted by β that controls the consolidation-disruption difference defining the numerator of CD. We model the growth of a citation network using a model originally developed in ref. <cit.> that applies Monte Carlo (MC) simulation to operationalize stochastic link dynamics by way of a random number generator. This model belongs to the class of growth and redirection models <cit.>, and reproduces a number of statistical regularities established for real citation networks – both structural (e.g. a log-normal citation distribution <cit.>) and dynamical (e.g., increasing reference age with time <cit.>; exponential citation life-cycle decay <cit.>). – see the Appendix Section A1 and Fig. <ref> for more information regarding the empirical validation of our generative network model. The synthetic networks constructed and analyzed in what follows are openly available <cit.> and can be used to test CD and other citation-network based bibliometric measures for sensitivity to CI and other aspects of secular growth. We construct each synthetic citation network by sequentially adding new layers of nodes of prescribed volume n(t) in each MC period t≥0 representing a publication year. Each new node, denoted by the index a, represents a publication that could in principal cite any of the other existing nodes in the network. As such, the resulting synthetic networks are representative of a single scientific community, and also lack latent node-level variables identifying disciplines, authors, journals, topical breadth or depth, etc. We seed the network with n(t=0) ≡ 30 `primordial' nodes that are disconnected, i.e. they have reference lists of size r_a≡ 0. This ensures that the initial conditions are the same for all networks generated. All nodes added thereafter have reference lists of a common prescribed size, denoted by r(t). These rules ensure there is no variation within a given publication cohort regarding their synchronous connectivity. 
To model the exponential growth of scientific production, we prescribe the number of new “publications” according to the exponential trend n(t)=n(0)exp[g_nt]. We use g_n≡ 0.033 as the publication growth rate empirically derived in prior work <cit.>. Similarly, we prescribe the number r(t) of synchronous (outgoing) links per new publication according to a second exponential trend r(t)=r(0)exp[g_rt]. For both n(t) and r(t) we use their integer part, and plot their growth in Fig. <ref>(a). We set the initial condition r(0) ≡ 25 in scenarios featuring no reference list growth (characterized by g_r=0), such that each new publication cites 25 prior articles independent of t. Alternatively, in scenarios that do feature reference list CI, we use the empirical growth rate value, g_r≡0.018 and r(0)≡ 5. We then sequentially add cohorts of n(t) publications to the network over t=1...T ≡ 150 periods according to the following link-attachment (citation) rules that capture the salient features of scholarly citation practice: Network growth rules * System Growth: In each period t, we introduce n(t) new publications, each citing r(t) other publications by way of a directed link. Hence, the total number of synchronous (backwards) citations produced in period t is C(t)=n(t)r(t), which grows exponentially at the rate g_C = g_n + g_r. * Link Dynamics: illustrated in the schematic Fig. <ref>(b). For each new publication a ∈ n(t): (i) Direct citation a → b: Each new publication a starts by referencing 1 publication b from period t_b≤ t_a (where t_a=t by definition). The publication b is selected proportional to its attractiveness, prescribed by the weight 𝒫_b,t≡ (c_×+c_b,t)[n(t_b)]^α. The factor c_b,t is the total number of citations received by b thru the end of period t-1, thereby representing preferential attachment (PA) link dynamics <cit.>. The factor n(t_b) is the number of new publications introduced in cohort t_b, and represents crowding out of old literature by new literature, net of the citation network. The parameter c_×≡ 6 is a citation offset controlling for the citation threshold, above which preferential attachment “turns on” <cit.> such that a node becomes incrementally more attractive once c_b≥ c_×. (ii) Redirection a →{r}_b: Immediately after step (i), the new publication a then cites a random number x of the publications cited in the references list {r}_b (of size r_b) of publication b. By definition, β represents the fraction of citations following from this redirection mechanism, which is responsible for the rate of non-spurious triadic closure in the network. Hence, by construction β = λ / (λ + 1) ∈ [0,1], where λ represents the average number of citations to elements of {r}_b by publication a (such that the expected value of x is λ). Consequently, λ =β/(1-β) is the ratio of the rate of citations following citing mechanism (ii) by the rate of citation following the `direct' citation mechanism (i). We operationalize the stochastic probability of selecting x references according to the binomial distribution, P(x=k) = r_b k (q)^k(1-q)^r_j-k , with success rate q=λ/r_b to ensure that ⟨ x ⟩ =λ. Put another way, on average, the total number of new citations per period that follow from the redirection citation mechanism (ii) is r_(ii)(t)=β r(t). Once x is determined by way of a random number generator, we then select x_Binomial(r_b,q) members from the set {r}_b (i.e. without replacement). Each publication belonging to {r}_b is selected according to the same weights 𝒫_p,t used in step (i). 
As such, this second-stage PA also prioritizes more recent elements of {r}_b (i.e., those items with larger t_p), in addition to more highly-cited elements of {r}_b. Note that we do not allow a to cite any given element of {r}_b more than once within its reference list. (c) Stop citing after reaching r(t): The referencing process alternates between mechanisms (i) and (ii) until publication a has cited exactly r(t) publications. * Repeat step 2. Link Dynamics for each new publication entering in period t. * Update the PA weights, 𝒫_p,t, for all existing nodes at the end of each t. * Perform steps (1-4) for t=1...T periods and then exit the network growth algorithm. §.§ Computational simulation results In this section we present the results of a generative citation network model <cit.> that incorporates latent features of secular growth and two complementary citation mechanisms illustrated in Fig. <ref>(b), namely: (i) direct citation from a new publication a to publication b; and (ii) redirected citations from a to a random number of publications from the reference list of b. The redirection mechanism (ii) gives rise to triadic closure in the network, thereby capturing shifts in correlated citation practice – such as the increased ease at which scholars can follow a citation trail with the advent of web-based hyperlinks, as well as self-citation. This redirection is the dominant contributor to 'consolidation' measured by N_j in CD_p. We explicitly control the rate of (ii) with a tunable parameter β∈ [0,1] that determines the fraction of links in the citation network resulting from mechanism (ii). And to simulate the net effect of β, we construct some networks featuring a constant β(t) =0 and other networks featuring an increasing β(t) ≡ t/400 such that β (t=150) = 0.375 corresponding to roughly 1/3 of links arising from mechanism (ii) by the end of the simulation – see Fig. <ref>(c). We construct ensembles of synthetic networks according to six growth scenarios that incrementally add or terminate either of two citation mechanisms: g_r=0 corresponds to no CI; and β = 0 corresponds to no triadic closure (i.e., no `consolidation'). More specifically, the parameters distinguishing the six scenarios analyzed in what follows are: (1) no CI (g_r=0 with r(t)=25); and no explicit redirection mechanism that controls triadic closure (β = 0); (2) no CI (g_r=0 with r(t)=25); and an increasing redirection rate, β(t) = t/400 such that β(150) =0.375; (3) CI implemented using the empirical value (g_r=0.018) with r(0)=5; and increasing redirection rate, β(t) = t/400; (4) same as (3) but calculated using a larger citation window. (5) same as (3) but reference list capped at r(t) = 25 for t≥ T^*≡ 92. (6) same as (4) but reference list capped at r(t) = 25 for t≥ T^*. For each scenario we constructed four distinct synthetic citation networks, each evolving over t ∈ [1,T = 150] periods (i.e years) from a common initial condition at t=0. For scenarios (1-3) we calculate CD_p using a citation window of CW= 5 periods, whereas in (4) we use CW=10 periods. Scenarios (3) and (4) are shown in order to show the non-linear sensitivity of CD_CW to the CW parameter <cit.>, and demonstrates that fixed CWs do not address CI <cit.>. Figure <ref>(d) shows 16 average CD(t) curves calculated for each synthetic network. 
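A compact sketch of the growth loop defined by the rules above is given here for reference. It is intentionally simplified: it freezes the attachment weights within each period, omits the crowding-out factor [n(t_b)]^α, and is not optimized, so it illustrates the mechanism rather than reproducing the exact simulator used to generate the archived networks.

import numpy as np

def grow_network(T=150, n0=30, g_n=0.033, r0=5, g_r=0.018, c_x=6.0, beta_t=lambda t: t / 400, seed=0):
    rng = np.random.default_rng(seed)
    refs = [[] for _ in range(n0)]           # reference lists; primordial nodes cite nothing
    cites = np.zeros(n0)                     # citations received so far
    for t in range(1, T + 1):
        n_t = int(n0 * np.exp(g_n * t))      # new publications added this period
        r_t = max(1, int(r0 * np.exp(g_r * t)))      # references per new publication
        lam = beta_t(t) / (1.0 - beta_t(t))  # mean redirected citations per direct citation
        prob = (cites + c_x) / (cites + c_x).sum()   # PA weights, frozen for the whole period
        new_refs = []
        for _ in range(n_t):
            chosen = set()
            while len(chosen) < r_t:
                b = int(rng.choice(len(prob), p=prob))          # (i) direct citation
                chosen.add(b)
                x = rng.binomial(len(refs[b]), min(1.0, lam / max(len(refs[b]), 1)))
                if x:
                    for c in rng.choice(refs[b], size=x, replace=False):
                        if len(chosen) < r_t:
                            chosen.add(int(c))                  # (ii) redirected citation
            new_refs.append(sorted(chosen))
        for lst in new_refs:                 # append the cohort and update citation counts
            refs.append(lst)
            for p in lst:
                cites[p] += 1
        cites = np.append(cites, np.zeros(n_t))
    return refs, cites

Running this loop with g_r=0 and r0=25 versus g_r=0.018 and r0=5 corresponds to the contrast between the constant and growing reference-list scenarios described above.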
Because the sources of network variation are strictly limited to the stochastic link dynamics, there is relatively little variance across each ensemble of networks constructed using the same scenario parameters, and so in what follows we show all realizations simultaneously. As there are no latent institution, author or other innovation covariates, then the difference between network ensembles is attributable to either CI or the redirection mechanism. We start by considering scenarios (1,2) for which g_r=0, which show that CD_5(t) systematically increases in the absence of reference list CI. While scenario (1) does capture CI attributable to increased publication volume (g_n>0), it does not appear to be sufficient to induce a negative trend in CD_5(t). Scenario (2) features an increasing β(t), which results in larger CD values because redirected citations tend to fall outside shorter CW and thus are not incorporated into the CD subgraph. Summarily, comparison of (1) and (2) indicate that the redirection mechanism capturing shifting patterns of scholarly citation behavior is the weaker of the two mechanisms we analyzed. The comparison of scenarios (2,3) illustrates the role of CI. Notably, scenario (3) reproduces both the magnitude and rate of the decreasing trend in CD(t) observed for real citation networks <cit.>. Figure <ref>(e) shows that alternative metric CD^nok_5 proposed in ref. <cit.>, which also matches the empirical trends reported in ref. <cit.>. These results demonstrate the acute effect of reference-list CI on CD since the only difference between scenarios (2) and (3) pertains to g_r. Figure <ref>(f) reproduces the linear relationship between r(t) and R_k(t) and confirms the empirical relationship shown in Fig. <ref>(e) – thereby solving the mystery regarding the origins of the decreasing disruptiveness over time <cit.>: as the size of the reference list {r}_p increases, so does the likelihood that {r}_p contains a highly-cited paper, which increases N_k to such a degree that R_k,p≫ 1 and so CD_p→ 0 independent of the relative differences between disruption and consolidation captured by N_i-N_j. Figure <ref>(g) shows that even CD^nok_5 suffers from systematic bias affecting its denominator, and so neglecting the term N_k does not solve the fundamental issue of CI. Scenarios (3,4) reveal the effect of CW, which controls the size of the set {c}_p and thus the magnitude and growth rate of R_k(t). Notably, the number of items included in {c}_p depends on both CW and t because the reference age between the cited and citing article increases with time <cit.>. Regardless, the average CD_CW(t) → 0 as r(t) increases, independent of the CW used. Figure <ref> further explores the implications of CI on CD by modeling a hypothetical scenario in which CI is suddenly `turned off' after a particular intervention time period T^*. In this way, scenarios (5,6) explore the implications of a restrictive publishing policy whereby all journals suddenly agree to impose caps on reference list lengths. Scenarios (5,6) enforce this hypothetical policy at t≥ T^*≡ 92 by way of a piecewise smooth r(t) curve such that: r(t)=r(0)exp[g_rt] for t<T^* and r(t) = r(T^*)=25 – see Fig. <ref>(a). This hypothetical intervention exhibits the potential for the scientific community to temper the effects of CI by way of strategic publishing policy. For completeness, scenarios (3) and (5) use CW=5 and scenarios (4) and (6) use CW =10. 
Figures <ref>(b,c) show that the average CD(t) and R_k(t) trajectories for each pair of scenarios are indistinguishable prior to T^*. Yet immediately after T^* the scenarios (5) and (6) diverge from (3) and (4), respectively. Notably, the average CD_5(t) in scenarios (5) and (6) reverses to the point of slowly increasing, thereby matching the trends observed for scenario (2). For completeness, Fig. <ref>(c) confirms that this trend-reversal is due to the relationship between r(t) and R_k(t). The shifts in the average CD_5(t) are indeed representative of the entire distribution of CD_p,5 values – see Figs. <ref>(d,e). Interestingly, the distribution P_t(CD_5) converges to a stable Extreme-Value (Fisher-Tippett) distribution in the absence of reference list growth, which exposes candidate avenues for developing time-invariant measures of disruption by rescaling values according to the location and scale parameters. The feasibility of this approach was previously demonstrated in an effort to develop field-normalized <cit.> and time-invariant (z-score) citation metrics <cit.>. § DISCUSSION In summary, despite the reasonable logic behind the definition of CD, the difference between disruptive and consolidating links appearing in the numerator, N_i-N_j, is systematically overwhelmed by the extensive quantity R_k∼ r(t) appearing in the denominator of CD. More specifically, we show that the CD index artificially decreases over time due to citation inflation deriving from the ever-increasing r(t), rendering CD systematically biased and unsuitable for cross-temporal analysis. For the same reasons that central banks must design monetary policy to avoid the ill effects of printing excess money <cit.>, researchers analyzing scientific trends should be wary of citation-network bibliometrics that are not stable with respect to time. Scenarios where achievement metrics are non-stationary and thus systematically biased by nominal inflation are common, including researcher evaluation <cit.>, journal impact factors <cit.>, and even achievement metrics in professional sports <cit.>. In addition to the measurement error induced by CI, the disruption index also does not account for confounding shifts in scholarly citation practice. The counterbalance to disruption, captured by the term N_i in Eq. (1), is consolidation (N_j), which is fundamentally a measure of triadic closure in the subgraph G_p. While triangles may spuriously occur in a random network, their frequency in real networks is well in excess of random base rates due to the correlated phenomena underlying scholarly practice – in particular, the increasingly strategic (personal and social) character of scholarly citing behavior. The source and implications of citation inflation are not inherently undesirable, and if anything point to a thriving industry emerging from the scientific endeavor. The advent of online-only journals is a main reason for the steady increase in r(t), as they are not limited by print volume capacity, unlike more traditional print journals. Hence, in the era of megajournals <cit.> there may have emerged a tendency to cite more liberally than in the past. Another mechanism connecting CI and citation behavior derives from the academic profession becoming increasingly dominated by quantitative evaluation, which thereby promotes the inclusion of strategic references dispersed among the core set of references directly supporting the research background and findings <cit.>.
Notably, scholars have identified various classes of self-citation <cit.>, which generally emerge in order to benefit the authors <cit.>, institutional collectives <cit.>, the handling editor <cit.>, and/or the journal <cit.> – but are otherwise difficult to differentiate from `normal' citations. Regardless of their intent, these self-citations are more likely to contribute to triadic closure because if article b cites c as a result of self-citation, then for the same reason a new article a that cites b (or c) is that much more likely to complete the triangle on principle alone. These two issues – citation inflation and shifting scholarly behavior – introduce systematic bias in citation-based research evaluation that extends over significant periods of time. Indeed, time is a fundamental confounder, and so to address this statistical challenge various methods introducing time-invariant citation metrics have been developed <cit.>. A broader issue occurs when different variables simultaneously shift over time, such as the number of coauthors and the topical breadth and depth of individual articles, which makes establishing causal channels between any two variables ever more challenging. By way of example, we analyzed the relationship between CD_p and k_p, using a regression model with fixed effects for publication year to superficially control for secular growth, and observed a positive relationship between these two quantities, in stark contrast to the negative relationship reported in ref. <cit.>. We conclude with a policy insight emerging from our analysis regarding interventional approaches to addressing citation inflation. Namely, journals might consider capping reference lists commensurate with the different types of articles they publish, e.g. letters, articles, reviews, etc. A more flexible alternative would be to impose a soft cap based upon the average number of references per article page <cit.>. Results of our computational simulations indicate that such a policy could readily temper the effects of citation inflation in research evaluation, and might simultaneously address other shortcomings associated with self-citations by effectively increasing their cost. § DATA AVAILABILITY All synthetic citation networks analyzed are openly available at the Dryad data repository <cit.>. The pseudocode for the citation network growth is sufficient to generate additional citation networks with different parameters. § APPENDIX: REPRODUCTION OF STATISTICAL REGULARITIES IN A REAL-WORLD CITATION NETWORK – THE WEB OF SCIENCE The following is a summary of the structural and dynamical regularities that characterize a typical network produced by our model using the growth parameters indicated along the top bar of Fig. <ref>. In addition to the stylized regularities listed below, the citation model also reproduces the temporal trends in CD_5, CD_5^nok, and the frequency distribution P(CD_5), reported by Park et al. <cit.> – see Figs. <ref>, <ref> and <ref>. Figure <ref>(a) shows the time series n(t), r(t), and R(t) as determined by the empirical parameters g_n, g_r, and g_R. Figure <ref>(b) shows the mean of the reference distance Δ_r = t_a - t_p, calculated as the time difference between the publication year of a and that of any given publication p that it cites. The increasing ⟨Δ_r| t ⟩ conforms with prior theoretical and empirical work <cit.>. Figure <ref>(c) shows the decreasing frequency of publications with less than C = 0, 1, 2, 5, 10 citations.
This trend is consistent with empirical work <cit.>, and has profound implications for the connectivity of the citation network, and for search and retrieval algorithms based upon that connectivity. Figure <ref>(d) shows the average citation life-cycle, Δ c (τ| t), of individual publications conditioned on their publication year t, where τ is the age of the publication in that year, τ_p = t-t_p+1. The exponential decay of this life-cycle is consistent with empirical work <cit.>. Figure <ref>(e) shows the mean and standard deviation of c' = ln (c_p+1), where citation counts c_p are tallied at T, the final period of the model. Naturally, very recent publications have not had sufficient time to accrue citations. Also, very early publications were at the peak of their lifecycle during periods in which n(t) was smaller. Hence, the average μ_LN peaks near the end of the model, and then decays to 0 in the final period. This systematic bias due to citation inflation, as well as the right-censoring bias, may seem difficult to overcome. However, the location and scale given by μ_LN and σ_LN, respectively, provide a powerful solution, which is to normalize citations according to the rescaling z_p,t = [ln(c_p,t+1) - μ_LN,t] / σ_LN,t , where μ_LN,t=⟨ln (c_p,t+1) ⟩ and σ_LN,t= σ[ ln (c_p,t+1)] are the mean and the standard deviation of the logarithm of c_p,t+1 calculated across all p within each t. This normalization procedure leverages the property that P(c_t| t) is log-normal, as shown for real citation networks <cit.>. As such, the distribution P(z_t) takes the form of a standardized z-score distributed according to the normal distribution N(0,1), which is stable over time. As shown in Fig. <ref>(f), P(z) forms an inverted parabola when plotted on log-linear axes, independent of t. This normalization is useful in regression settings aimed at identifying citation effects net of temporal trends, where t is included in the model specification either as a continuous or as a dummy variable <cit.>. Figures <ref>(g,h) show the evolution of the citation share of the top and bottom percentile groups F_∑ c(Q | τ, t), consistent with empirical work showing that a small fraction of the top-cited papers from high-impact journals increasingly dominate the future citations of that journal <cit.>. Figure <ref>(i) shows individual citation trajectories, c_p,t, produced by the model. The shape and distribution of the cohort are consistent with empirical citation trajectories reported in <cit.>.
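As a concrete illustration of the rescaling z_p,t defined above, the following sketch computes cohort-wise log-means and log-standard deviations and returns the normalized scores; the column names are hypothetical and the snippet is not part of the model itself.

import numpy as np
import pandas as pd

def citation_zscores(df, year_col="year", cite_col="citations"):
    """Normalize citation counts within each cohort t via
    z = (ln(c+1) - mu_t) / sigma_t, following the rescaling above."""
    logc = np.log(df[cite_col] + 1.0)
    grouped = logc.groupby(df[year_col])
    mu = grouped.transform("mean")
    sigma = grouped.transform("std").replace(0.0, np.nan)  # guard degenerate cohorts
    return (logc - mu) / sigma

# Example with toy data (hypothetical columns 'year' and 'citations'):
df = pd.DataFrame({"year": [2000, 2000, 2000, 2010, 2010, 2010],
                   "citations": [0, 5, 120, 2, 30, 400]})
df["z"] = citation_zscores(df)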
http://arxiv.org/abs/2306.03631v1
20230606123720
The Anti-de Sitter proof of Thurston's earthquake theorem
[ "Farid Diaf", "Andrea Seppi" ]
math.GT
[ "math.GT", "math.DG" ]
The Anti-de Sitter proof of Thurston's earthquake theorem

Farid Diaf: Univ. Grenoble Alpes, CNRS, IF, 38000 Grenoble, France. [email protected]

Andrea Seppi: Univ. Grenoble Alpes, CNRS, IF, 38000 Grenoble, France. [email protected]

The second author is a member of the national research group GNSAGA.

Thurston's earthquake theorem asserts that every orientation-preserving homeomorphism of the circle admits an extension to the hyperbolic plane which is a (left or right) earthquake. The purpose of these notes is to provide a proof of Thurston's earthquake theorem, using the bi-invariant geometry of the Lie group PSL(2,ℝ), which is also called Anti-de Sitter three-space. The involved techniques are elementary, and no background knowledge is assumed apart from some two-dimensional hyperbolic geometry.

§ INTRODUCTION Since the 1980s, earthquake maps have played an important role in the study of hyperbolic geometry and Teichmüller theory. These are (possibly discontinuous) maps of the hyperbolic plane to itself that, roughly speaking, are isometric in the complement of a subset of the hyperbolic plane which is a disjoint union of geodesics, and they "slip" along the "faults" represented by these geodesics. In particular, they may have points of discontinuity there. In general, an earthquake map can be complicated, and it is an isometry only on the connected components of the complement of a measured geodesic lamination. To achieve the solution of the Nielsen realization problem <cit.>, Steven Kerckhoff proved the so-called earthquake theorem for closed hyperbolic surfaces, that is, the existence of a left (right) earthquake map between any two closed hyperbolic surfaces of the same genus. In <cit.>, William Thurston gave a generalization, proved by independent methods, to a universal setting, which is the statement that we consider in the present notes: he proved that every orientation-preserving homeomorphism of the circle admits an extension to the hyperbolic plane which is a (left or right) earthquake. Earthquake maps have later been extensively studied in various directions; see <cit.>. §.§ Mess' groundbreaking work and later developments In his 1990 pioneering paper <cit.>, Geoffrey Mess first highlighted the deep connections between the Teichmüller theory of hyperbolic surfaces and three-dimensional Lorentzian geometries of constant sectional curvature. In particular, the so-called Anti-de Sitter geometry is the Lorentzian geometry of constant negative curvature, that is, the Lorentzian analogue of hyperbolic geometry. One of the models of Anti-de Sitter three-space is simply the Lie group PSL(2,ℝ), endowed with a Lorentzian metric which is induced by the (bi-invariant) Killing form on its Lie algebra. This is the model that we adopt in the present work. Mess then observed that convex hulls in Anti-de Sitter space can be used, together with a Gauss map construction for spacelike surfaces, to prove earthquake theorems in hyperbolic geometry. In <cit.>, Mess outlined the proof of the earthquake theorem between closed hyperbolic surfaces. His groundbreaking ideas have been improved and implemented by several authors, leading to many existence results for earthquake maps in various settings <cit.> and for other interesting types of extensions <cit.>.
See also the paper <cit.>, which is a detailed introduction to Anti-de Sitter geometry and contains a general treatment of the Gauss map, but only sketches some of the ideas that appear in the proof of Mess. The literature seems to lack a complete proof of the earthquake theorem, in Thurston's universal version, which relies on Anti-de Sitter geometry. In these notes, we will provide a detailed proof of Thurston's earthquake theorem (Theorem <ref>), and we will then recover (Corollary <ref>) the existence of earthquake maps for closed hyperbolic surfaces. While the proofs that appear in <cit.>, and in several of the aforementioned subsequent works, make use of a computation of the holonomy, here we will simply work with the definition of earthquake map. In fact, the proof presented here, although going through several technical steps, involves only elementary tools. The only required background for these notes is hyperbolic plane geometry in the upper half-plane model, and the very basic definitions of Lie group theory and Lorentzian geometry. §.§ A quick comparison of the two proofs It is also worth remarking that the proof presented here, and suggested by Mess, is not entirely different in spirit from Thurston's proof in <cit.>. Indeed, the starting point of Thurston's proof consists in considering, given an orientation-preserving homeomorphism f of the circle, those isometries γ of the hyperbolic plane such that the composition h:=γ∘ f is extreme left: that is, such that h admits a lift h̃:ℝ→ℝ satisfying h̃(x)≤ x and whose fixed point set is non-empty. In Thurston's words, "h moves points counterclockwise on the circle, except for those points that it fixes". Then Thurston defines the earthquake map to be equal to γ^-1 on the convex hull of the fixed points of h. This has an interpretation in terms of Anti-de Sitter geometry. Spacelike planes in Anti-de Sitter space, which is simply the Lie group PSL(2,ℝ), are isometrically embedded copies of the hyperbolic plane, and are parameterized by elements of PSL(2,ℝ) itself, via a natural duality. For instance, the dual plane to the identity consists of all elliptic elements of order two, which is identified with the hyperbolic plane itself via the fixed point map. The “extreme left condition” as above is then exactly equivalent to the condition that the spacelike plane dual to γ is a past support plane of the convex hull of the graph of f, which can be seen as a subset of the boundary at infinity of Anti-de Sitter space. The proof presented here then consists in considering the left and right projections, defined on the past boundary component of the convex hull, and the composition E of one projection with the inverse of the other. It turns out that this composition map E is indeed equal to γ^-1 on the convex hull of the fixed points of γ∘ f, as in Thurston's ansatz. Of course one can replace extreme left by extreme right, and past boundary with future boundary, to obtain right earthquakes instead of left earthquakes. We remark that the main statement proved by Thurston also includes a uniqueness part. In fact, the earthquake map is not quite unique, but it is unique up to a certain choice that has to be made at every geodesic where it is discontinuous. We will give an interpretation of this phenomenon in terms of a choice of support plane at the points of the boundary of the convex hull that admit several support planes, but we will not provide a proof of the uniqueness part here.
§.§ Main elements of the Anti-de Sitter proof Despite the above analogies with Thurston's original proof of the existence of left and right earthquakes, developing the proof in the Anti-de Sitter setting then leads to remarkable differences with respect to Thurston's proof. A large part of our proof is actually achieved by a reduction to the situation of an orientation-preserving homeomorphism of the circle which is equal to the restriction of an element γ_i of PSL(2,ℝ) on an interval I_i (i=1,2), where I_1∪ I_2 equals the circle. In this situation the earthquake extension is already well-known, and consists of a simple earthquake. However, understanding this example in detail from the perspective of Anti-de Sitter geometry, which corresponds to the situation where a boundary component of the convex envelope of f is the union of two totally geodesic half-planes meeting along a geodesic, then permits an easy proof of some of the fundamental properties that one has to verify in order to show that the composition map E is an earthquake map. There are furthermore two main technical statements that we have to prove. The first is the fact that the left and right projections (although they can be discontinuous) are bijective, which is essential since the earthquake map is defined as the inverse of the left projection post-composed with the right projection, and implies that E itself is a bijection of the hyperbolic plane. While injectivity is easy using the aforementioned example of two totally geodesic planes meeting along a geodesic, surjectivity requires a more technical argument. The second statement is an extension lemma, which ensures that the left and right projections (although sometimes discontinuous) extend continuously to the boundary, and the extension is simply the projection from the graph of f onto the first and second factor. This ensures that the composition E of the right projection with the inverse of the left projection extends to f itself on the circle at infinity. Some of the above steps do of course involve a number of technical difficulties, but the language of Anti-de Sitter geometry is, in our opinion, extremely effective, and allows us to stick to quite elementary techniques throughout the work. §.§ Acknowledgements We would like to thank Pierre Will for a remark on the description of timelike planes via composition of orientation-reversing isometries, which is used in Section <ref>. We are grateful to Filippo Mazzoli and Athanase Papadopoulos for useful suggestions that helped improve the exposition. § EARTHQUAKE MAPS Throughout this work, we will use the upper half-plane model of the hyperbolic plane ℍ^2, that is, ℍ^2 is the half-plane Im(z)>0 in ℂ, endowed with the Riemannian metric |dz|^2/Im(z)^2 of constant curvature -1. Its visual boundary ∂_∞ℍ^2 is therefore identified with ℝ∪{∞}, and ℍ^2∪∂_∞ℍ^2 is endowed with the topology given by the one-point compactification of the closed half-plane Im(z)≥ 0. The isometry group of ℍ^2 is identified with the group PSL(2,ℝ) acting by homographies, and its action naturally extends to ∂_∞ℍ^2. A geodesic lamination λ of ℍ^2 is a collection of disjoint geodesics that foliate a closed subset X⊆ℍ^2. The closed set X is called the support of λ. The geodesics in λ are called leaves. The connected components of the complement ℍ^2∖ X are called gaps. The strata of λ are the leaves and the gaps. Given a hyperbolic isometry γ of ℍ^2, the axis of γ is the geodesic ℓ of ℍ^2 connecting the two fixed points of γ in ∂_∞ℍ^2.
Therefore the axis ℓ is preserved by γ, and when restricted to ℓ, γ|_ℓ:ℓ→ℓ acts as a translation with respect to any constant speed parameterization of ℓ. Given two subsets A,B of ℍ^2, we say that a geodesic ℓ weakly separates A and B if A and B are contained in the closure of different connected components of ℍ^2∖ℓ. A left (resp. right) earthquake of ℍ^2 is a bijective map E:ℍ^2→ℍ^2 such that there exists a geodesic lamination λ for which the restriction E|_S of E to any stratum S of λ is equal to the restriction of an isometry of ℍ^2, and for any two strata S and S' of λ, the comparison isometry Comp(S,S'):=(E|_S)^-1∘ E|_S' is the restriction of an isometry γ of ℍ^2, such that: * γ is different from the identity, unless possibly when one of the two strata S and S' is contained in the closure of the other; * when it is not the identity, γ is a hyperbolic transformation whose axis ℓ weakly separates S and S'; * moreover, γ translates to the left (resp. right), seen from S to S'. Let us clarify the meaning of this last condition. Suppose f:[0,1]→ℍ^2 is a smooth path such that f(0)∈ S, f(1)∈ S' and the image of f intersects ℓ transversely and exactly at one point z_0=f(t_0)∈ℓ. Let v=f'(t_0)∈ T_z_0ℍ^2 be the tangent vector at the intersection point. Let w∈ T_z_0ℍ^2 be a vector tangent to the geodesic ℓ pointing towards γ(z_0). Then we say that γ translates to the left (resp. right) seen from S to S' if (v,w) is a positive (resp. negative) basis of T_z_0ℍ^2, for the standard orientation of ℍ^2. It is important to observe that this condition is independent of the order in which we choose S and S'. That is, if Comp(S,S') translates to the left (resp. right) seen from S to S', then Comp(S',S) translates to the left (resp. right) seen from S' to S. We remark that an earthquake E is not required to be continuous. In fact, in some cases it will not be continuous, for instance when the lamination λ is finite, meaning that λ is a collection of a finite number of geodesics. This is best visualized in the following simple example. The map E:ℍ^2→ℍ^2 defined in the upper half-plane model of ℍ^2 by E(z)=z if Re(z)<0, E(z)=az if Re(z)=0, and E(z)=bz if Re(z)>0, is a left earthquake if 1<a<b, and a right earthquake if 0<b<a<1. The lamination λ that satisfies Definition <ref> is composed of a unique geodesic, namely the geodesic ℓ with endpoints 0 and ∞. It is clear that the earthquake map E from Example <ref> is not continuous along ℓ. Despite the lack of continuity, Thurston proved that any earthquake map extends continuously to an orientation-preserving homeomorphism of ∂_∞ℍ^2, meaning that there exists a (unique) orientation-preserving homeomorphism f:∂_∞ℍ^2→∂_∞ℍ^2 such that the extended map Ē, defined by Ē(z)=E(z) if z∈ℍ^2 and Ē(z)=f(z) if z∈∂_∞ℍ^2, is continuous at every point of ∂_∞ℍ^2. Then Thurston provided a proof of the following theorem, which he called “geology is transitive”: Given any orientation-preserving homeomorphism f:∂_∞ℍ^2→∂_∞ℍ^2, there exists a left earthquake map of ℍ^2, and a right earthquake map, that extend continuously to f on ∂_∞ℍ^2. We remark that the earthquake map is not unique, as shown by Example <ref>, which provides a family of left (resp. right) earthquake maps extending the homeomorphism f defined by f(x)=x if x≤ 0, f(x)=bx if x≥ 0, and f(∞)=∞, parameterized by the choice of a∈(1,b) (resp. a∈ (b,1)). Thurston's theorem is actually stronger than the statement of Theorem <ref> above, since it characterizes the non-uniqueness as well.
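As a quick numerical illustration of the simple earthquake in the example above (an informal sketch, not part of the argument), the following Python function applies E in the upper half-plane and makes the discontinuity along the imaginary geodesic ℓ visible.

import numpy as np

def simple_earthquake(z, a=2.0, b=3.0):
    """The map of the example above: identity on Re(z)<0, z -> a z on the
    fault Re(z)=0, z -> b z on Re(z)>0 (a left earthquake when 1 < a < b)."""
    x = np.real(z)
    return np.where(x < 0, z, np.where(x > 0, b * z, a * z))

# The comparison isometry between the two half-planes is z -> b z, a hyperbolic
# isometry whose axis is the geodesic from 0 to infinity, so E is discontinuous
# across Re(z) = 0 but isometric on each open half-plane.
pts = np.array([-1 + 1j, 1j, 1 + 1j])
print(simple_earthquake(pts))        # [-1.+1.j, 0.+2.j, 3.+3.j]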
In short, the range of choices of the earthquake extension as in Example <ref> is essentially the only indeterminacy that occurs, and it happens exactly on each leaf of the lamination where the earthquake is discontinuous. We will not deal with the uniqueness part of Thurston's statement here. Nevertheless, in Subsection <ref> we will show that our proof permits us to recover the existence of earthquake maps between homeomorphic closed hyperbolic surfaces, without relying on the uniqueness property. § ANTI-DE SITTER GEOMETRY In this section, we will introduce the fundamental notions in Anti-de Sitter geometry. For more details, the reader can consult <cit.>. §.§ First definitions The three-dimensional Anti-de Sitter space AdS^3 is the Lie group PSL(2,ℝ), that is, the group of orientation-preserving isometries of ℍ^2, endowed with a bi-invariant metric of signature (2,1) (namely, a Lorentzian metric) which we now construct. Consider first the double cover SL(2,ℝ) of PSL(2,ℝ), which we realize as the subset of matrices of unit determinant in the four-dimensional vector space M(2,ℝ) of 2-by-2 real matrices. Endow M(2,ℝ) with the quadratic form q(A)=-det(A). It can be checked that q has signature (2,2). The associated bilinear form is expressed by the formula: ⟨ A,B⟩=-1/2 tr(A·adj(B)) for A,B∈ M(2,ℝ), where adj denotes the adjugate matrix, namely adj[ a b; c d ]=[ d -b; -c a ] . Then SL(2,ℝ) is realized as the subset of M(2,ℝ) defined by the condition q(A)=-1, and the restriction of ⟨·,·⟩ to the tangent space of SL(2,ℝ) at every point defines a pseudo-Riemannian metric of signature (2,1). We will still denote by ⟨·,·⟩ this metric on SL(2,ℝ), and by q the corresponding quadratic form. It can be shown that this metric has constant curvature -1, and the restriction of ⟨·,·⟩ to the Lie algebra sl(2,ℝ) coincides with 1/8 times the Killing form of sl(2,ℝ). Clearly both SL(2,ℝ) and q are invariant under multiplication by minus the identity matrix, hence the quotient PSL(2,ℝ)=SL(2,ℝ)/{± 1} is endowed with a Lorentzian metric of constant curvature -1, and is what we call the (three-dimensional) Anti-de Sitter space AdS^3. It turns out that the group of orientation-preserving and time-preserving isometries of AdS^3 is the group PSL(2,ℝ)×PSL(2,ℝ), acting by left and right multiplication on PSL(2,ℝ)≅AdS^3: (α,β)·γ:=αγβ^-1 . Although orientation-preserving and time-preserving are notions that do not depend on a chosen orientation, we will fix here an orientation and a time-orientation of AdS^3≅PSL(2,ℝ). To define an orientation on a Lie group, it actually suffices to define it in the Lie algebra, namely the tangent space at the identity 𝕀. Hence we declare that the following is an oriented basis (which is actually orthonormal) of sl(2,ℝ): V= [ 0 1; 1 0 ], W=[ 1 0; 0 -1 ], U=[ 0 -1; 1 0 ]. Observe that the vectors V,W are spacelike (i.e. q(V)>0 and q(W)>0), while U is timelike (q(U)<0). One can check that U is the tangent vector to the one-parameter group of elliptic isometries of ℍ^2 fixing i∈ℍ^2, parameterized by the angle of clockwise rotation; V and W are vectors tangent to the one-parameter groups of hyperbolic isometries fixing the geodesics with endpoints (-1,1) and (0,∞) respectively. Analogously, to define a time-orientation it suffices to define it in the Lie algebra, and we declare that U is a future-pointing timelike vector. §.§ Boundary at infinity The boundary at infinity of AdS^3 is defined as the projectivization of the cone of rank one matrices in M(2,ℝ): ∂_∞AdS^3=P{A∈ M(2,ℝ) | q(A)=0, A≠ 0} .
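The formulas above are easy to sanity-check numerically. The following sketch (purely illustrative, and not part of the construction) implements the adjugate and the bilinear form ⟨A,B⟩ = -1/2 tr(A·adj(B)), and verifies that V, W, U form an orthonormal triple with V, W spacelike and U timelike.

import numpy as np

def adj(A):
    """Adjugate of a 2x2 matrix: adj[[a, b], [c, d]] = [[d, -b], [-c, a]]."""
    return np.array([[A[1, 1], -A[0, 1]], [-A[1, 0], A[0, 0]]])

def inner(A, B):
    """Bilinear form <A, B> = -1/2 tr(A adj(B)); note <A, A> = -det(A) = q(A)."""
    return -0.5 * np.trace(A @ adj(B))

V = np.array([[0., 1.], [1., 0.]])
W = np.array([[1., 0.], [0., -1.]])
U = np.array([[0., -1.], [1., 0.]])
I = np.eye(2)

# V, W are spacelike and U, I are timelike; the four are pairwise orthogonal,
# reflecting the signature (2,2) on M(2,R) and (2,1) on sl(2,R).
print(inner(V, V), inner(W, W), inner(U, U), inner(I, I))   # 1.0 1.0 -1.0 -1.0
print(inner(V, W), inner(V, U), inner(W, U))                # 0.0 0.0 0.0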
We endow AdS^3∪∂_∞AdS^3 with the topology induced by seeing both AdS^3 and ∂_∞AdS^3 as subsets of the (real) projective space over the vector space M(2,ℝ). Hence AdS^3∪∂_∞AdS^3 is the compactification of AdS^3 in P(M(2,ℝ)). It will be extremely useful to consider the homeomorphism between ∂_∞AdS^3 and ℝℙ^1×ℝℙ^1, which is defined as follows: δ : ∂_∞AdS^3 → ℝℙ^1×ℝℙ^1, δ([X]) = (Im(X),Ker(X)), where of course in the right-hand side we interpret ℝℙ^1 as the space of one-dimensional subspaces of ℝ^2. Since we have Im(AXB^-1)=A·Im(X) and Ker(AXB^-1)=B·Ker(X), the map δ is equivariant with respect to the action of the group PSL(2,ℝ)×PSL(2,ℝ), acting on ∂_∞AdS^3 as the natural extension of the group of isometries of AdS^3, and on ℝℙ^1×ℝℙ^1 by the obvious product action. The following is a useful characterization of sequences in AdS^3 converging to a point in the boundary (see <cit.>): for (γ_n)_{n∈ℕ} a sequence of isometries of ℍ^2, we have: γ_n→δ^-1(x,y) ⇔ there exists z∈ℍ^2 such that γ_n(z)→ x and γ_n^-1(z)→ y ⇔ for every z∈ℍ^2, γ_n(z)→ x and γ_n^-1(z)→ y, where of course here we are using the standard identification between ℝℙ^1 and the visual boundary ℝ∪{∞}=∂_∞ℍ^2, mapping the line spanned by (a,b) to a/b. A fundamental step in the proof of the earthquake theorem is that to any map f:∂_∞ℍ^2→∂_∞ℍ^2 we can associate a subset of ∂_∞AdS^3, namely (via the map δ) the graph of f. By the equivariance of the map δ introduced in (<ref>), we see immediately that, for (α,β)∈PSL(2,ℝ)×PSL(2,ℝ): (α,β)·graph(f)=graph(β f α^-1) . In the rest of this paper, we will omit the map δ, and we will simply identify ∂_∞AdS^3 with ℝℙ^1×ℝℙ^1. §.§ Spacelike planes We conclude the preliminaries by an analysis of totally geodesic planes in AdS^3. They are all obtained as the intersection of AdS^3 with a projective subspace of the projective space over M(2,ℝ). Hence they are all of the following form: P_[A]={[X]∈PSL(2,ℝ) | ⟨ X,A⟩=0} for some nonzero 2-by-2 matrix A. The notation P_[A] is justified by the observation that the plane P_A defined in the right-hand side of (<ref>) depends only on the projective class of A. The totally geodesic plane P_[A] is spacelike (resp. timelike, lightlike) if and only if q(A)=-det(A) is negative (resp. positive, zero). It will be called the dual plane of [A], since it can be seen as a particular case of the usual projective duality between points and planes in projective space. In particular, the dual plane P_γ of an element γ∈PSL(2,ℝ) is a spacelike totally geodesic plane. The first example, which is of fundamental importance for the following, is for γ=𝕀 the identity of PSL(2,ℝ). By (<ref>), P_𝕀 is the subset of PSL(2,ℝ) consisting of projective classes of matrices X of unit determinant with tr(X)=0. By the Cayley–Hamilton theorem, X^2=-𝕀, hence the elements of P_𝕀 are order–two isometries of ℍ^2, that is, elliptic elements with rotation angle π. Observe that P_𝕀 is invariant under the action of PSL(2,ℝ) by conjugation, which corresponds to the diagonal in the isometry group PSL(2,ℝ)×PSL(2,ℝ) of AdS^3. Using (<ref>), one immediately sees that the boundary of P_𝕀 in ∂_∞AdS^3≅ℝℙ^1×ℝℙ^1 is the diagonal; more precisely: ∂_∞ P_𝕀=graph(𝕀)⊂ℝℙ^1×ℝℙ^1 . Given a point z∈ℍ^2, let us denote by ℛ_z the order–two elliptic isometry with fixed point z. We claim that the map ι:ℍ^2→ P_𝕀, ι(z)=ℛ_z, is an isometry with respect to the hyperbolic metric of ℍ^2 and the induced metric on P_𝕀⊂AdS^3. First, the inverse of ι is simply the fixed-point map Fix:P_𝕀→ℍ^2 sending an elliptic isometry to its fixed point, which also shows that ι is equivariant with respect to the action of PSL(2,ℝ) on ℍ^2 by homographies and on P_𝕀 by conjugation, since Fix(αγα^-1)=α(Fix(γ)). That is, we have the relation ι(α· p)=α∘ι(p)∘α^-1 .
This immediately implies that ι is isometric, since the pull-back of the metric of P_𝕀 is necessarily PSL(2,ℝ)-invariant and has constant curvature -1, hence it coincides with the standard hyperbolic metric on the upper half-plane. This example is actually the essential example to understand general spacelike totally geodesic planes. Indeed, every spacelike totally geodesic plane is of the form P_γ for some γ∈PSL(2,ℝ). To see this, observe that the action of the isometry group of AdS^3 on spacelike totally geodesic planes is transitive, and that P_γ=(γ,𝕀)· P_𝕀, because the isometry (γ,𝕀) maps 𝕀 to γ, and therefore maps the dual plane of 𝕀 to the dual plane of γ. By (<ref>) and (<ref>), we immediately conclude the following: Every spacelike totally geodesic plane of AdS^3 is of the form P_γ for some orientation-preserving isometry γ of ℍ^2, and ∂_∞ P_γ=graph(γ^-1)⊂ℝℙ^1×ℝℙ^1 . §.§ Timelike planes Let us now consider a matrix A∈ M(2,ℝ) such that det(A)=-1. Hence the plane defined by Equation (<ref>) is a timelike totally geodesic plane. Associated with [A] is an orientation-reversing isometry η of ℍ^2. Indeed, the action of A by homography on ℂP^1 preserves ℝℙ^1 and switches the two connected components of the complement, that is, the upper and the lower half-planes. The matrix A thus induces an orientation-reversing isometry, up to identifying these two components via z↦z̅. We will thus denote P_[A] by P_η, by a small abuse of notation. The totally geodesic plane P_η can be parameterized as follows. Consider the map ℐ↦ℐη, defined on the space of reflections ℐ along geodesics of ℍ^2, with values in PSL(2,ℝ)≅AdS^3. Its image is precisely P_η. Indeed, it is useful to remark that, by the Cayley–Hamilton theorem, a matrix X with det(X)=-1 is an involution if and only if tr(X)=0. Now, because det(A)=-1, adj(A)=-A^-1, and therefore ⟨ XA,A⟩=0 if and only if tr(X)=0, that is, if and only if X is an involution. This shows that the image of the map (<ref>) is the entire plane P_η. Similarly to the spacelike case, using the transitivity of the action of the group of isometries on timelike planes, every timelike plane is of the form above. Thanks to this description, we can show the following. Every timelike totally geodesic plane of AdS^3 is of the form P_η for some orientation-reversing isometry η of ℍ^2, and ∂_∞ P_η=graph(η^-1)⊂ℝℙ^1×ℝℙ^1 . It only remains to check the identity for ∂_∞ P_η. For this, we will use the characterization (<ref>) together with the parameterization (<ref>) of P_η. Suppose the sequence ℐ_n is such that ℐ_nη(z)→ x∈∂_∞ℍ^2, for any z∈ℍ^2. Then, using that ℐ_n is an involution and the continuity of the action of η on ℍ^2∪∂_∞ℍ^2, (ℐ_nη)^-1(z)=η^-1ℐ_n^-1(z)=η^-1ℐ_n(z)→η^-1(x). This concludes the proof. It is worth remarking that, since reflections of ℍ^2 are uniquely determined by (unoriented) geodesics, we can consider the map (<ref>) as a map from the space 𝒢(ℍ^2) of unoriented geodesics of ℍ^2 to PSL(2,ℝ). It turns out that this map is isometric with respect to a natural metric on 𝒢(ℍ^2) which identifies it with the two-dimensional Anti-de Sitter space AdS^2; see <cit.> for more details. §.§ Lightlike planes The only case left to consider consists of lightlike totally geodesic planes. Those are of the form P_[A] for a nonzero matrix A with det(A)=0, that is, with rank(A)=1. We describe their boundary in the following lemma. It is important to remark that, unlike for the spacelike and timelike planes considered above, the boundary will not be a graph in ℝℙ^1×ℝℙ^1.
Every lightlike totally geodesic plane of AdS^3 is of the form P_[A] for some rank one matrix A, and ∂_∞ P_[A]=(Im(A)×ℝℙ^1)∪(ℝℙ^1×Ker(A)) . In other words, ∂_∞ P_[A] is the union of two circles in ℝℙ^1×ℝℙ^1, one horizontal and one vertical, which intersect exactly at the point in ℝℙ^1×ℝℙ^1 corresponding to [A]∈∂_∞AdS^3 via the map δ introduced in (<ref>). The points in ∂_∞ P_[A] are projective classes of rank one matrices X satisfying ⟨ X,A⟩=0, that is, such that tr(X adj(A))=0. Since X adj(A) has vanishing determinant, by the Cayley–Hamilton theorem X adj(A) is traceless if and only if it is nilpotent, that is, if and only if X adj(A)X adj(A)=0. Since the image and kernel of both X and adj(A) are all one-dimensional, it is immediate to see that this happens if and only if Im(adj(A))=Ker(X) or Im(X)=Ker(adj(A)) . Now, since det(A)=0 implies adj(A)A=A adj(A)=0, the relations Ker(adj(A))=Im(A) and Im(adj(A))=Ker(A) hold. Hence X∈ P_[A] if and only if Im(X)=Im(A) or Ker(X)=Ker(A), which concludes the proof, by the definition of δ. § CONVEXITY NOTIONS In this section we develop the necessary tools to tackle the proof of Thurston's earthquake theorem. §.§ Affine charts The starting point of the proof consists in considering the graph of an orientation-preserving homeomorphism f:ℝℙ^1→ℝℙ^1 as a subset of ∂_∞AdS^3, and taking its convex hull. However, the convex hull of a set in projective space can be defined in an affine chart, but AdS^3 is not contained in any affine chart. The following lemma serves to show that the convex hull of the graph of f is well-defined. Let f:ℝℙ^1→ℝℙ^1 be an orientation-preserving homeomorphism. Then: * There exists a spacelike plane P_γ in AdS^3 such that ∂_∞ P_γ∩ graph(f)=∅. * Moreover, given any point (x_0,y_0) ∉ graph(f), there exists a spacelike plane P_γ such that ∂_∞ P_γ∩ graph(f)=∅ and (x_0,y_0)∈∂_∞ P_γ. Before providing the proof, let us discuss an important consequence of the first item. Given a (spacelike) plane P_γ in AdS^3, let 𝒫_γ be the unique projective subspace of P(M(2,ℝ)) that contains P_γ, which is defined by the equation (<ref>) (where γ=[A]). Let us denote by 𝒜_γ the complement of 𝒫_γ, which we will call a (spacelike) affine chart. The first item of Lemma <ref> can be reformulated as follows: Let f:ℝℙ^1→ℝℙ^1 be an orientation-preserving homeomorphism. There exists a spacelike affine chart 𝒜_γ containing graph(f). The proof of Lemma <ref> below is largely inspired by <cit.>. Clearly the second item implies the first. However, we will first prove the first item, and then explain how to improve the proof to achieve the second item. Recall that PSL(2,ℝ) acts transitively on pairs of distinct points of ℝℙ^1≅ℝ∪{∞}; actually, it acts simply transitively on positively oriented triples. Hence for the first point we may assume, up to the action of the isometry group of AdS^3 by post-composition on f (recall (<ref>)), that f(0)=0 and f(∞)=∞. Then f induces a monotone increasing homeomorphism from ℝ to ℝ. Since f(0)=0, f preserves the two intervals (-∞,0) and (0,∞). Let now γ=ℛ_i be the order–two elliptic isometry fixing i. Clearly γ is an involution that maps 0 to ∞, and switches the two intervals (-∞,0) and (0,∞). Hence f(x)≠γ(x) for all x∈ℝ∪{∞}, that is, graph(f)∩ graph(γ)=∅. By Lemma <ref> and the fact that γ is an involution, graph(f)∩∂_∞P_γ=∅. To prove the second item, we will make full use of the transitivity of the PSL(2,ℝ)-action on oriented triples, and we will apply both pre- and post-composition by elements of PSL(2,ℝ).
As a preliminary step, let (x_0,y_0)∉(f), and observe that we can find points x and x' such that f maps the unoriented arc of ^1 connecting x and x' containing x_0 to the unoriented arc connecting f(x) and f(x') not containing y_0. The proof is just a picture, see Figure <ref>. Since f preserves the orientation of ^1, up to switching x and x', we have that (x,x_0,x') is a positive triple in ^1, while (f(x),y_0,f(x')) is a negative triple. Having made this observation, using simple transitivity on oriented triples we can assume (x,x_0,x')=(0,1,∞) and (f(x),y_0,f(x'))=(0,-1,∞). Then the choice γ=ℛ_i as in the first part of the proof satisfies the condition in the second item as well, since γ(1)=-1. §.§ Convex hulls Corollary <ref> permits to consider the convex hull of (f), in any affine chart 𝒜_γ that contains (f). Given σ∈(2,), the convex hull of (σ) is the closure of the totally geodesic spacelike plane P_σ^-1 in ^3. Indeed by Lemma <ref> the boundary at infinity of P_σ^-1 equals (σ), and moreover P_σ^-1 is convex, since spacelike geodesics of ^3 (which are the intersections of two transverse spacelike planes) are lines in an affine chart, and any two points in ∂_∞^2 are connected by a geodesic. Hence P_σ^-1 is clearly the smallest convex set containing (σ). This is the only case in which (f) is contained in a plane, and therefore its convex hull has empty interior. If f is not the restriction to ^1 of an element of (2,), then the convex hull of (f) is a convex body in the affine chart 𝒜_γ. Let us study one more important property of the convex hull of (f). Let f:ℝℙ^1→ℝℙ^1 be an orientation-preserving homeomorphism, let P_γ in ^3 be a spacelike plane such that ∂_∞ P_γ∩(f)=∅, and let K be the convex hull of (f) in the affine chart 𝒜_γ. Then: • The interior of K is contained in ^3. • The intersection of K with ∂_∞^3 equals (f). In particular, K⊂^3. Before proving Proposition <ref>, we give another technical lemma, which is proved by an argument in a similar spirit as the proof of Lemma <ref>. Let f:ℝℙ^1→ℝℙ^1 be an orientation-preserving homeomorphism and let P_γ in ^3 be a spacelike plane such that ∂_∞ P_γ∩(f)=∅. Given any two distinct points (x,f(x)) and (x',f(x')) in (f), there exists a spacelike plane, disjoint from P_γ, containing them in its boundary at infinity. Applying the action of (2,)×(2,) we can assume that γ=. The hypothesis ∂_∞ P_∩(f)=∅ then tells us that f has no fixed point. We are looking for a σ∈(2,) such that * P_∩ P_σ^-1=∅; * (x,f(x)),(x',f(x'))∈∂_∞ P_σ^-1=(σ). For the first condition to hold, it clearly suffices that the boundaries of P_ and P_σ^-1 do not intersect, that is to say, σ(y)≠ y for all y∈^1. This is equivalent to saying that σ does not have fixed points on ^1, namely, σ is an elliptic isometry. The second condition is equivalent to σ(x)=f(x) and σ(x')=f(x'). Since f has no fixed points, f(x)≠ x and f(x')≠ x'. There are various cases to distinguish (see also Figure <ref>). First, suppose (x,f(x),x') is a positive triple. Then either (x,f(x'),f(x),x') or (x,f(x),x',f(x')) are in cyclic order, because the remaining possibility, namely that (x,f(x),f(x'),x') are in cyclic order, would imply that f has a fixed point. If (x,f(x'),f(x),x') are in cyclic order, then the hyperbolic geodesics ℓ connecting x to f(x) and ℓ' connecting x' to f(x') intersect, and the order two elliptic isometry σ fixing ℓ∩ℓ' maps x to f(x) and x' to f(x'). 
If (x,f(x),x',f(x')) are in cyclic order, then the geodesics ℓ_1 connecting x to x' and ℓ_2 connecting f(x) to f(x') intersect, and one can find an elliptic element σ fixing ℓ_1∩ℓ_2 sending x to f(x) and x' to f(x'). Second, if (x,f(x),x') is a negative triple, then the argument is completely analogous. Finally, there is the possibility that f(x)=x'. If f(x')≠ x, the σ we are looking for is for instance an order–three elliptic isometry with fixed point in the barycenter of the triangle with vertices x,f(x)=x' and f(x'). If instead f(x')=x, then clearly we can pick any order–two elliptic isometry with fixed point on the geodesic ℓ from x to x'. In particular, Lemma <ref> shows that given any spacelike affine chart 𝒜_γ containing (f) and any two distinct points in (f), the line connecting them is contained in ^3∩𝒜_γ (except for its endpoints, which are in ∂_∞^3), and is a spacelike geodesic of ^3. We are now ready to prove Proposition <ref>. Given a point p in ∂_∞^3∖(f), by the second item of Lemma <ref> there exists a spacelike plane P_γ' passing through p that does not intersect (f). This implies that P_γ'∩ K=∅, and therefore K∩∂_∞^3=(f). Since K is connected, it is contained in the closure of one component of the complement of ∂_∞^3 in 𝒜_γ. But K is connected and intersects ^3∖ P_γ because, by Lemma <ref>, the line segment connecting any two points of (f) in the affine chart 𝒜_γ is contained in ^3∩𝒜_γ. Hence K is contained in ^3 and its interior is contained in ^3. By Corollary <ref> and Proposition <ref>, we can now give the following definition: Given an orientation-preserving homeomorphism f:ℝℙ^1→ℝℙ^1, we define (f) to be the subset of ^3 which is obtained as the convex hull of (f) in any spacelike affine chart 𝒜_γ such that ∂_∞ P_γ∩(f)=∅. The definition is well posed — that is, it does not depend on the chosen affine chart 𝒜_γ — because lines and planes are well defined in projective space, hence the change of coordinates from an affine chart to another preserves convex sets. When referring to convexity notions in the following, we will implicitly assume we have chosen a spacelike affine chart 𝒜_γ containing (f). §.§ Support planes Let us recall a basic notion in convex analysis. Given a convex body K in an affine space of dimension three, a support plane of K is an affine plane Q such that K is contained in a closed half-space bounded by Q, and ∂ K∩ Q≠∅. If p∈∂ K∩ Q, one says that Q is a support plane at the point p. As a consequence of the Hahn–Banach theorem, there exists a support plane at every point p∈∂ K. We will adopt this terminology for the convex hulls (f) in ^3: we say that a totally geodesic plane P is a support plane of (f) (at p∈∂(f)) if p∈(f)∩P⊂^3 and, in an affine chart containing (f), (f) lies in a closed half-space bounded by the affine plane that contains P. As usual, one easily sees that this definition does not depend on the affine chart as long as it contains (f). Equivalently, we can say that a totally geodesic plane P is a support plane for (f) if there exists a continuous family {P_t}_t∈ [0,ϵ) of totally geodesic planes, pairwise disjoint in ^3, such that P_0=P and P_t∩(f)=∅ for t> 0. Also, recall that we have the following identity for convex hulls: if X is a set, (X) its convex hull and Q an affine support plane for (X), then Q∩(X)=(Q∩ X). Applying this identity in our setting, we obtain for any totally geodesic support plane P: P∩(f)=(∂_∞ P∩(f)) . 
In the following proposition, we see that all support planes of (f) are allowed to be spacelike, and lightlike only if they touch (f) at a boundary point. Let f:ℝℙ^1→ℝℙ^1 be an orientation-preserving homeomorphism, and let P be a support plane of (f) at a point p∈∂(f). Then: * If p∈^3, then P is a spacelike plane. * If p∈∂_∞^3, then P is either spacelike or lightlike. The basic observation is that if P is a support plane, then ∂_∞ P and (f)=(f)∩∂_∞^3 do not intersect transversely. To clarify this notion, we say that an intersection point p∈∂_∞ P∩(f) is transverse if, for a small neighbourhood U of p such that ((f)∖ p)∩ U has two connected components, these two connected components are contained in different connected components of U∖∂_∞ P. From Lemma <ref>, if P is timelike, then ∂_∞ P is the graph of an orientation-reversing homeomorphism of ^1, hence it intersects (f) transversely. From Lemma <ref>, if P is lightlike, then ∂_∞ P is the union of the two circles {x}×^1 and ^1×{y}. So if p∈∂_∞ P∩(f) and p is not the point p_0=(x,y), then ∂_∞ P and (f) intersect transversely. So the sole possibility for P to be a lightlike support plane is to intersect (f) only at the point p_0. It remains to show that P∩(f) consists only of the point p_0, that is, it does not contain any point of ^3. By contradiction, if q∈ P∩(f) is different from p_0, then by (<ref>) ∂_∞ P∩(f) would contain another point different by p_0 as well, because the left-hand side must contain not only p_0 but also q. This would give a contradiction as above. Given a spacelike support plane P of (f) at a point p, we say that P is a future (resp. past) support plane if in a small simply connected neighbourhood U of p in ^3, (f) is contained in the closure of the connected component of U∖ P which is in the past (resp. future) of P. This means that there exist future-oriented (resp. past-oriented) timelike curves in U leaving (f)∩ U and reaching P∩ U. Clearly (f) cannot have a future and past support plane at p at the same time, unless (f) has empty interior, which is precisely the situation when f is an element of (2,) as in Example <ref>. In the following we will always assume int (f)≠∅. As a consequence of the previous discussion, we have the following useful statement on the convergence of support planes. Let f:ℝℙ^1→ℝℙ^1 be an orientation-preserving homeomorphism which is not in (2,), p_n a sequence of points in ∂(f), and P_γ_n a sequence of future (resp. past) spacelike support planes at p_n, for γ_n∈(2,). Up to extracting a subsequence, we can assume p_n→ p and P_γ_n→ P. Then: * If p∈^3, then P=P_γ is a future (resp. past) support plane of (f), for γ_n→γ∈(2,). * If p∈∂_∞^3, then either P is a lightlike plane whose boundary is the union of two circles meeting at p, or the conclusion of the previous point holds. The proof is straightforward, having developed all the necessary elements above. It is clear that we can extract converging subsequences from p_n and P_γ_n, by compactness of (f) and of the space of planes in projective space. Also, the limit of the sequence of support planes P_γ_n at p_n is a support plane P at p, since both conditions that p_n∈(f) and that (f) is contained in a closed half-space bounded by P_γ_n are closed conditions. By Proposition <ref>, if the limit p is in ^3, then P is a spacelike support plane, which is of course future (resp. past) if all the P_γ_n are future (resp. past). 
This situation can also occur analogously if p∈∂_∞^3; the other possibility being that P is lightlike, and in this case the proof of Proposition <ref> shows that P=P_[A] if p is represented by the projective class of the rank–one matrix A. Let f:ℝℙ^1→ℝℙ^1 be an orientation-preserving homeomorphism which is not in (2,). Then ∂(f) is the disjoint union of (f)=(f)∩∂_∞^3 and of two topological discs, of which one only admits future support plane, and the other only admits past support planes. It is a basic fact in convex analysis that ∂(f) is homeomorphic to 𝕊^2; by Proposition <ref>, its intersection with ∂_∞^3 equals (f) and is therefore a simple closed curve. By the Jordan curve theorem, the complement of (f) is the disjoint union of two topological discs, each of which is contained in ^3 again by Proposition <ref>. By Lemma <ref>, the set of points p∈∂(f) admitting a future support plane is closed. But it is also open because its complement is the set of points admitting a past support plane, for which the same argument applies. Hence each connected component of the complement of (f) admits only future support planes, or only past support planes. Finally, (f) necessarily admits both a past and a future support plane, otherwise it would not be compact in an affine chart. This concludes the proof. By virtue of Corollary <ref>, we will call the connected component of ∂(f)∖(f) that only admits future support planes the future boundary component, and denote it by ∂_+(f); similarly, the connected component that only admits past support planes is the past boundary component, denoted by ∂_-(f). §.§ Left and right projections We are now ready to introduce the left and right projections, which will play a central role in the proof of the earthquake theorem. These are maps π_l^±:∂_±(f)→^2 π_r^±:∂_±(f)→^2 defined on the future or past components of ∂(f), constructed as follows. Given a point p∈∂_±(f), let P be a support plane of (f) at p. By Proposition <ref>, the support plane is necessarily spacelike, hence of the form P=P_γ for some γ∈(2,). It is important to remark here that P_γ might not be unique, if ∂_±(f) is not C^1 at p. Hence we choose a support plane P_γ at p. Moreover we require that the choice of support planes is made so that the support plane is constant on any connected component of the subset of ∂_±(f) consisting of those points that admit more than one support plane. The definition of the projections then depends (although quite mildly, see Corollary <ref> below) on the choice of P_γ. Now, having chosen the support plane P_γ at p, left or right multiplication by γ^-1 maps γ to , and therefore maps P_γ to P_, which we recall from Example <ref> is the space of order–two elliptic elements and is therefore naturally identified with ^2 via the map :P_→^2. Denote by L_γ^-1:(2,)→(2,) and R_γ^-1:(2,)→(2,) the left and right multiplications by γ^-1; in other words, there are the actions of the elements (γ,) and (,γ^-1) of (2,)×(2,). By what we said above, L_γ^-1(p) and R_γ^-1(p) are elements of P_, since p∈ P_γ, and L_γ^-1(p) (resp. R_γ^-1(p)) maps bijectively P_γ to P_. We can finally define: π_l^±(p)=(R_γ^-1(p)) π_r^±(p)=(L_γ^-1(p)) . It might seem counterintuitive to define the left projection using right multiplication, and vice versa. However, this is the most natural choice by virtue of the property of Lemma <ref> below. 
Another reason to justify this choice is that these projections can be naturally seen as the left and right components of the Gauss map of spacelike surfaces in ^3 with values in the space of timelike geodesics of ^3, which is naturally identified with ^2×^2, see <cit.> for more details and for several other equivalent definitions. Let f:ℝℙ^1→ℝℙ^1 be an orientation-preserving homeomorphism, and let (α,β)∈(2,)×(2,). Let us denote K=(f) and K̂=(α,β)·(f) and let π_l^±,π_r^±:∂_± K→^2 and π̂_l^±,π̂_r^±:∂_±K̂→^2 be the left and right projections of K and K̂ respectively. Then π̂_l^±∘ (α,β)=α∘π_l^± π̂_r^±∘ (α,β)=β∘π_r^± . To clarify the statement, let us remark that the isometry (α,β) maps a point p∈ K to a point p̂∈K̂, and maps support planes at p∈ K to support planes at p̂. Hence the relation (<ref>) holds when we consider the projections π̂_l^± and π̂_r^± defined with the choice of support planes of K̂ given by the images P̂ of the support planes P chosen in the definitions of π_l^± and π_r^±. As remarked above, for any p∈∂^± K, we have p̂:=(α,β)· p∈K̂, and for a chosen support plane P=P_γ for K at p, (α,β)· P=P_γ̂ is the chosen support plane for K̂ at p̂. By the duality, γ̂=(α,β)·γ=αγβ^-1. Hence we have: π̂_l^±(p̂) =(R_γ̂^-1(p̂))=(R_(βγ^-1α^-1)(α pβ^-1)) =(R_(γ^-1α^-1)(α p))= (α∘ R_γ^-1( p)∘α^-1) =α((R_γ^-1( p)))=απ_l^±(p) . The computation is completely analogous for the right projection. The simplest example that we can consider is the situation where f=σ∈(2,), so that (f)=P_σ^-1 as in Example <ref>. This case is somehow degenerate, because (σ) has empty interior, hence Corollary <ref> does not hold and it does not quite make sense to talk about the future and past components of the boundary. However, we can still define the left and right projections. Since P_σ^-1 itself is the unique support plane at any of its points, from (<ref>) we have the following simple expressions for the left and right projections π_l,π_r:P_σ^-1→^2. π_l(p)=(p∘σ) π_r(p)=(σ∘ p) . Observe that π_l and π_r extend to the boundary of P_σ^-1: recalling that the boundary of P_σ^-1 is the graph of σ (Lemma <ref>), we have π_l(x,σ(x))=x π_r(x,σ(x))=σ(x) . Equation (<ref>) is indeed immediately checked when σ=, because in that case π_l and π_r coincide with the fixed point map :P_→^2, and we have already observed in Example <ref>, using (<ref>), that extends to the map (x,x)↦ x from ∂_∞ P_ to ∂_∞^2. The general case of Equation (<ref>) then follows from Equations (<ref>) and (<ref>), that is, by observing that the isometry (,σ) maps () to (σ) and P_ to P_σ^-1. Finally, we can compute the map of ^2 obtained by composing the inverse of the left projection with the right projection. Indeed, this is induced by the map P_→ P_ sending an order–two elliptic element ℛ=p∘σ∈ P_ to σ∘ p=σ∘ℛ∘σ^-1. Hence we have π_r∘π_l^-1=σ:^2→^2 . In conclusion, the composition π_r∘π_l^-1 is an isometry and its extension to ∂_∞^2 is precisely the map f=σ of which ∂_∞ P_σ^-1 is the graph. In the next sections we will see that this fact is extremely general, that is, for any orientation-preserving homeomorphism of the circle f, the compositions π_r^±∘(π_l^±)^-1 associated with ∂_±(f) will be the left and right earthquake maps extending f. § THE CASE OF TWO SPACELIKE PLANES Before moving to the proof of Thurston's earthquake theorem, we will now consider another very concrete example, which is only slightly more complicated than Example <ref>. 
Nevertheless, we will see that this example represents a very general situation, and its comprehension is the essential step towards the proof of the full theorem. §.§ The fundamental example The idea here is to consider piecewise totally geodesic surfaces in ^3, which are obtained as the union of two connected subsets, each contained in a totally geodesic spacelike plane, meeting along a common geodesic. See Figure <ref>. To formalize this idea, we will consider the union of two half-planes, each contained in a totally geodesic spacelike plane P_γ_1 or P_γ_2. The first important observation is the following. Let γ_1≠γ_2∈(2,). Then P_γ_1 and P_γ_2 intersect in ^3 if and only if γ_2∘γ_1^-1 is a hyperbolic isometry. Since P_γ_i is the convex envelope of ∂_∞ P_γ_i=(γ_i^-1) (Example <ref>), the closures P_γ_1 and P_γ_2 intersect in ^3 if and only if (γ_1^-1)∩(γ_2^-1)≠∅. Moreover, by (<ref>), P_γ_1 and P_γ_2 intersect in ^3 if and only (γ_1^-1)∩(γ_2^-1) contains at least two different points. Now, (x,y)∈^1×^1 is in (γ_1^-1)∩(γ_2^-1) if and only if y=γ_1^-1(x)=γ_2^-1(x), which is equivalent to the condition that x is a fixed point of γ_2∘γ_1^-1. But γ_2∘γ_1^-1 is an element of (2,), hence it has two fixed points in ^1 if and only if it is a hyperbolic isometry. Now, let I_1 and I_2 be two closed intervals in ^1 such that ^1=I_1∪ I_2 and I_1∩ I_2 consists precisely of the two fixed points of γ_2∘γ_1^-1. Clearly there are two possibilities to produce a homeomorphism of ^1 by combining the restrictions of γ_1^-1 and γ_2^-1 to the intervals I_j's, that is: f^+_γ_1,γ_2(x)=γ_1^-1 if x∈ I_1 γ_2^-1 if x∈ I_2 and f^-_γ_1,γ_2(x)=γ_2^-1 if x∈ I_1 γ_1^-1 if x∈ I_2  . One easily checks that f^±_γ_1,γ_2 actually are orientation-preserving homeomorphisms, since γ_1^-1 and γ_2^-1 map homeomorphically the intervals I_1 and I_2 to the same intervals J_1:=γ_1^-1(I_1)=γ_2^-1(I_1) and J_2:=γ_1^-1(I_2)=γ_2^-1(I_2), which intersect only at their endpoints. Let us also denote by D_i the convex hull of I_i in ^2, and by ℓ=D_1∩ D_2 the axis of γ_2∘γ_1^-1. Suppose that γ_2∘γ_1^-1 is a hyperbolic isometry that translates along ℓ to the left, as seen from D_1 to D_2. Then: * The future boundary component ∂_+(f^+_γ_1,γ_2) coincides with the union of the convex envelope of (γ_1^-1|_I_1) and of the convex envelope of (γ_2^-1|_I_2). * The past boundary component ∂_-(f^-_γ_1,γ_2) is the union of the convex envelope of (γ_1^-1|_I_2) and of the convex envelope of (γ_2^-1|_I_1). If instead γ_2∘γ_1^-1 translates along ℓ to the right as seen from D_1 to D_2, then: * The past boundary component ∂_-(f^+_γ_1,γ_2) coincides with the union of the convex envelope of (γ_1^-1|_I_1) and of the convex envelope of (γ_2^-1|_I_2). * The future boundary component ∂_+(f^-_γ_1,γ_2) is the union of the convex envelope of (γ_1^-1|_I_2) and of the convex envelope of (γ_2^-1|_I_1). Let us consider the case where γ_2∘γ_1^-1 translates to the left along ℓ, and let us prove the first item. Let x,x' be the fixed points of γ_2∘γ_1^-1, and let y=γ_1^-1(x)=γ_2^-1(x) and y'=γ_1^-1(x')=γ_2^-1(x'). Then the convex envelope of (γ_i^-1|_I_i) is a half-plane A_i in P_γ_i bounded by the geodesic P_γ_1∩ P_γ_2, which has endpoints (x,y) and (x',y'). Clearly both the convex envelope of (γ_1^-1|_I_1) and the convex envelope of (γ_2^-1|_I_2) are contained in (f^+_γ_1,γ_2). Nevertheless, we can be more precise. We claim that P_γ_1 and P_γ_2 are future support planes for (f^+_γ_1,γ_2). 
This claim implies that the union of A_1 and A_2 is contained in the future boundary component ∂_+(f^+_γ_1,γ_2), because every point p∈ A_1∪ A_2 admits a future support plane through p, which is either P_γ_1 or P_γ_2. However A_1∪ A_2 is a topological disc in ∂_+(f^+_γ_1,γ_2), whose boundary is precisely the curve (f^+_γ_1,γ_2) by construction. Hence the claim will imply that A_1∪ A_2=∂_+(f^+_γ_1,γ_2). We prove the claim for P_γ_1, the proof for P_γ_2 being completely analogous. it is convenient to assume that γ_1= and γ_2=γ is a hyperbolic isometry with fixed points x and x', translating to the left seen from D_1 to D_2. Indeed, we can apply the isometry (,γ_1), which sends P_γ_1 to P_, P_γ_2 to P_γ_2γ_1^-1, and (by (<ref>)) (f^+_γ_1,γ_2) to (f^+_,γ_2γ_1^-1). Having made this assumption, consider a path σ_t, for t∈ [0,ϵ) of elliptic elements fixing a given point z_0∈^2, that rotate clockwise by an angle t. As in the proof of Lemma <ref>, the planes P_σ_t are pairwise disjoint in ^3, because σ_t_2∘σ_t_1^-1 is still an elliptic element fixing z_0 for t_1≠ t_2, hence it has no fixed point in ^1. Moreover, observe that γ^-1 is an isometry fixing ℓ and translates along ℓ on the right as seen from D_1 to D_2. Since f^+_,γ equals the identity on I_1 and γ^-1 on I_2, it fixes I_1 pointwise, and moves points of I_2 clockwise. In particular, the equation f^+_,γ(x)=σ_t^-1(x) has no solutions for t>0, because σ_t^-1=σ_-t moves all points counterclockwise if t is positive. This shows that P_σ_t∩(f^+_,γ)=∅ for t>0, and thus P_ is a support plane for (f^+_,γ) by Remark <ref>. Moreover it is a future support plane: indeed one can check (for instance using (<ref>)) that σ_t+π/2=ℛ_z_0∘σ_t∈ P_σ_t, and the path t↦σ_t is future-directed because, from the discussion after (<ref>), its tangent vector is future-directed, hence (f^+_,γ) is locally in the past of P_. This concludes the proof of the first point. The other cases are completely analogous. See also Figure <ref> to visualize the different configurations. The following is an important consequence of the proof of Proposition <ref>. Suppose that γ_2∘γ_1^-1 is a hyperbolic isometry that translates along ℓ to the left (resp. right), as seen from D_1 to D_2, and write γ_2∘γ_1^-1=exp(𝔞) for 𝔞∈(2,). Let p be a point in the future (resp. past) boundary component of (f^+_γ_1,γ_2). Then: * If p∈int(A_1), then P_γ_1 is the unique support plane of (f^+_γ_1,γ_2) at p. * If p∈int(A_2), then P_γ_2 is the unique support plane of (f^+_γ_1,γ_2) at p. * If p∈ A_1∩ A_2=P_γ_1∩ P_γ_2, then the support planes of (f^+_γ_1,γ_2) at p are precisely those of the form P_σγ_1 where σ=exp(t𝔞) for t∈ [0,1]. Recall the notation from the proof of Proposition <ref>: A_i⊂ P_γ_i is the convex envelope of (γ_i^-1|_I_i), which is a half-plane bounded by the geodesic P_γ_1∩ P_γ_2. Of course we could provide an analogous statement for (f^-_γ_1,γ_2), but we restrict to f^+_γ_1,γ_2 for simplicity. From Proposition <ref>, the pleated surface which is obtained as the union of A_1⊂ P_γ_1 and A_2⊂ P_γ_2 coincides with ∂_+(f^+_γ_1,γ_2) if γ_2∘γ_1^-1 is a hyperbolic isometry that translates along ℓ to the left, and with ∂_-(f^+_γ_1,γ_2) if it translates to the right, by Proposition <ref>. The first two items are then obvious, since P_γ_i are smooth surfaces, hence A_i is smooth at any interior point, and therefore has a unique support plane there. The last item can be proved in the same spirit as Proposition <ref>. 
First, we can assume γ_1= and γ_2=γ is a hyperbolic isometry translating on the left (resp. right) along ℓ. By (<ref>), if P_σ is a support plane at p, then p is in the convex hull of the pairs (y,σ^-1(y)) where y satisfies σ^-1(y)=f^±_,γ(y). The only possibility is then that p lies in the geodesic connecting the points (x,x) and (x',x') in ^1×^1, where x and x' are the fixed points of γ. Hence σ must have the same fixed points of γ. That is, σ is a hyperbolic isometry with axis ℓ (or the identity). Moreover, by an analogous argument as in Proposition <ref>, P_σ is in the future (resp. past) of (f^+_γ_1,γ_2) if and only if σ translates on the left (resp. right), and its translation length is less than that of γ. Hence σ is of the form exp(t𝔞) for t∈ [0,1]. §.§ Simple earthquake We can now conclude the study of orientation-preserving homeomorphisms obtained by combining two elements of (2,). The following proposition shows that in that situation, the composition of the projections π_l^± and π_r^± provide the earthquake map as in Example <ref>. This is not interesting in its own, since we recover a simple earthquake map which we had already defined explicitly. However, the following proposition will be an important tool to complete the proof of the earthquake theorem in Section <ref>. Let γ_1,γ_2∈(2,) be such that γ_2∘γ_1^-1 is a hyperbolic isometry, and let π_l^±,π_r^± be the projections associated with the convex envelope of f^+_γ_1,γ_2. Then: * π_l^±,π_r^±:∂_±(f^+_γ_1,γ_2)→^2 are bijections; * Assume γ_2∘γ_1^-1 translates along ℓ to the right (resp. left), as seen from D_1 to D_2. Then the composition π_r^-∘ (π^-_l)^-1:^2→^2 (resp. π_r^+∘ (π^+_l)^-1:^2→^2) is a left (resp. right) earthquake map extending f^+_γ_1,γ_2. Again, we considered the case of f^+_γ_1,γ_2 for the sake of simplicity, but one could give an analogous statement for f^-_γ_1,γ_2. Moreover, we remark that Proposition <ref> holds for any choice of support planes that is needed to define the projections. For the first point, recall that A_i⊂ P_γ_i, and that the union A_1∪ A_2 is the past (resp. future) boundary component of (f^+_γ_1,γ_2) if γ_2∘γ_1^-1 translates along ℓ to the right (resp. left). Hence (π_l^±)_int(A_i) and (π_r^±)|_int(A_i) are the restrictions of the projections associated with the totally geodesic plane P_γ_i, which are described in Example <ref>. In particular, (π_l^±)_int(A_i) and (π_r^±)|_int(A_i) are the restrictions to int(A_i) of global isometries of ^3 (defined by multiplication on the left or on the right by γ_i^-1) sending P_γ_i to P_, post-composed with the usual isometry :P_→^2. As a consequence, (π_l^±)_int(A_i) and (π_r^±)|_int(A_i) map geodesics of P_γ_i to geodesics of ^2. Moreover, by Equation (<ref>), π_l^± maps int(∂_∞ (A_i))=(γ_i^-1|_int(I_i)) to int(I_i). Hence π_l^±(int(A_i))=int(D_i). Analogously, π_r^±(int(A_i))=γ_1^-1(int(D_i))=γ_2^-1(int(D_i)). To see that π_l^± and π_r^± are bijective, it remains to show that the image of the geodesic A_1∩ A_2=P_γ_1∩ P_γ_2 via π_l^± is the geodesic ℓ=D_1∩ D_2, while the image via π_r^± is the geodesic γ_1^-1(ℓ)=γ_2^-1(ℓ). The definition of π_l^± and π_r^± on A_1∩ A_2 actually depends on the choice of a support plane. Recall that we must choose the same support plane at any point p∈ A_1∩ A_2. From Corollary <ref>, the possible choices of support planes at p are of the form P_σγ_1, where σ has the same fixed points as γ_2∘γ_1^-1, which are precisely the common endpoints of I_1 and I_2. 
Using the notation from Lemma <ref>, we thus see that the endpoints at infinity of A_1∩ A_2 are the points (x,y) and (x',y') where x,x' are the fixed points of γ_2∘γ_1^-1 (and of σ). Hence from Equation (<ref>) we have (for any choice of σ as in the third item of Corollary <ref>) π_l^±(x,y)=x and π_l^±(x',y')=x'. Since π_l^± is, as before, the restriction of an isometry between P_σγ_1 and ^2, it maps geodesics to geodesics, hence π_l^±(A_1∩ A_2)=ℓ. Analogously, π_r^±(x,y)=y and π_r^±(x',y')=y', from which it follows that π_l^±(A_1∩ A_2)=γ_1^-1(ℓ)=γ_2^-1(ℓ). This concludes the proof of the first item. For the second item, define E:=π_r^-∘ (π^-_l)^-1, which is a bijection of ^2. Consider the geodesic lamination of ^2 which is composed by the sole geodesic ℓ. Hence the strata of ℓ are: int(D_1),int(D_2) and ℓ. We will show that the comparison isometries Comp(S,S'):=(E|_S)^-1∘ E|_S' all translate to the right or to the left seen from one stratum to another, according to as γ_2∘γ_1^-1 translates to the left or to the right seen from D_1 to D_2. Let us first consider S=int(D_1) and S'=int(D_2). Then by Example <ref> (see in particular Equation (<ref>)) E equals γ_i^-1 on int(D_i), because (π_l^±)^-1(int(D_i))=int(A_i)⊂ P_γ_i^-1. Hence the comparison isometry Comp(int(D_1),int(D_2)) equals γ_1∘γ_2^-1, and it translates to the left (resp. right) seen from int(D_1) to int(D_2) exactly when γ_2∘γ_1^-1, which is its inverse, translates to the right (resp. left). The proof when one of the two strata S or S' is ℓ is completely analogous, by using the third item of Corollary <ref>. Indeed (recalling Remark <ref>), by any choice of σ of the form σ=exp(t𝔞) with t∈ (0,1), Comp(ℓ,int(D_2))=σ∘γ_2^-1 translates to the left (resp. right) seen from ℓ to int(D_2), and Comp(int(D_1),ℓ)=γ_1∘σ^-1 translates to the left (resp. right) seen from int(D_1) to ℓ. If instead σ=exp(t𝔞) with t∈{0,1}, then σ coincides either with γ_1 or with γ_2, which means that one of the comparison isometries Comp(int(D_1),ℓ) and Comp(ℓ,int(D_2)) translates to the left, and the other is the identity, which is still allowed in the definition of earthquake because ℓ is in the boundary of int(D_i). §.§ The example is prototypical The case of simple earthquakes that we have considered above may appear as very special. However, it turns out that it is the prototypical example, that will serve to treat the general case in the proof of the earthquake theorem. The following lemma shows that the situation of two intersecting planes occurs quite often. Let f:ℝℙ^1→ℝℙ^1 be an orientation-preserving homeomorphism which is not in (2,). Then: * Any two future support planes of (f) at points of ∂_+(f) intersect in ^3. Analogously, any two past support planes of (f) at points of ∂_-(f) intersect in ^3. * Given a point p∈∂_±(f), if there exist two support planes at p, then their intersection (which is a spacelike geodesic) is contained in ∂_±(f). As a consequence, any other support plane at p contains this spacelike geodesic. Let us consider future support planes, the other case being analogous. For the first item, let P and Q be support planes intersecting ∂_+(f), which are spacelike by Proposition <ref>, and suppose by contradiction P and Q that they are disjoint. We can slightly move them in the future to get spacelike planes P' and Q' such that P, Q, P' and Q' are mutually disjoint and P'∩∂_+ (f)=Q'∩∂_+(f)=∅. 
(For instance, if P=P_γ_1 and Q=P_γ_2, then we can use Lemma <ref> and consider P'=P_σγ_1 and Q'=P_σγ_1 for σ an elliptic element of small clockwise angle of rotation.) Now, observe that ^3∖ (P'∪ Q') is the disjoint union of two cylinders and P and Q lie in different connected components of this complement. See Figure <ref>. However, ∂_+(f) is connected and has empty intersection with P and Q, leading to a contradiction. For the second item, let P=P_γ_1 and Q=P_γ_2 be support planes such that p∈∂_+(f)∩ P∩ Q. By Lemma <ref>, γ_2∘γ_1^-1 is hyperbolic. Up to switching the roles of γ_1 and γ_2, we can assume that γ_2∘γ_1^-1 translates to the left seen from D_1 to D_2, where as usual D_i is the convex envelope of the interval I_i, and the common endpoints x,x' of I_1 and I_2 are the fixed points of γ_2∘γ_1^-1. Hence ∂_∞ P_γ_1∩∂_∞ P_γ_2={(x,y),(x',y')} where y=γ_1^-1(x)=γ_2^-1(x) and y'=γ_1^-1(x')=γ_2^-1(x'). Now, by (<ref>), P_γ_i∩(f) consists of at least two points for i=1,2. We claim that (f)∩ P_γ_i contains at least (x,y) and (x',y'). Indeed, since P_γ_2 is a support plane, (f)∩ P_γ_1 is contained in the half-plane A_1⊂ P_γ_1. If (f)∩ P_γ_1 had not contained (x,y) and (x',y'), then (f)∩ P_γ_1 would not contain the boundary geodesic A_1∩ A_2, and thus would not contain p. The same argument applies for P_γ_2. This shows that both (x,y) and (x',y') are in (f), and therefore the spacelike geodesic P_γ_1∩ P_γ_2 is in ∂_±(f). In the first item of Lemma <ref>, the hypothesis that P and Q are support planes at points of ∂^±(f) (hence not at points of (f)⊂∂_∞^3) is necessary. Recall that by Proposition <ref> support planes of (f) are either spacelike or lightlike, and they are necessarily spacelike if they intersect (f) at points of ∂_±(f). Now, if one of the two planes P or Q is a support plane at a point of (f), then the proof only shows that P and Q must intersect in ^3, but not necessarily in the interior. It can perfectly happen that two future (or two past) support planes (one of which possibly lightlike) at a point (x,f(x)) of (f) intersect at (x,f(x)) but not in the interior of ^3. Lemma <ref> has an important consequence. Recall that the definition of the projections π_l^±,π_r^±:∂_±(f)→^2 depends on the choice of a support plane at all points p that admit more than one support plane. Moreover, we require that this support plane is chosen to be constant on any connected component of the subset of ∂_±(f) consisting of those points that admit more than one support plane (Remark <ref>). We will now see that, roughly speaking, their image does not depend on this choice of support plane. Let f:ℝℙ^1→ℝℙ^1 be an orientation-preserving homeomorphism which is not in (2,), and suppose p∈∂_±(f) has at least two support planes. Then there exist γ_1,γ_2∈(2,) with γ_2∘γ_1^-1=exp(𝔞) a hyperbolic element, such that all support planes at p are precisely those of the form P_σγ_1 where σ=exp(t𝔞) for t∈ [0,1]. The same conclusion holds for all other points p'∈ P_γ_1∩ P_γ_2. In particular, the image of the spacelike geodesic P_γ_1∩ P_γ_2 under the projections π_l^±,π_r^±:∂_±(f)→^2 is a geodesic in ^2 that does not depend on the choice of the support plane as in the definition of π_l^± and π_r^±. Suppose P_γ̂_1 and P_γ̂_2 are (say, future) distinct support planes at p. Write γ̂_2∘γ̂_1^-1=exp(𝔞̂), which is a hyperbolic element by Lemma <ref> and the first item of Lemma <ref>. 
By the second item of Lemma <ref>, any other support plane at p must be of the form P_σγ̂_1 for σ an element having the same fixed points as γ̂_2∘γ̂_1^-1. That is, σ is of the form exp(s𝔞̂) for some s∈. We claim that the set I={s∈ | exp(s𝔞̂) is a support plane of (f) at p} is a compact interval. This will conclude the proof, up to applying an affine change of variable mapping the interval I=[s_1,s_2] to [0,1], and defining γ_i=exp(s_i𝔞̂). To prove the claim, suppose that s,s'∈ I. Then (f) is contained in the past of a pleated surface obtained as the union of two half-spaces, one contained in P_exp(s𝔞̂)γ̂_1 and the other in P_exp(s'𝔞̂)γ̂_1, meeting along the spacelike geodesic P_γ̂_1∩ P_γ̂_2. Then every support plane for this pleated surface is a support plane for (f) as well. That is, by the last item of Corollary <ref>, [s,s']⊂ I. This shows that I is an interval. It is compact by Lemma <ref>, applied to the constant sequence p_n=p and to γ_n=exp(s_n𝔞̂)γ̂_1, showing that s_n must be converging (up to subsequences) and its limit is in I. This concludes the proof. § PROOF OF THE EARTHQUAKE THEOREM We are now ready to enter into the details of the proof of the earthquake theorem. The outline of the proof is now clear: given an orientation-preserving homeomorphism f:ℝℙ^1→ℝℙ^1 (which we can assume is not in (2,)), we consider the projections π_l^±,π_r^±:∂_±(f)→^2, and we want to show that the composition π_r^±∘ (π_l^±)^-1 is well-defined and is a (left or right) earthquake map extending f. We will prove this in several steps: the proof of Theorem <ref> will follow from Proposition <ref>, Corollary <ref> and Proposition <ref> below. §.§ Extension to the boundary The first property we study is the extension of the projections π_l^± and π_r^± to the boundary. The projections π_l^±,π_r^±:∂_±(f)→^2 extend to (f). More precisely, if p_n∈∂_±(f)→(x,y)∈(f), then π_l^±(p_n)→ x and π_r^±(p_n)→ y. Observe that the conclusion of Proposition <ref> holds for any choice of the projections π_l^± and π_r^±, regardless of the chosen support planes when several choices are possible, as in Remark <ref>. The proof involves two well-known properties of isometries in plane hyperbolic geometry; for the sake of completeness, we provide elementary, self-contained proofs in the Appendix. Let p_n∈∂_±(f) be a sequence converging to (x,y)∈(f), and let P_γ_n be a sequence of support planes of (f) at p_n, which are necessarily spacelike by Proposition <ref>. By Lemma <ref>, up to extracting a subsequence, there are two possibilities: either γ_n→γ and P_γ_n converges to the spacelike support plane P_γ, or γ_n diverges in (2,) and P_γ_n converges to the lightlike plane whose boundary is ({x}×^1)∪(^1∪{y}). We will treat these two situations separately, and we will always use the characterization of the convergence to the boundary given in (<ref>). Consider the former case, namely when γ_n→γ. We have by hypothesis that p_n(z_0)→ x and p_n^-1(z_0)→ y , for any point z_0∈^2. Observe moreover that, from the definition of the projections, π_l^±(p_n)=(p_nγ_n^-1) and π_r^±(p_n)=(γ_n^-1p_n) . Recalling (see (<ref>)) that the boundary of P_ is identified with ^1 via the map (x,x)↦ x, we thus have to show (choosing for instance the point z_0=i) that: p_nγ_n^-1(i)→ x and γ_n^-1 p_n(i)→ y. However, since γ_n→γ, p_nγ_n^-1(i) is at bounded distance from p_nγ^-1(i). Applying the hypothesis (<ref>) to z_0=γ^-1(i), we have p_nγ^-1(i)→ x and therefore p_nγ_n^-1(i)→ x. 
The argument is analogous to show that γ_n^-1 p_n(i)→ y, except that it is useful to observe that γ_n^-1 p_n=p_n^-1γ_n since it is an order–two isometry. Now p_n^-1γ_n(i) is at bounded distance from p_n^-1γ(i), which converges to y by hypothesis. Hence p_n^-1γ_n(i)→ y and the proof is complete for this case. Let us move on to the latter case, that is, γ_n diverges in (2,). Here we must use not only the previous assumption (<ref>), but also the following: γ_n(z_0)→ x and γ_n^-1(z_0)→ y , for any z_0∈^2. The condition (<ref>) holds because γ_n converges to the projective class of a rank one matrix A, such that P_[A] is a lightlike support plane; we have already observed that the boundary at infinity of P_[A] must be equal to ({x}×^1)∪(^1∪{y}). Combining (<ref>), (<ref>) and Lemma <ref>, we deduce that γ_n(z_0)→ x and γ_n^-1(z_0)→ y as claimed. Having made this preliminary observation, now we can rewrite (<ref>) as the identities: p_n= ℛ_π_l^±(p_n)∘γ_n and p_n^-1= ℛ_π_r^±(p_n)∘γ_n^-1 , where we recall that ℛ_w denotes the order two elliptic isometry with fixed point w∈^2. Up to extracting a subsequence, we can assume that π_l^±(p_n)→x̂_± and π_r^±(p_n)→ŷ_±, for some points x̂_±,ŷ_±∈^2∪∂_∞^2. We need to show that x̂_±=x and ŷ_±=y. For this purpose, suppose by contradiction x̂_±≠ x. Suppose first that x̂_±∈^2. We will use the fact (Lemma <ref> in the Appendix) that if w_n→ w∈^2, then ℛ_w_n converges to ℛ_w uniformly on ^2∪∂_∞^2. From (<ref>), and the fact that, from (<ref>) and (<ref>), both p_n(z_0) and γ_n(z_0) converge to x, we would then have x=lim_n p_n(z_0)=lim_n (ℛ_π_l^±(p_n)(γ_n(z_0)))=ℛ_x̂_±(x)≠ x since ℛ_x̂_± does not have fixed points on ∂_∞^2, thus giving a contradiction. If ŷ_±∈^2, we get a contradiction by an analogous argument. Finally, if x̂_±∈∂_∞^2, we can find a neighbourhood U of x̂_± not containing x, such that for n large ℛ_π_l^±(p_n) maps the complement of U inside U (see Lemma <ref> in the Appendix). This gives a contradiction with (<ref>) because p_n(z_0) and γ_n(z_0) are in the complement of U for n large, but at the same time ℛ_π_l^±(p_n)(γ_n(z_0)) should be in U for n large. The argument for ŷ_± is completely analogous. We remark that the proof of Proposition <ref> does not use the full hypothesis that the surface on which the projections are defined is a boundary component of (f), but only the property that whenever a sequence P_γ_n of spacelike support planes converges to a lightlike plane, then this limit is a support plane too, which is true for any convex surface. §.§ Invertibility of the projections The next step in the proof is to show that the projections π_l^± and π_r^± are bijective. The projections π_l^±,π_r^±:∂_±(f)→^2 are bijective. We give the proof for π_l^±, the proof for π_r^± being completely identical. Let us first show that π_l^± and π_r^± are injective. Given p_1,p_2∈∂_±(f), let P_γ_1 and P_γ_2 be the support planes at p_1 and p_2 respectively. (If there are several support planes, we choose one, as in the definition of π_l^± and π_r^± — see Remark <ref>.) By Lemma <ref> and Lemma <ref>, γ_2∘γ_1^-1 is a hyperbolic isometry; let D_1 and D_2 be the convex envelopes in ^2 of the two intervals I_1 and I_2 with endpoints the fixed points of γ_2∘γ_1^-1. Up to switching γ_1 and γ_2, we can moreover assume that γ_2∘γ_1^-1 translates to the left seen from D_1 to D_2. Now, we will use the example studied in Section <ref>. Let f^+_γ_1,γ_2 be defined as in (<ref>). 
By Corollary <ref>, P_γ_i is the support plane of (f^+_γ_1,γ_2) at the point p_i∈∂_±(f^+_γ_1,γ_2), for i=1,2. Hence π_l^±(p_i)=π̂_l^±(p_i), where π̂_l^± is the left projection associated with (f^+_γ_1,γ_2). Since π̂_l^±(p_i) is bijective by Proposition <ref>, π_l^±(p_1)≠π_l^±(p_2). This shows the injectivity. To prove the surjectivity, we first show that the image is closed. Suppose z_n=π_l^±(p_n) is a sequence in the image, with lim z_n=z∈^2. Up to extracting a subsequence, we can assume p_n→ p∈∂_±(f)∪(f). From Proposition <ref>, we have that p∈∂_±(f), because if p=(x,y)∈(f), then π_l^±(p_n)→ x∈∂_∞^2, thus contradicting the hypothesis z_n→ z∈^2. Now, let P_γ_n be a support plane at p_n, which is spacelike by Proposition <ref>. By Lemma <ref>, up to extracting a subsequence, γ_n→γ∈(2,) and P_γ is a spacelike support plane at p. It is important to remark that ∂_±(f) might admit several support planes at p, and P_γ might not be the support plane that has been chosen in the definition of π_l^±; however, by Corollary <ref> the image does not depend on this choice. Hence we can assume that P_γ is the support plane chosen at p. That is, from (<ref>), π_l^±(p)=(p∘γ^-1). We can now conclude that z is in the image of π_l^±: on the one hand z_n=π_l^±(p_n)=(p_n∘γ_n^-1) converges to z by hypothesis, and on the other it converges to π_l^±(p)=(p∘γ^-1) because p_n→ p, γ_n→γ and is continuous. This shows that z∈π_l^±(∂_±(f)), and therefore the image is closed. We now proceed to show that π_l^± is surjective. Suppose by contradiction that there is a point w∈^2 which is not in the image of π_l^±. Let r_0=inf{r | B(w,r)∩π_l^±(∂_±(f))≠∅}, where B(w,r) is the open ball centered at w of radius r with respect to the hyperbolic metric of ^2. Since the image of π_l^± is closed, we have that r_0>0, B(w,r_0) is disjoint from the image of π_l^±, and there exists a point z∈∂ B(w,r_0) which is in the image of π_l^±. Say that z=π_l^±(p). We will obtain a contradiction by finding points close to p which are mapped by π_l^± inside B(w,r_0). Let P_γ be a support plane of (f) at p. By (<ref>), P_γ∩(f) is the convex hull of ∂_∞ P_γ∩(f), which contains at least two points. If p is in the interior of P_γ∩(f) (which is non-empty if and only if ∂_∞ P_γ∩(f) contains at least three points), then the restriction of π_l^± to the interior of P_γ∩(f) is an isometry onto its image in ^2, because P_γ is the unique support plane at interior points p', and π_l^±(p')=(p'∘γ^-1). Hence π_l^± maps a small neighbourhood of p to a neighbourhood of z, which intersects B(w,r_0), giving a contradiction. We are only left with the case where p is not in the interior of P_γ∩(f). In this case, there is a geodesic L contained in P_γ∩(f) such that p∈ L. (The geodesic L might be equal to P_γ∩(f) or not.) As before, the image of L is a geodesic ℓ in ^2 because (π_l^±)|_L is an isometry onto its image, and z∈ℓ. We claim that in the image of π_l^± there are two sequences of geodesics ℓ_n⊂Im(π_l^±) converging to ℓ (in other words, such that the endpoints of ℓ_n converge to the endpoint of ℓ); moreover the two sequences are contained in different connected components of ^2∖ℓ. This will give a contradiction, because for one of these two sequences, ℓ_n must intersect B(w,r_0) for n large. To show the claim, and thus conclude the proof, observe that L disconnects ∂_±(f) in two connected components, and let p_n be a sequence converging to p contained in one connected component of ∂_±(f)∖ L. 
Let P_γ_n be the support plane for (f) at p_n which has been chosen to define π_l^±. By Lemma <ref>, P_γ_n converges to a support plane P_γ at p, which as before we can assume is the support plane that defined π_l^± at p, since the image does not depend on this choice by Corollary <ref>. Also, we can assume that each p_n is contained in a geodesic L_n in P_γ_n∩∂_±(f): indeed, it suffices to replace p_n by the point in P_γ_n∩∂_±(f) which is closest to p (where closest is with respect to the induced metric on ∂_±(f), or to any auxiliary Riemannian metric). If P_γ_n∩∂_±(f) is not already a geodesic, with this assumption p_n now belongs to a boundary component which is the geodesic L_n. As observed before, π_l^± maps the geodesic L_n to a geodesic ℓ_n=π_l^±(L_n) in ^2, and (as in the argument that showed that Im(π_l^±) is closed), the limit of π_l^±(p_n) is a point in ℓ=π_l^±(L). Moreover ℓ_n∩ℓ=∅, and the ℓ_n are all contained in the same connected component of ^2∖ℓ: this follows from observing again (compare with the injectivity at the beginning of this proof) that (π_l^±)|_L_n∪ L equals the left projection associated with the surface ∂_±(f^+_γ_n,γ) studied in Section <ref>, where f^+_γ_1,γ_2 is defined in (<ref>), and thus maps ∂_±(f)∩ P_γ_n (which in particular contains L_n) to a subset (containing ℓ_n) disjoint from ℓ and included in a connected component of ^2∖ℓ which does not depend on n. This implies that ℓ_n converges to ℓ as n→+∞. Clearly if we had chosen p_n in the other connected component of ∂_±(f)∖ L, then the ℓ_n would be contained in the other connected component of ^2∖ℓ. This concludes the claim and thus the proof. As a consequence, the composition π_r^±∘ (π_l^±)^-1 is well-defined and is a bijection of ^2 to itself. Combining with Proposition <ref>, we get: The composition π_r^±∘ (π_l^±)^-1 extends to a bijection from ^2∪∂_∞^2 to itself, which equals f on ∂_∞^2 and is continuous at any point of ∂_∞^2. Since π_l^± and π_r^± are bijective and extend to the bijections from (f) to ∂_∞^2 sending (x,y) to x and y=f(x) respectively, the composition π_r^±∘ (π_l^±)^-1 extends to a bijection of ^2∪∂_∞^2 to itself sending x to f(x). We need to check that this extension is continuous at any point of ∂_∞^2. Proposition <ref> shows that the extensions of π_l^± and π_r^± to ∂_±(f)∪(f) are continuous at any point of (f). Hence it remains to check that (π_l^±)^-1 is continuous at any point of ∂_∞^2. This follows from a standard argument: let z_n be a sequence in ^2∪∂_∞^2 converging to x∈∂_∞^2, and let p_n=(π_l^±)^-1(z_n). Up to extracting a subsequence, p_n→ p. The limit p must be in (f), because if p∈∂_±(f), although π_l^± might not be continuous there, we have already seen in Proposition <ref> (see the proof that the image of π_l^± is closed) that lim_nπ_l^±(p_n)=lim_n z_n is a point of ^2, thus giving a contradiction with lim_n z_n=x∈∂_∞^2. If p∈(f), then we can use the continuity and injectivity of π_l^± on (f) to infer that p=(π_l^±)^-1(x). This concludes the proof. §.§ Earthquake properties The last step which is left to prove is the verification that π_r^±∘ (π_l^±)^-1 satisfies the properties defining earthquake maps. The composition π_r^-∘ (π_l^-)^-1:^2→^2 is a left earthquake map. Analogously, π_r^+∘ (π_l^+)^-1:^2→^2 is a right earthquake map. First, let us define a geodesic lamination λ. Let us consider all the support planes P_γ of (f) at points of ∂_±(f) (which are necessarily spacelike by Proposition <ref>). 
Define ℒ to be the collection of all the connected components of (P_γ∩∂_±(f))∖int(P_γ∩∂_±(f)), as P_γ varies over all support planes. As observed before, by (<ref>) P_γ∩∂_±(f) is the convex hull in P_γ of ∂_∞ P_γ∩(f), which consists of at least two points. If it consists of exactly two points, then P_γ∩∂_±(f) is a spacelike geodesic L; otherwise P_γ∩∂_±(f) has nonempty interior and each connected component of its boundary is a spacelike geodesic. Now, π_l^± is an isometry onto its image when restricted to any L∈ℒ (which might depend on the choice of a support plane if there are several support planes at points of L, but the image does not depend on this choice by Corollary <ref>). Hence we define λ to be the collection of all the π_l^±(L) as L varies in ℒ. To show that λ is a geodesic lamination, we first observe that the geodesics ℓ∈λ are pairwise disjoint, because the spacelike geodesics L in ℒ are pairwise disjoint and π_l^± is injective. Then it remains to show that their union is a closed subset of ^2. This follows immediately from the proof of Proposition <ref>. Indeed, suppose that ℓ_n=π_l^±(L_n) converges to ℓ=π_l^±(L), and let z_n=π_l^±(p_n)∈ℓ_n be a sequence converging to z∈ℓ. Since Im(π_l^±) is closed, z∈Im(π_l^±), and since π_l^± is injective, z=π_l^±(p) for some p∈ L. Then in the last part of the proof of Proposition <ref> we have shown that in this situation ℓ_n converges to ℓ. Having shown that λ is a geodesic lamination, we are ready to check that π_r^-∘ (π_l^-)^-1 is an earthquake map. Observe that the gaps of λ are precisely the images under π_l^± of the interior of the sets P_γ∩∂_±(f) (when this intersection is not reduced to a geodesic), as P_γ varies among all support planes. Let S_1 and S_2 be two strata of λ, and let Σ_i=(π_l^±)^-1(S_i). Hence Σ_i⊂ P_γ_i∩∂_±(f), where P_γ_i is a support plane. As usual, there might be several support planes at points of Σ_i, and this can occur only if Σ_i is reduced to a geodesic by Lemma <ref>. Recalling from Remark <ref> that the chosen support plane is assumed to be constant along Σ_i, we can suppose that P_γ_i is the support plane chosen in the definition of π_l^± and π_r^±. Now we proceed as in the proof of injectivity in Proposition <ref>. Consider first the case that γ_1≠γ_2. By Lemma <ref> and Lemma <ref>, γ_2∘γ_1^-1 is a hyperbolic isometry; let D_1 and D_2 be the convex envelopes in ^2 of the two intervals I_1 and I_2 with endpoints the fixed points of γ_2∘γ_1^-1. Up to switching γ_1 and γ_2, we assume that γ_2∘γ_1^-1 translates to the left seen from D_1 to D_2. Then (π_l^±)|_Σ_i=(π̂_l^±)|_Σ_i and (π_r^±)|_Σ_i=(π̂_r^±)|_Σ_i, where π̂_l^± and π̂_r^± are the left and right projections associated with (f^+_γ_1,γ_2), and moreover S_i⊂ D_i. By the second part of Proposition <ref>, the comparison isometry Comp(D_1,D_2) of the map π̂_r^±∘ (π̂_l^±)^-1 translates to the left (for π_r^- and π_l^-) or right (for π_r^+ and π_l^+) seen from D_1 to D_2. Then Comp(S_1,S_2), which is indeed equal to Comp(D_1,D_2), translates to the left (or right) seen from S_1 to S_2. Finally, we instead consider the case γ_1=γ_2, which can only happen either if Σ_1=Σ_2 (hence S_1=S_2) or if Σ_1 has nonempty interior and Σ_2 is one of its boundary components (or vice versa). In this case we clearly have Comp(S_1,S_2)=id. But the comparison isometry is indeed allowed in Definition <ref> to be the identity, when one of the two strata is contained in the closure of the other. This concludes the proof. 
The proof of Thurston's earthquake theorem (Theorem <ref>) is thus complete. §.§ Recovering earthquakes of closed surfaces In this final section, we recover (Corollary <ref>) the existence of earthquake maps between two homeomorphic closed hyperbolic surfaces. Given a group G and two representations ρ,ϱ:G→(2,), we say that a map F from ^2 (or ∂_∞^2) to itself is (ρ,ϱ)-equivariant if it satisfies F∘ρ(g)=ϱ(g)∘ F for every g∈ G. Let S be a closed oriented surface and let ρ,ϱ:π_1(S)→(2,) be two Fuchsian representations. Then there exists a (ρ,ϱ)-equivariant left earthquake map of ^2, and a (ρ,ϱ)-equivariant right earthquake map. Let f:∂_∞^2→∂_∞^2 be the unique (ρ,ϱ)-equivariant orientation-preserving homeomorphism. We claim that there exists a left (resp. right) earthquake as in Theorem <ref>, which is itself (ρ,ϱ)-equivariant. For this purpose, observe that for any g∈π_1(S), the pair (ρ(g),ϱ(g))∈(2,)×(2,) acts on ∂_∞^3 preserving (f), since by (<ref>) and the definition of (ρ,ϱ)-equivariant, (ρ(g),ϱ(g))·(f)=(ϱ(g)∘ f∘ρ^-1(g))=(f) . Hence the convex hull (f) is preserved by the action of (ρ(g),ϱ(g)) for all g∈π_1(S). To conclude the proof, we need to show that we can choose support planes at every point of both boundary components of (f)∖(f) in such a way that this choice of support planes is also preserved by the action of (ρ(g),ϱ(g)) for all g∈π_1(S). (Clearly it suffices to consider the situation at points that admit more than one support plane, because if p∈∂_±(f) has a unique support plane P, then (ρ(g),ϱ(g))· P is the unique support plane at (ρ(g),ϱ(g))· p.) When we have shown this, we will take the left and right projections π_l^±,π_r^± defined via this invariant choice of support planes. By Lemma <ref>, we will then deduce that the left projection π_l^±:∂_±(f)→^2 is equivariant with respect to the action of (ρ(g),ϱ(g)) on ∂_±(f) and the action of ρ(g) on ^2; analogously the right projection π_r^±:∂_±(f)→^2 is equivariant with respect to the action of (ρ(g),ϱ(g)) on ∂_±(f) and the action of ϱ(g) on ^2. Following the proof of Theorem <ref>, the left and right earthquake maps obtained as the composition (π_r^∓)^-1∘π_l^∓ will be (ρ,ϱ)-equivariant, and the proof will be concluded. First, we need to prove an intermediate claim. Suppose p∈∂_±(f) admits several support planes. By Lemma <ref>, there is a spacelike geodesic L⊂∂_±(f) containing p. Let g∈π_1(S) be such that (ρ(g),ϱ(g))· L=L. Then we claim that (ρ(g),ϱ(g)) maps every support plane at p to itself. To prove this claim, we use Corollary <ref> and suppose up to an isometry (so that, in the notation of Corollary <ref>, γ_1=) that all the support planes at p are of the form P_exp(t𝔞) with t∈ [0,1], where γ:=exp(𝔞) is a hyperbolic element. Clearly (ρ(g),ϱ(g)) must preserve the pair of "extreme" support planes P_ and P_γ. Hence there are two possibilities: either (ρ(g),ϱ(g)) maps to and γ to γ, or it switches and γ. However, the latter possibility cannot be realized, since the identities ρ(g)ϱ(g)^-1=γ and ρ(g)γϱ(g)^-1= would imply that γ has order two, and this is not possible for a hyperbolic element. We thus have (ρ(g),ϱ(g))·= and (ρ(g),ϱ(g))·γ=γ. This implies first that ρ(g)=ϱ(g). Moreover ρ(g)γρ(g)^-1=γ, which shows that ρ(g)=ϱ(g)=exp(s𝔞) for some s∈. Therefore ρ(g)exp(t𝔞)ρ(g)^-1=exp(t𝔞) for all t, that is (ρ(g),ϱ(g))=(ρ(g),ρ(g)) maps every support plane P_exp(t𝔞) to itself. Having shown the claim, we can conclude as follows. 
Observe that the set of points p∈∂_±(f) that admit several support planes form a disjoint union of spacelike geodesics in ∂_±(f), and that this set (say X) is invariant under the action of (ρ(g),ϱ(g)) for all g∈π_1(S). Pick a subset {L_i}_i∈ I of this family of geodesics such that its π_1(S)-orbit is X, and that the orbits of L_i and L_j are disjoint if i≠ j. Pick a support plane P_i at p∈ L_i, and then we declare that (ρ(g_0),ϱ(g_0))· P_i is the chosen support plane at every point of (ρ(g_0),ϱ(g_0))· L_i. This choice is well-defined by the above claim, which showed that if (ρ(g),ϱ(g)) leaves L_i invariant, then it also leaves every support plane at L_i invariant. Moreover this choice of support planes is invariant by the action of π_1(S) by construction. This concludes the proof. § APPENDIX: TWO LEMMAS IN THE HYPERBOLIC PLANE We provide here the proofs of two properties on the action on ^2∪∂_∞^2 of sequences of elements in (2,). We prove them by elementary arguments in the specific case of sequences of order–two elliptic isometries. The first elementary property that we prove here is the uniform convergence of the action of elliptic isometries on the compactification of ^2. Let w_n be a sequence in ^2 converging to w∈^2. Then ℛ_w_n converges to ℛ_w uniformly on ^2∪∂_∞^2. Up to conjugation, we may assume w=i. Writing w_n=| w_n | e^iη_n, it is easy to check that ℛ_w_n(z)=cos(η_n)z-| w_n|/| w_n |^-1z-cos(η_n) . Let us conjugate ℛ_w_n by the map ψ(z)=(iz+1)/(z+i), which maps ^2 to the disc, and show that it converges to z↦ -z uniformly on the closed disc. For z∈^2∪∂_∞^2 we have ψ∘ℛ_w_n∘ψ^-1(z)+z=(| w_n |^-1-| w_n |-2icos(η_n))z^2+(| w_n |^-1-| w_n |+2icos(η_n))/(| w_n |^-1-| w_n |-2icos(η_n))z+i(| w_n|+| w_n |^-1) Hence |ψ∘ℛ_w_n∘ψ^-1(z)+z |≤2|α_n|/|α_n z+β_n| where α_n=|| w_n |^-1-| w_n |-2icos(η_n)| and β_n=i(| w_n|+| w_n |^-1). Thus |ψ∘ℛ_w_n∘ψ^-1(z)+z |≤2/| z+β_n/α_n|≤2/||β_n/α_n| -| z || Since |β_n|≥ 2, |α_n|→ 0 and | z|≤ 1, there exists n_0∈ℕ such that the right-hand side is smaller than ϵ for all z in the closed disc. This completes the proof. The second property is a special case of the so-called North-South dynamics. Let w_n be a sequence in ^2 converging to w∈∂_∞^2. Then, for every neighbourhood U of w, there exists n_0 such that ℛ_w_n((^2∪∂_∞^2)∖ U)⊂ U for n≥ n_0. We adopt the same notation as in the proof of Lemma <ref>. Up to conjugation, we may assume that w=∞. It is sufficient to consider neighbourhoods U of the form U_r={| z|>r }⊂^2∪∂_∞^2. By a direct computation, |ℛ_w_n(z)|=|cos(η_n)z-| w_n||/|| w_n |^-1z-cos(η_n)|≥| w_n|- |cos(η_n)|| z|/| w_n |^-1| z|+|cosη_n|. Since w_n converges to ∞, for all r we have | w_n|≥ r≥| z|≥|cosη_n|| z| if n is sufficiently large and z is in the complement of U_r. Then |ℛ_w_n(z)|≥| w_n|-r/| w_n |^-1r+|cosη_n|⟶ +∞. It follows that |ℛ_w_n(z)|>r for n≥ n_0, that is, ℛ_w_n maps the complement of U_r to U_r. 99 bonschkra Bonsante, Francesco, Krasnov, Kirill and Schlenker, Jean-Marc, Multi-black holes and earthquakes on Riemann surfaces with boundaries. IMRN, 011, No. 3, 487-552, 2011. bonsch Bonsante, Francesco; Schlenker, Jean-Marc, AdS manifolds with particles and earthquakes on singular surfaces. Geom. Funct. Anal. 19, No. 1, 41-82, 2009. bonsch3 Bonsante, Francesco; Schlenker, Jean-Marc, Maximal surfaces and the universal Teichmüller space. Invent. Math. 182, No. 2, 279-333, 2010. bonsch2 Bonsante, Francesco; Schlenker, Jean-Marc, Fixed points of compositions of earthquakes. Duke Math. J. 161, No. 6, 1011-1054, 2012. 
bonsep Bonsante, Francesco; Seppi, Andrea, Area-preserving diffeomorphisms of the hyperbolic plane and K-surfaces in anti-de Sitter space. J. Topol. 11, No. 2, 420-468, 2018. bonsep2Bonsante, Francesco; Seppi, Andrea, Equivariant maps into anti-de Sitter space and the symplectic geometry of ℍ^2×ℍ^2. Trans. Am. Math. Soc. 371, No. 8, 5433-5459, 2019. survey Bonsante, Francesco; Seppi, Andrea, Anti-de Sitter geometry and Teichmüller theory. Ohshika, Ken'ichi (ed.) et al., In the tradition of Thurston. Geometry and topology. Cham: Springer. 545-643, 2020. elemamseppi El Emam, Christian and Seppi, Andrea, On the Gauss map of equivariant immersions in hyperbolic space. Journal of Topology 15.1, 238-301, 2022. ghlGardiner, Frederick P.; Hu, Jun; Lakic, Nikola, Earthquake curves. Earle, Clifford J. (ed.) et al., Complex manifolds and hyperbolic geometry. II Iberoamerican congress on geometry, CIMAT, Guanajuato, Mexico, January 4-9, 2001. Providence, RI: American Mathematical Society (AMS) (ISBN 0-8218-2957-2/pbk). Contemp. Math. 311, 141-195, 2002. hu Hu, Jun, Earthquake measure and cross-ratio distortion. Abikoff, William (ed.) et al., In the tradition of Ahlfors and Bers, III. Proceedings of the 2nd Ahlfors-Bers colloquium, Storrs, CT, USA, October 18-21, 2001. Providence, RI: American Mathematical Society (AMS) (ISBN 0-8218-3607-2/pbk). Contemporary Mathematics 355, 285-308, 2004. ker Kerckhoff, Steven P., The Nielsen realization problem. Ann. Math. (2) 117, 235-265, 1983. mess Mess, Geoffrey, Lorentz spacetimes of constant curvature. Geom. Dedicata 126, 3-45, 2007. misaric Miyachi, Hideki; Šarić, Dragomir, Uniform weak* topology and earthquakes in the hyperbolic plane. Proc. Lond. Math. Soc. (3) 105, No. 6, 1123-1148, 2012. pfeil Pfeil, Mareike, Earthquakes in the hyperbolic plane. Master thesis, Heidelberg University, 2017. rosmondi Rosmondi, Daniele, Earthquakes on hyperbolic surfaces with geodesic boundary and Anti de Sitter geometry. PhD thesis, Università di Pavia, 2017. saric Šarić, Dragomir, Real and complex earthquakes. Trans. Am. Math. Soc. 358, No. 1, 233-249, 2006. saric2 Šarić, Dragomir, Bounded earthquakes. Proc. Am. Math. Soc. 136, No. 3, 889-897, 2008. saric3 Šarić, Dragomir, Some remarks on bounded earthquakes. Proc. Am. Math. Soc. 138, No. 3, 871-879, 2010. seppi Seppi, Andrea, Maximal surfaces in Anti-de Sitter space, width of convex hulls and quasiconformal extensions of quasisymmetric homeomorphisms. J. Eur. Math. Soc. (JEMS) 21, No. 6, 1855-1913, 2019. thurston Thurston, William P., Earthquakes in two-dimensional hyperbolic geometry. Low dimensional topology and Kleinian groups, Symp. Warwick and Durham 1984, Lond. Math. Soc. Lect. Note Ser. 112, 91-112, 1986.
http://arxiv.org/abs/2306.07434v1
20230612213727
Frustrated multipoles in an icosahedral quasicrystal
[ "Junmo Jeon", "SungBin Lee" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
[email protected] Advanced Institute of Science and Technology, Daejeon 34141, South [email protected] Advanced Institute of Science and Technology, Daejeon 34141, South Korea Multipolar degrees of freedom and their hidden orders have been widely discussed in the context of heavy fermions, frustrated magnets and exotic Kondo effects. Although there has been extensive search for multipolar degrees of freedom in magnetic systems, there are few examples that allow pure multipolar degrees of freedom, such as electric quadrupoles or magnetic octupoles, in the absence of magnetic dipoles. In this work, for the first time, we theoretically show that the magnetic behavior in an icosahedral quasicrystal is generally described by the pure magnetic octupoles in the absence of magnetic dipoles, resulting from the interplay of spin orbit coupling and crystal field splitting. Importantly, we point out that the non-crystallographic symmetries lead to pure multipolar degrees of freedom, which are only allowed in quasicrystals but forbidden in periodic crystals, and are thus a unique feature of magnetic quasicrystals. We show that Yb^3+ with J=7/2 admits the doublet, which only has magnetic octupolar degrees of freedom without magnetic dipoles and quadrupoles. We first discuss the characteristics of magnetic ocutupoles and derive the effective spin Hamiltonian on symmetry grounds. Then, based on the self-similar triangular structure of the icosahedron, we argue the long-range frustration in terms of the Ising spin model. We further classify the possible quantum phases including quantum fluctuations, in terms of the entanglement of the ground state. It turns out that an arbitrary small quantum fluctuation produces entanglement, by lifting the extensive degeneracy originated from the geometrical frustration. Our study offers the magnetic icosahedral quasicrystal as a new platform to search for the novel multipolar degrees of freedom and their exotic phenomena. Frustrated multipoles in an icosahedral quasicrystal SungBin Lee July 31, 2023 ==================================================== Introduction— In condensed matter systems, there are several examples that cannot be easily observed by conventional experimental techniques. Such hidden orders have been in debate for several decades and have been waiting to be discovered<cit.>. In particular, unusual higher rank multipole moments, beyond the conventional electric and magnetic dipole moments, have been suggested as a key player to exhibit various hidden orders<cit.>. Multipolar degrees of freedom are not only famous for their ability to give rise to hidden orders, but also for their role in driving a variety of other interesting and complex phenomena. For example, in heavy fermion materials, multipolar degrees of freedom can lead to the emergence of unconventional superconductivity and non-Fermi liquid behavior with exotic Kondo physics<cit.>. Moreover, beyond their hidden orders, magnetic frustration between multipole moments can give rise to the emergence of exotic ground states, so called multipolar quantum spin liquids<cit.>. Hence, understanding the properties of multipolar physics have been the focus of intense research with broad implications and new insights with potential applications<cit.>. 
However, there are limited examples of this kind: most of the magnetic systems which exhibit multipolar degrees of freedom contain not only multipoles but also magnetic dipoles at the same time, and there are few systems that are described by pure multipole moments. Finding pure multipolar degrees of freedom in magnetic systems requires a delicate combination of spin-orbit coupling and crystalline electric field (CEF) splitting based on the point group symmetries<cit.>. In conventional crystals, the point group symmetry, which should be compatible with the translational symmetry, restricts the search for pure multipolar degrees of freedom in magnetic systems<cit.>. On the other hand, quasicrystals can exhibit point group symmetries beyond the crystallographic ones because they are ordered without spatial periodicity. Thus, quasicrystalline materials are a promising platform for finding multipolar degrees of freedom. In particular, several rare-earth magnetic quasicrystals with icosahedral symmetry exist but have never been explored in terms of their multipolar physics and related exotic phenomena<cit.>. In this paper, we consider the icosahedral quasicrystal whose f-orbital electrons carry J=7/2<cit.>. Interestingly, as a result of the interplay of the spin-orbit coupling and the CEF splitting of the icosahedral point group symmetry, there is a unique Kramers doublet that carries only magnetic octupole moments without dipole moments. Remarkably, this corresponds to the rare-earth icosahedral quasicrystal composed of Yb^3+. On symmetry grounds, we introduce the generic spin Hamiltonian. In the antiferromagnetic Ising limit, we first discuss the degenerate ground states born of the geometrically frustrated icosahedral structure. When quantum fluctuations are introduced, a unique ground state with non-zero entanglement is stabilized. Depending on the sign of the (anti)ferromagnetic XY interaction, it is represented by a specific linear combination of the degenerate ground states found in the Ising limit. Our work provides a perspective for finding multipolar degrees of freedom and their magnetic frustration originating from non-crystallographic symmetries. Furthermore, it opens a new paradigm for enriching hidden orders, spin liquids and novel Kondo effects in quasicrystals. Pure octupolar Kramers doublet— The CEF Hamiltonian for icosahedral symmetry is given as follows, using the Stevens operators. H_CEF=B_6(O_6^0-42O_6^5), where the Stevens coefficient obtained by the radial integral is B_6=A_6γ_J⟨r^6⟩<cit.>. γ_J is the Stevens factor and r is the radial position. A_6=-33/100 q_0|e|/R_0^7, where q_0 is the charge of the ligands and R_0 is the distance between the surrounding ligands and the central atom. Here, we assume the point charge model<cit.>. O_6^0 and O_6^5 are Stevens operators with respect to the total angular momentum operators (see Supplementary Materials for the detailed form of O_6^0 and O_6^5). Eq.(<ref>) is block diagonalizable for a given value of the total angular momentum J. It is noteworthy that for J=7/2 there are two eigenspaces of H_CEF, a Kramers doublet and a sextet<cit.>. The CEF energy gap is given by 25200 |B_6|∼𝒪(10meV). From now on, let us define the Kramers doublet as |±⟩, which are written in terms of the eigenstates of the J_z operator, |+⟩=-√(7/10)|J_z=-3/2⟩+√(3/10)|+7/2⟩ |-⟩=√(3/10)|-7/2⟩+√(7/10)|+3/2⟩. In the case of the Yb^3+ ion, it is known that J=7/2 and B_6<0<cit.>.
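This level scheme is straightforward to verify numerically. The short script below is an illustrative sketch added here for convenience (it is not part of the original derivation, and B_6 is set to an arbitrary negative value as appropriate for Yb^3+): it builds the J=7/2 angular momentum matrices, constructs the two Stevens operators from the explicit expressions given in the Supplementary Materials, and diagonalizes H_CEF, reproducing a ground doublet separated from a sextet by 25200|B_6| and containing the doublet wavefunctions quoted above.

```python
import numpy as np

# J = 7/2 angular momentum matrices in the |J_z> basis, m = 7/2, 5/2, ..., -7/2.
Jval, dim = 3.5, 8
m = np.arange(Jval, -Jval - 1, -1)
Jz = np.diag(m)
Jp = np.zeros((dim, dim))
for k in range(1, dim):
    Jp[k - 1, k] = np.sqrt(Jval * (Jval + 1) - m[k] * (m[k] + 1))
Jm = Jp.T

# Stevens operators O_6^0 and O_6^5 (explicit forms as in the Supplementary Materials).
X = Jval * (Jval + 1)
O60 = (231 * np.linalg.matrix_power(Jz, 6)
       - 105 * (3 * X - 7) * np.linalg.matrix_power(Jz, 4)
       + (105 * X**2 - 525 * X + 294) * Jz @ Jz
       - (5 * X**3 - 40 * X**2 + 60 * X) * np.eye(dim))
Jp5, Jm5 = np.linalg.matrix_power(Jp, 5), np.linalg.matrix_power(Jm, 5)
O65 = 0.25 * (Jz @ (Jp5 + Jm5) + (Jp5 + Jm5) @ Jz)

B6 = -1.0                                    # arbitrary (negative) units, as for Yb3+
E, V = np.linalg.eigh(B6 * (O60 - 42 * O65))
print("levels / |B6|:", np.unique(np.round(E / abs(B6), 6)))   # a doublet and a sextet
print("gap / |B6|   :", round((E[2] - E[0]) / abs(B6), 6))     # 25200, as quoted above

# The |+> state quoted in the main text: sqrt(3/10)|+7/2> - sqrt(7/10)|-3/2>.
plus = np.zeros(dim)
plus[0], plus[5] = np.sqrt(3 / 10), -np.sqrt(7 / 10)           # m = +7/2 and m = -3/2
print("weight of |+> inside the ground doublet:",
      round(float(np.linalg.norm(V[:, :2].T @ plus)), 6))      # ~1, i.e. it lies in the doublet
```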
Thus, the Kramers doublet is the ground eigenspace of the CEF Hamiltonian, well separated from the sextet. Hence one can expect that the magnetic properties at low temperature are explained within this Kramers doublet. From Eq.(<ref>), one can easily find that ⟨±|J_i|∓⟩ and ⟨±|J_i|±⟩ vanish for i=x,y,z. In particular, the expectation value of J_z vanishes due to the symmetric coefficients of the states. This confirms that there is no magnetic dipole moment. Thus, one should consider the multipolar degrees of freedom given by the irreducible tensor operators. However, since |±⟩ form a Kramers doublet, the time-reversal-even operators such as quadrupole moments vanish. Hence, we can expect higher-order degrees of freedom, namely octupoles, in the absence of any dipolar or quadrupolar degrees of freedom. To show that the Kramers doublet, |±⟩ in Eq.(<ref>), describes the octupolar degrees of freedom, let us define the pseudospin ladder operators, Σ^±, as follows. Σ^+=|+⟩⟨-| Σ^-=(Σ^+)^†, and Σ^z=[Σ^+,Σ^-]/2. Now define the octupolar operators as the rank-3 spherical tensor operators, T_m^(3), in terms of J_+, J_- and J_z. Note that the octupolar operators are time-reversal odd; under the time-reversal transformation 𝒯, they satisfy 𝒯Σ^±𝒯^-1=-Σ^∓ and 𝒯Σ^z𝒯^-1=-Σ^z. As a result, Σ^z∼ T_0^(3) and Σ^±∼ T_m^(3) for non-zero m. However, T_1^(3) and T_-1^(3) vanish because T_± 1^(3)|±⟩ is not in the doublet eigenspace. Note that T_± 1^(3) changes the eigenvalue of the J_z operator by ± 1. Similarly, since J_±^2|±⟩ and J_±^3|∓⟩ are not in the doublet, the only non-trivial matrix elements are ⟨±|T_± 2^(3)|∓⟩ and ⟨∓|T_± 3^(3)|±⟩. This leads to T_± 2,∓ 3^(3)∼Σ^±. In detail, one can represent the octupole pseudospin operators in the doublet as, Σ^z≡ T_0^(3), Σ^±≡1/2√(2/15)T_±2^(3)∓1/2√(1/5)T_∓ 3^(3) . Specifically, T_±2^(3)=1/4√(105/π)J_±^2J_z, T_±3^(3)=∓1/8√(35/π)J_±^3, and T_0^(3)=1/4√(7/π)(5J_z^3-3J_zJ^2), where products of non-commuting operators are understood to be symmetrized<cit.>. Each pseudospin operator thus behaves as a rank-3 tensor. From Eq.(<ref>), one can write, Σ_x(y)=1/4[ √(2/15)(T_2^(3)± T_-2^(3))±√(1/5)(T_3^(3)-T_-3^(3))], where x(y) takes the +(-) sign in Eq.(<ref>), respectively. By applying the symmetry transformations of the icosahedral group (I_h) and the time-reversal transformation, one can find the generic spin Hamiltonian for nearest-neighbor interactions. Let us define the local z-axis pointing to the center of the icosahedron. Then, under the 5-fold rotational symmetries of I_h, we have Σ^±_i→ e^∓ 4iπ/5Σ^±_j, where i and j are nearest-neighbor sites. This leads to bond-dependent phase factors in the Hamiltonian, such as Σ^+_iΣ^+_j or Σ^+_iΣ^z_j. As a result, the generic symmetry-allowed Hamiltonian under the icosahedral point group symmetry contains four independent parameters, J_±±, J_± z, J_± and J_zz, and is given as, H =∑_⟨i,j⟩[ J_zzΣ_i^zΣ_j^z + J_±(Σ_i^+Σ_j^- +Σ_i^- Σ_j^+ ) + J_±±( α_ijΣ_i^+Σ_j^+ + α_ij^* Σ_i^-Σ_j^- ) + J_± z( Σ_i^z (β_ijΣ_j^+ +β_ij^* Σ_j^- ) + i ↔ j ) ]. Here, α_ij takes the values 1, e^± i 2π /5 and e^± i 4π /5 depending on the bond orientation due to the 5-fold rotational symmetry, and β_ij=(α_ij^*)^2. The Hamiltonian in Eq.(<ref>) is written in terms of local coordinate axes, with the local z-axis of each site pointing into the icosahedron (see Supplementary Materials for the detailed derivation of the effective pseudospin Hamiltonian and the local axes). The magnitudes of these spin exchange parameters will vary from material to material.
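Before turning to specific limits of this Hamiltonian, the defining property of the doublet, namely vanishing dipole matrix elements together with a surviving rank-3 (octupolar) structure, can be confirmed directly. The following sketch is an illustrative addition: the tensor components are generated by repeated lowering from J_+^3, so the normalization conventions quoted above play no role in the vanishing pattern.

```python
import numpy as np

# J = 7/2 matrices in the |J_z> basis, m = 7/2, ..., -7/2.
Jval, dim = 3.5, 8
m = np.arange(Jval, -Jval - 1, -1)
Jz = np.diag(m).astype(complex)
Jp = np.zeros((dim, dim), dtype=complex)
for k in range(1, dim):
    Jp[k - 1, k] = np.sqrt(Jval * (Jval + 1) - m[k] * (m[k] + 1))
Jm = Jp.conj().T
Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j)

# Kramers doublet |+>, |-> with the coefficients quoted in the main text.
plus = np.zeros(dim, dtype=complex)
plus[0], plus[5] = np.sqrt(3 / 10), -np.sqrt(7 / 10)      # |+7/2>, |-3/2>
minus = np.zeros(dim, dtype=complex)
minus[7], minus[2] = np.sqrt(3 / 10), np.sqrt(7 / 10)     # |-7/2>, |+3/2>
P = np.column_stack([plus, minus])

# (i) Every dipole matrix element vanishes inside the doublet.
for Ji in (Jx, Jy, Jz):
    assert np.allclose(P.conj().T @ Ji @ P, 0)

# (ii) Rank-3 tensor components, generated from T_3 ~ J_+^3 by repeated lowering,
#      [J_-, T_m] = sqrt(12 - m(m-1)) T_{m-1}; overall normalization is irrelevant here.
T = {3: np.linalg.matrix_power(Jp, 3)}
for mm in range(3, -3, -1):
    T[mm - 1] = (Jm @ T[mm] - T[mm] @ Jm) / np.sqrt(12 - mm * (mm - 1))
for mm in range(3, -4, -1):
    block = P.conj().T @ T[mm] @ P
    print("m =", mm, "  |<a|T_m|b>| =", np.round(np.abs(block), 2).tolist())
# Only m = 0 (diagonal, ~ Sigma^z), m = +2 and -3 (~ Sigma^+), and m = -2 and +3
# (~ Sigma^-) survive; the m = +-1 components vanish, as stated in the text.
```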
Here, for the simplest case, we first study the Ising limit with a finite J_zz and then consider the quantum fluctuations in the presence of J_±. Geometrical frustration— Let us first consider the Ising model, where only J_zz is non-zero in Eq.(<ref>). Considering the icosahedral quasicrystal descended from 6-dimensional hyperspace<cit.>, Fig.<ref> represents the structure of the icosahedral quasicrystal. As shown in Fig.<ref>, the inter-shell distances between icosahedra vary (see Supplementary Materials for the detailed cut-and-project scheme for the icosahedral quasicrystal). In real materials, depending on the structures of quasicrystals and approximants, the distances between the shells can differ<cit.>. Furthermore, it is known that the inter-shell distance can also be controlled by external pressure<cit.>. Thus, for a general argument, we mainly focus on the nearest-neighbor sites in a single icosahedron and discuss the magnetic states. Depending on the inter-shell distances, one may consider a perturbative approach (see Supplementary Materials for the perturbative approach). For ferromagnetic J_zz, obviously only two degenerate ground states exist, in which every octupole points in the local +z or -z direction, respectively. For the antiferromagnetic Ising case, J_zz>0, the triangular faces of the icosahedron cause geometric frustration. In this case, 72 degenerate states exist and they are classified into two groups on symmetry grounds: (i) 60 degenerate states without 5-fold rotational symmetry, (ii) 12 degenerate states with 5-fold rotational symmetry. Fig.<ref> (a) shows an example of the first group of ground states. Note that the octupolar moments arranged on the icosahedron in Fig.<ref> (a) do not have a 5-fold rotational symmetry axis. Since there are 6 independent choices of the Z axis, by applying the 5-fold rotational transformations around each Z axis, we obtain 30 degenerate states. In addition, for each of these 30 degenerate states, the energy is invariant under the swap of the two octupoles on the sites Q and R in Figs.<ref> (a) and (c). Hence M_σ, the spatial mirror reflection with respect to the σ-plane depicted in Fig.<ref> (c), doubles the number of degenerate states with no 5-fold rotational symmetry, resulting in 60 degenerate states in total (see Supplementary Materials for a detailed discussion of the symmetry argument). Figs.<ref> (b) and (d) illustrate a 5-fold rotationally symmetric ground state. There are 12 independent choices of the rotational symmetry axis (the Z axis in Fig.<ref> (b)) for the second group. Quantum fluctuation— Now let us consider non-zero but small J_± and study the effect of quantum fluctuations. To study the fluctuation effects, we introduce three subsets of the 72 degenerate states, A, B and C, in terms of the orientation-preserving icosahedral rotation group, I⊴ I_h. Specifically, the subsets A and C are generated by applying the spatial rotations in I to the states in Figs.<ref> (a) and (b), respectively. Meanwhile, the subset B is generated by applying the coset IM_σ={gM_σ| g∈ I} to the state in Fig.<ref> (a). Thus, for H_±=J_±∑_⟨i,j⟩(Σ_i^+Σ_j^-+Σ_i^-Σ_j^+), two states in the same subset have zero matrix element of H_±. Let |ψ_A_n⟩, |ψ_B_l⟩ and |ψ_C_r⟩, where 1≤ n,l≤ 30 and 1≤ r≤ 12, denote the states in A, B and C, respectively. Hence, in the sub-Hilbert space of the 72 Ising ground states, H_± has the matrix representation, [H_±]_A,B,C, given by [H_±]_A,B,C=[ 0 T_AB T_AC; T_BA 0 T_BC; T_CA T_CB 0 ] where T_BA=T_AB^† is a 30×30 matrix, while T_AC=T_CA^† and T_BC=T_CB^† are 30×12 matrices, and each non-zero matrix element equals J_±.
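Both the 72-fold ground-state degeneracy quoted above and the structure of this matrix can be checked by brute force on a single icosahedron. The sketch below is an illustrative addition (it uses classical Ising variables s_i = ±1, so only the counting and the relative spectrum are meaningful): it enumerates all 2^12 configurations, isolates the degenerate ground manifold, and builds the first-order matrix of H_± within it.

```python
import numpy as np
from itertools import combinations, product

# Icosahedron: 12 vertices at cyclic permutations of (0, +-1, +-phi); edges join closest pairs.
phi = (1 + np.sqrt(5)) / 2
verts = []
for a, b in product([1, -1], repeat=2):
    verts += [(0, a, b * phi), (a, b * phi, 0), (b * phi, 0, a)]
verts = np.array(verts)
edges = [(i, j) for i, j in combinations(range(12), 2)
         if np.isclose(np.linalg.norm(verts[i] - verts[j]), 2.0)]
assert len(edges) == 30

# Antiferromagnetic Ising limit (J_zz > 0): brute-force enumeration of all 2^12 states.
def ising_energy(s):                        # energy in units of J_zz, Ising variables +-1
    return sum(s[i] * s[j] for i, j in edges)

configs = [np.array(c) for c in product([1, -1], repeat=12)]
energies = np.array([ising_energy(s) for s in configs])
ground = [configs[k] for k in np.where(energies == energies.min())[0]]
print("ground-state degeneracy:", len(ground))        # expect the 72 states discussed above

# First-order degenerate perturbation theory: matrix of sum_<ij> (S+_i S-_j + h.c.) inside
# the ground manifold.  An off-diagonal element is 1 when the two ground states differ only
# by exchanging the two spins of a single antiparallel bond.
index = {tuple(s): n for n, s in enumerate(ground)}
M = np.zeros((len(ground), len(ground)))
for n, s in enumerate(ground):
    for i, j in edges:
        if s[i] != s[j]:
            t = s.copy(); t[i], t[j] = s[j], s[i]
            k = index.get(tuple(t))
            if k is not None:               # the flipped state stays inside the manifold
                M[k, n] += 1.0
w = np.linalg.eigvalsh(M)
# First-order energies are J_pm * w; for J_pm < 0 the ground state comes from w.max(),
# for J_pm > 0 from w.min().  The text argues that both extremal states are non-degenerate.
print("extreme eigenvalues of [H_pm]/J_pm:", w.min(), w.max())
print("multiplicities:", int(np.isclose(w, w.min()).sum()), int(np.isclose(w, w.max()).sum()))
```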
On symmetry grounds, we can write the general form of the ground state, |GS⟩, as, |GS⟩=a∑_n=1^30|ψ_A_n⟩+b∑_l=1^30|ψ_B_l⟩+c∑_r=1^12|ψ_C_r⟩, where we have only three real coefficients, a, b and c, for |ψ_A_n⟩, |ψ_B_l⟩ and |ψ_C_r⟩, respectively (see Supplementary Materials for a detailed discussion of the perturbative method). The energy correction is E(a,b,c)=⟨GS|H_±|GS⟩. First, considering J_±<0, the Lagrange multiplier method leads to a=b=(1+√(6))c/5 for the ground state. Next, if J_±>0, E(a,b,c) is minimized when a=-b and c=0. Remarkably, we have no degeneracy in either case. Thus, any small quantum fluctuation given by H_± leads to a unique ground state given by a particular superposition of the degenerate states (see Supplementary Materials for the detailed derivation). To capture the entanglement, we compute the entanglement negativity of the state, defined by 𝒩_E=∑_i(|λ_i|-λ_i)/2, where λ_i are the eigenvalues of ρ^T_A, the partial transpose of the density matrix of the ground state, ρ<cit.>. 𝒩_E=0 if ρ is separable, while 𝒩_E>0 for an entangled state. For the icosahedron shell, 𝒩_E is computed by partitioning the 12 vertices into two hemispherical regions (one of them is highlighted as the blue shaded region in the inset of Fig.<ref>). Fig.<ref> (a) illustrates that entanglement is generated as soon as J_± becomes non-zero. Conclusion— In summary, we discover that magnetic quasicrystalline systems host pure octupolar degrees of freedom as a result of the interplay of spin-orbit coupling and the CEF splitting of the icosahedral point group symmetry. On symmetry grounds, we also derive the spin-exchange Hamiltonian with four independent parameters. Interestingly, for the antiferromagnetic Ising model, magnetic frustration leads to 72 degenerate states for a single icosahedron. For a small but finite J_±, quantum fluctuations select a particular mixture of these degenerate states. This yields distinct but unique ground states for (anti)ferromagnetic J_±, producing a finite entanglement even for arbitrarily small J_±. Depending on the inter-shell distances, possible macroscopic degeneracy and entanglement of octupoles would be an interesting topic for future work. Also, studies in the presence of J_±± and J_± z, which do not preserve the total Σ^z, can be explored; we leave this for future work. Such octupolar degrees of freedom can be found in rare-earth Yb-based magnetic quasicrystals such as Au-Al-Yb and Cd-Mg-Yb alloys<cit.>. However, most of the currently available magnetic quasicrystals suffer from intermediate valence, with a mixture of 4f^13 Yb^3+ and non-magnetic 4f^14 Yb^2+<cit.>. In addition, site mixing among the non-rare-earth atoms makes the symmetry imperfect, allowing small deviations from the I_h point group symmetry<cit.>. Nonetheless, one expects that advancements in chemical synthesis techniques could overcome these obstacles, enabling the synthesis of finely controlled icosahedral quasicrystals<cit.> and giving us a chance to discover pure magnetic octupoles and their interesting physics. Our work for the first time shows that multipolar degrees of freedom naturally emerge in icosahedral quasicrystals. It breaks new ground in the magnetism of quasicrystals and opens several interesting questions. One could explore magnetic quasicrystals in search of hidden phases, frustration-induced long-range entanglement such as spin liquids, and non-Fermi liquids due to exotic Kondo effects<cit.>.
Our study motivates experimental searches for new rare-earth icosahedral quasicrystals beyond conventional magnetism in periodic crystals. It is worth noting that magnetic quasicrystals remain an interesting research area, and continued advancements in both experimental and theoretical studies will lead to the discovery of new magnetic phenomena in quasicrystals. §.§ Acknowledgement We thank Takanori Sugimoto and Taku J Sato for useful discussions. J.M.J and S.B.L. are supported by National Research Foundation Grant (No. 2021R1A2C1093060). § SUPPLEMENTARY MATERIAL FOR FRUSTRATED MULTIPOLES IN AN ICOSAHEDRAL QUASICRYSTAL [email protected] Advanced Institute of Science and Technology, Daejeon 34141, South Korea [email protected] Advanced Institute of Science and Technology, Daejeon 34141, South Korea Supplementary Material for Frustrated multipoles in an icosahedral quasicrystal SungBin Lee July 31, 2023 =============================================================================== § CRYSTAL ELECTRIC FIELD HAMILTONIAN AND STEVENS OPERATORS The crystal field Hamiltonian under the icosahedral point group symmetry can be written as H_CEF=B_6(O_6^0-42O_6^5). Here, B_6=A_6γ_J⟨ r^6⟩ is the Stevens coefficient, with ⟨ r^6⟩ the radial integral, γ_J the Stevens factor, and r the radial position. Specifically, A_6=-33/100 q_0|e|/R_0^7, where q_0 is the charge of the ligands and R_0 is the distance between the ligands and the central rare-earth atom<cit.>. We focus on the angular parts, the Stevens operators O_6^0 and O_6^5. They are given by<cit.> O_6^0=231J_z^6-105(3J(J+1)-7)J_z^4+(105J^2(J+1)^2-525J(J+1)+294)J_z^2 -5J^3(J+1)^3+40J^2(J+1)^2-60J(J+1), O_6^5=[J_z(J_+^5+J_-^5)+(J_+^5+J_-^5)J_z]/4. If the central rare-earth atom is Yb^3+, the 4f electrons are the valence electrons. It is known that q_0 is positive for some materials such as Au-Al-Yb compounds; in such a case, we have A_6<0. Furthermore, Yb^3+ has the total angular momentum J=7/2 with γ_J=1.48× 10^-4, so we have B_6<0<cit.>. In this case, the ground-state sector of H_CEF is a Kramers doublet.
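As a quick numerical cross-check of this level structure, one can build the J=7/2 angular momentum matrices and the rank-6 Stevens operators from the expressions above and diagonalize H_CEF directly. The sketch below is ours and not part of the original work; the magnitude of B_6 is an arbitrary negative placeholder, since only its sign matters for the ordering of the levels, and the Kramers degeneracies can be read off from the printed spectrum.

```python
import numpy as np

# Angular momentum matrices for J = 7/2 in the |J, m> basis, m = J, ..., -J.
J = 7 / 2
m = np.arange(J, -J - 1, -1)            # 7/2, 5/2, ..., -7/2
dim = m.size                            # 8 states

Jz = np.diag(m)
# <J, m+1| J_+ |J, m> = sqrt(J(J+1) - m(m+1))
Jp = np.zeros((dim, dim))
for k in range(1, dim):                 # raising m[k] lands on basis index k-1
    Jp[k - 1, k] = np.sqrt(J * (J + 1) - m[k] * (m[k] + 1))
Jm = Jp.T
X = J * (J + 1)

# Rank-6 Stevens operators as given above.
O60 = (231 * np.linalg.matrix_power(Jz, 6)
       - 105 * (3 * X - 7) * np.linalg.matrix_power(Jz, 4)
       + (105 * X**2 - 525 * X + 294) * np.linalg.matrix_power(Jz, 2)
       + (-5 * X**3 + 40 * X**2 - 60 * X) * np.eye(dim))
Jp5, Jm5 = np.linalg.matrix_power(Jp, 5), np.linalg.matrix_power(Jm, 5)
O65 = (Jz @ (Jp5 + Jm5) + (Jp5 + Jm5) @ Jz) / 4

B6 = -1.0e-4                            # placeholder negative value; only the sign matters
H_cef = B6 * (O60 - 42 * O65)

levels = np.linalg.eigvalsh(H_cef)
print(np.round(levels - levels.min(), 6))   # CEF spectrum measured from the ground state
```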
Each of the spherical tensor operator is represented as the pseudospin operators in the Kramers doublet, as discussed in the main text. To be more specific, the Wigner D-matrices are given by D_1,m'm^(3)=[ 0 0 0 0 0 0 1; 0 0 0 0 0 -1 0; 0 0 0 0 1 0 0; 0 0 0 -1 0 0 0; 0 0 1 0 0 0 0; 0 -1 0 0 0 0 0; 1 0 0 0 0 0 0 ] D_2,m'm^(3)=[ -5-2√(5)/25 1/5√(3/5(7+3√(5))) -1/5√(3/2(3+√(5))) 2/5 -1/5√(3/2(3-√(5))) 1/5√(3/5(7-3√(5))) -5+2√(5)/25; 1/5√(3/5(7+3√(5))) -15-√(5)/50 -1/5√(3-√(5)) √(6)/5 -1/5√(3+√(5)) 15-√(5)/50 1/25√(3/2)(5-3√(5)); -1/5√(3/2(3+√(5))) -1/5√(3-√(5)) 1/√(5) 0 -1/√(5) √(3+√(5))/5 -1/5√(3/2(3-√(5))); 2/5 √(6)/5 0 -1/√(5) 0 √(6)/5 -2/5; -1/5√(3/2(3-√(5))) -1/5√(3+√(5)) -1/√(5) 0 1/√(5) √(3-√(5))/5 -1/5√(3/2(3+√(5))); 1/5√(3/5(7-3√(5))) 15-√(5)/50 √(3+√(5))/5 √(6)/5 √(3-√(5))/5 1/50(-15-√(5)) -1/5√(3/5(7+3√(5))); 1/25(-5+2√(5)) 1/25√(3/2)(5-3√(5)) -1/5√(3/2(3-√(5))) -2/5 -1/5√(3/2(3+√(5))) -1/5√(3/5(7+3√(5))) 1/25(-5-2√(5)) ] Note that the Hermitian and time-reversal invariant Hamiltonian possesses 5 independent parameters. Under the mirror reflection symmetry,M_1given by Eqs.(<ref>) and (<ref>), we haveΣ^±_4(5)→ -Σ^∓_5(4)andΣ^z_4(5)→-Σ^z_5(4), where the subscripts 4 and 5 are the site indices (Refer to Fig.<ref>). Hence, there are only four independent parameters,J_±±,J_z±,J_±andJ_zzfor the Hamiltonian invariant under the mirror reflectionM_1. On the other hand, we note that Eq.(<ref>) does not give any further constraints on the parameters of the Hamiltonian. Under the 5-fold rotation, the pseudospin operators transform asΣ_±^(i)→ e^∓4π/5Σ_±^(j). Thus, the bond dependent phase factors are added in order to make the Hamiltonian invariant under the 5-fold rotations. Letα_ijandβ_ijbe the matrix of the additional phase factor for the interaction term between thei-th site and thej-th site<cit.> (See the main text for the definition ofα_ijandβ_ij). Fig.<ref> shows the five types of nearest neighbor bond orientations, red, green, blue, cyan, and black. Referring to the indices in Fig.<ref>, the bond-orientation dependent phase factor are given as, α_ij=1, (i,j) (j,i)∈{(1,2),(4,5),(3,7),(6,10),(8,9),(11,12)}, α_ij=e^i2π/5, (i,j) (j,i)∈{(2,3),(4,7),(1,5),(6.9),(10,11),(8,12)}, α_ij=e^i4π/5, (i,j) (j,i)∈{(1,6),(3,4),(2,8),(5,11),(9,10),(7,12)}, α_ij=e^-i4π/5, (i,j) (j,i)∈{(1,3),(5,6),(7,8),(4,11),(2,9),(10,12)}, α_ij=e^-i2π/5, (i,j) (j,i)∈{(1,4),(3,8),(7,11),(9,12),(5,10),(2,6)}. For a givenα_ij,β_ij=(α_ij^*)^2. § ICOSAHEDRON QUASICRYSTAL DERIVED FROM CUT-AND-PROJECT SCHEME The cut-and-project scheme for constructing the icosahedral quasicrystal is introduced in this section<cit.>. The icosahedral point group symmetry is compatible in 6D space. Therefore, one can construct the icosahedral quasicrystal in 3D space as the lattice points descended from the 6D hypercubic lattice. In detail, let us consider the 6D hypercubic latticeℒ={x | x=m_i e_i, m_i∈ℤ, 1≤ i≤ 6}, wheree_iis a standard unit vector. The 6D space is decomposed by two projection mapsπandπ^⊥. Each of them projects the lattice points inℒonto the subspace of the quasicrystal (physical space) and its orthogonal complement subspace (perpendicular space), respectively. To produce the nontrivial quasicrystalline pattern, the physical space should have an irrational angle to the lattice surface. However, for such an irrational angle, the images of the projection of whole lattice points,π(ℒ), densely cover the physical space. This violates the uniform discreteness of the definition of quasicrystals. 
So one should choose a subset ofℒ, which is the (relatively) compact subset of the perpendicular spaceπ^⊥and is often called the window, e.g.K. Only if the image ofπ^⊥belongs to the windowK, we project the lattice points. The resulting projection image in physical space is the discrete quasicrystalline lattice structure. As a standard choice of the window,K=π^⊥(𝒲(0)), where𝒲(0)is the Wigner-Seitz cell of the origin. π =[ 0 τ -1 0 1 -τ; τ -1 0 -τ 0 -1; 1 0 -τ 1 -τ 0 ] π^⊥ =1/√(2+τ)[ 0 -1 -τ 0 τ 1; 1 τ 0 -1 0 τ; τ 0 1 τ 1 0 ]. Then, the windowKbecomes an icosahedron whose vertices areπ^⊥-projection images of the vertices of the Wigner-Seitz cell (See Fig.<ref> (a).). The icosahedral quasicrystal (See Fig.<ref> (b)) is given by theπ-projection of 6D hyper-cubic lattice points, whose image ofπ^⊥belongs to the windowKin Fig.<ref> (a). § POSSIBLE INTERSHELL MAGNETIC ORDERINGS AND FRUSTRATIONS This section shows an example of possible intershell magnetic states resulting from the intershell interaction terms. For each pair of nearest neighboring icosahedral shells, there are two pairs of nearest neighboring sites (see Fig. <ref>). Fig.<ref> shows an example of possible long-range intershell magnetic states regarding the octupolar degrees of freedom. In particular, Fig.<ref> (a) shows the octupolar magnetic state whenJ_zzis ferromagnetic for both the intra- and the intershell sites. Every icosahedron is ordered as one of the two ground states of the Ising model. On the other hand, Fig.<ref> (b) shows the octupolar magnetic state whenJ_zzis ferromagnetic for the intrashell sites but antiferromagnetic for the intershell sites. This gives rise to the inter-shell geometric frustration. Note that the orange dashed lines representing inter-shell interactions form the triangles leading to intershell geometric frustration. Specifically, 12 icosahedron shells sit on the vertices of the inflated icosahedron shape in the icosahedral quasicrystal. Therefore, considering|FM_±⟩as two ferromagnetic Ising ground states of a single icosahedron shell, the antiferromagnetic interaction between the shells results in an antiferromagnetic Ising order on the inflated icosahedron in terms of|FM_±⟩. This is how the geometrical frustration is created in the enlarged spatial scale in the icosahedral quasicrystal. Although we have given the two particular examples, depending on the intra-shell distances and their magnetic exchange couplings, it can stabilize different magnetic ground state case by case. § DERIVATION OF THE UNIQUE GROUND STATE FOR |J_±|≪ J_ZZ Here, we perturbatively derive the unique ground states emergent by smallJ_±compared toJ_zz. We apply the degenerate perturbation theory for 72-fold degenerate ground states forJ_zz>0. Note that based on the symmetry ground, we can classify the 72 states into three groups,|ψ_A_n⟩,|ψ_B_l⟩and|ψ_C_r⟩, where1≤ n,l≤ 30, while1≤ r≤ 12. Figs.<ref> (a,b,c) show the representative states in each group,A,B,C, respectively. Each group is generated by applying orientation-preserving icosahedral symmetry operations to these states in Figs.<ref> (a,b) and (c), respectively. Every orientation-preserving operations are in the maximal normal subgroup of the full icosahedral symmetry group,I⊴ I_h. NoteIis called as the icosahedral rotation group of order 60<cit.>. First, the states shown in Figs.<ref> (a) and (b) represent the states without 5-fold rotational symmetry. 
Instead, they have another orientation-preserving icosahedral symmetry, which maps theZaxis to theZ'axis in Figs.<ref> (a) and (b) respectively. Since the icosahedral rotation group has the order,|I|=60, for each state in Figs.<ref> (a) and (b), we have 30 degenerate states generated byI, respectively. We denote them as|ψ_A_n⟩and|ψ_B_l⟩, where1≤ n,l≤ 30. Note that the states in Fig.<ref> (a) and (b) are related by the mirror reflection of the sites with respect to theσplane depicted in Fig.<ref> (d). Here,σplane contains theZaxis and the site P. This mirror reflection,M_σwith respect to theσ-plane only exchanges the octupoles on the sites Q and R. Hence, the states,|ψ_B_l⟩are also obtained by applying the cosetIM_σ={gM_σ| g∈ I }to the state in Fig.<ref> (a). On the other hand,|ψ_C_r⟩are 12 degenerate states with 5-fold rotational symmetry (Refer to Fig.<ref> (c) and the inset representing the viewpoint along the rotational symmetry axis,Zaxis.). Since the state itself has 5-fold rotational symmetry, we have only 12 distinct orientation preserving transformations which give rise to the degenerate states in the groupC. Hence, we may let the states in the groupCas|ψ_C_r⟩, where1≤ r≤ 12. Now, let us apply perturbative method to investigate the quantum fluctuation based on above groups. Take H_±=J_±∑_⟨i,j|⟩(Σ_i^+Σ_j^-+Σ_i^-Σ_j^+). Then, we have ⟨ψ_A_n|H_±|ψ_A_m|=⟩⟨ψ_B_l|H_±|ψ_B_k|=⟩⟨ψ_C_r|H_±|ψ_C_s|=⟩0, where1≤ n,m,l,k≤ 30, 1≤ r,s≤ 12. This is becauseH_±has zero matrix element between two states related by the orientation preserving transformation. Thus, one can write the block off-diagonal matrix representation ofH_±for the 72-fold degenerate states, say[H_±]_A,B,Cas [H_±]_A,B,C=[ 0 T_AB T_AC; T_BA 0 T_BC; T_CA T_CB 0 ] Here, the subscripts,A,BandCstand for the each groups.T_BA=T_AB^†is a30×30matrix, whileT_AC=T_CA^†andT_BC=T_CB^†are30×12matrices. Here, each non-zero matrix element isJ_±. We find that for each state inA (B), there are 4, and 2 different states inB (A)andC, respectively such that the matrix elements of[H_±]_A,B,CisJ_±. On the other hand, for each state inC, there are 5 different states inAandB, respectively, and hence in total 10 states such that the matrix elements of[H_±]_A,B,CisJ_±. One can use the graph,Gin Fig.<ref> to examine above facts. Here, the nodes of the graphGrepresents to each state (red circle, green pentagram, and blue square represent the states inA,BandC, respectively), and two nodes are connected by an edge if they admit nonzero matrix element of[H_±]_A,B,C. Thus,[H_±]_A,B,C/J_±is the adjacency matrix of the graphG<cit.>. The graphGis the decorated pentakis icosidodecahedron (See Fig.<ref>.). The pentakis icosidodecahedron (Fig.<ref> (a)) has 42 vertices in two different types depending on the local shape of the vertex, which are called pentagonal and hexagonal sites whose connectivity is 5 and 6, respectively<cit.>. Here, on the other hand, we term by the decorated pentakis icosidodecahedron, whose hexagonal sites are doubly occupied representing the states inA(red circles) andB(green pentagram), while the pentagonal sites of the pentakis icosidodecahedron are representing the states inC(blue square) (See Fig.<ref> (b).). Hence, in the graphG, each node for the states inA (B)are connected to the four nodes for the states inB (A), and two nodes for the states inC, while each node for the states inCare connected to the five nodes for the states inAandB, respectively. 
It allows us to write the general form of the ground state,|GS⟩, as |GS⟩=a∑_n=1^30|ψ_A_n⟩+b∑_l=1^30|ψ_B_l⟩+c∑_r=1^12|ψ_C_r⟩, where we have three real coefficients,a,bandcfor|ψ_A_n⟩,|ψ_B_l⟩and|ψ_C_r⟩, respectively. Note that[H_±]_A,B,Cis a real symmetry matrix, so we can assume without loss of generality thata,b,care real. Then we have, E(a,b,c)=⟨GS|H_±|GS|=⟩240ab+120ac+120bc. In addition, we have the normalization conditionN(a,b,c)=30a^2+30b^2+12c^2=1. The critical point is found by equating, ∇ E(a,b,c)=λ∇ N(a,b,c), whereλis the Lagrange multiplier. Fora=b=(1+√(6))c/5,E(a,b,c)is maximized, while fora=-bandc=0,E(a,b,c)is minimized. Each case corresponds to the unique ground state for the ferromagnetic and antiferromagneticJ_±. Remarkably, there is no degenerate ground state in either case. Thus, any small quantum fluctuation given byH_±completely eliminates the degeneracy, by superposing 72 ground states of the antiferromagnetic Ising model.
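For completeness, the constrained extremisation of E(a,b,c) above is also easy to verify numerically. Since E is a quadratic form and N(a,b,c)=1 defines an ellipsoid, the extrema are the extreme generalized eigenvalues of the associated matrix pair. The short sketch below is our own check rather than part of the original derivation; it reproduces the minimiser a=-b, c=0 and the maximiser a=b=(1+√6)c/5.

```python
import numpy as np
from scipy.linalg import eigh

# E(a,b,c) = x^T M x with x = (a, b, c); N(a,b,c) = x^T D x.
M = np.array([[0., 120., 60.],
              [120., 0., 60.],
              [60., 60., 0.]])
D = np.diag([30., 30., 12.])

# Extrema of E on the ellipsoid N = 1 are the extreme generalized eigenvalues of (M, D);
# scipy returns them in ascending order with eigenvectors normalised so that v^T D v = 1.
vals, vecs = eigh(M, D)
print("E_min =", vals[0], " at (a, b, c) =", vecs[:, 0])    # expect a = -b, c = 0
print("E_max =", vals[-1], "at (a, b, c) =", vecs[:, -1])   # expect a = b = (1 + sqrt(6)) c / 5
```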
http://arxiv.org/abs/2307.00365v1
20230701152608
Understanding recent deep-learning techniques for identifying collective variables of molecular dynamics
[ "Wei Zhang", "Christof Schütte" ]
cs.LG
[ "cs.LG", "math.OC" ]
Understanding recent deep-learning techniques for identifying collective variables of molecular dynamics Wei Zhang ^* Christof Schütte ^*, ======================================================================================================== ^*Zuse Institute Berlin, Takustrasse 7, 14195 Berlin, Germany ^Institute of Mathematics, Freie Universität Berlin, Arnimallee 6, 14195 Berlin, Germany Email: [email protected], [email protected] The dynamics of a high-dimensional metastable molecular system can often be characterised by a few features of the system, i.e. collective variables (CVs). Thanks to the rapid advance in the area of machine learning, various deep learning-based CV identification techniques have been developed in recent years, allowing accurate modelling and efficient simulation of complex molecular systems. In this paper, we look at two different categories of deep learning-based approaches for finding CVs, either by computing leading eigenfunctions of infinitesimal generator or transfer operator associated to the underlying dynamics, or by learning an autoencoder via minimisation of reconstruction error. We present a concise overview of the mathematics behind these two approaches and conduct a comparative numerical study of these two approaches on illustrative examples. molecular dynamics, collective variable identification, eigenfunction, autoencoder, variational characterisation, deep learning § INTRODUCTION Molecular dynamics (MD) simulation is a mature computational technique for the study of biomolecular systems. It has proven valuable in a wide range of applications, e.g. understanding functional mechanisms of proteins and discovering new drugs <cit.>. However, the capability of direct (all-atom) MD simulations is often limited, due to the disparity between the tiny step-sizes that the simulations have to adopt in order to ensure numerical stability and the large timescales on which the functionally relevant conformational changes of biomolecules, such as protein folding, typically occur. One general approach to overcome the aforementioned challenge in MD simulations is by utilizing the fact that in many cases the dynamics of a high-dimensional metastable molecular system can be characterised by a few features, i.e. collective variables (CVs) of the system. In deed, many enhanced sampling methods (see <cit.> for a review) and approaches for building surrogate models <cit.> rely on knowing CVs of the underlying molecular system. While empirical approaches and physical/chemical intuition are still widely adopted in choosing CVs (e.g. mass centers, bonds, or angles), it is often difficult or even impossible to intuit biomolecular systems in real-life applications due to their high dimensionality, as well as structural and dynamical complexities. Thanks to the availability of numerous molecular data being generated and the rapid advance of machine learning techniques, data-driven automatic identification of CVs has attracted considerable research interests. Numerous machine learning-based techniques for CV identification have emerged, such as the well-known principal component analysis (PCA) <cit.>, diffusion maps <cit.>, ISOMAP <cit.>, sketch-map <cit.>, time-lagged independent component analysis (TICA) <cit.>, as well the kernel-PCA <cit.> and kernel-TICA <cit.> using kernel techniques. See <cit.> for reviews. The recent developments mostly employ deep learning techniques and largely fall into two categories. 
Methods in the first category are based on the operator approach for the study of stochastic dynamical systems. These include VAMPnets <cit.> and the variant state-free reversible VAMPnets (SRV) <cit.>, the deep-TICA approach <cit.>, and ISOKANN <cit.>, which are capable of learning eigenfunctions of Koopman/transfer operators. The authors of this paper have also developed a deep learning-based method for learning eigenfunctions of infinitesimal generator associated to overdamped Langevin dynamics <cit.>. Methods in the second category combine deep learning with dimension reduction techniques, typically by training autoencoders <cit.>. For instance, several approaches are proposed to iteratively train autoencoders and improve training data by “on-the-fly” enhanced sampling. These include the Molecular Enhanced Sampling with Autoencoders (MESA) <cit.>, Free Energy Biasing and Iterative Learning with Autoencoders (FEBILAE) <cit.>, the method based on the predictive information bottleneck framework <cit.>, the Spectral Gap Optimisation of Order Parameters (SGOOP) <cit.>, the deep Linear Discriminant Analysis (deep-LDA) <cit.>. Besides, various generalized autoencoders are proposed, such as the extended autoencoder (EAE) model <cit.>, the time-lagged (variational) autoencoder <cit.>, Gaussian mixture variational autoencoder <cit.>, and EncoderMap <cit.>. Motivated by these rapid advances, in this paper we study the two aforementioned categories of deep learning-based approaches for finding CVs, i.e. approaches for computing leading eigenfunctions of infinitesimal generator or transfer operator associated to the underlying dynamics and approaches that learn an autoencoder via minimisation of reconstruction error. We focus on theoretical aspects of these approaches in order to gain better understanding on their capabilities. The remainder of this article is organized as follows. In Section <ref>, we present an overview of the approaches for CV identification based on computing eigenfunctions. We give a brief introduction to infinitesimal generator and transfer operator, then we discuss motivations for the use of eigenfunctions as CVs in studying molecular kinetics, and finally we present variational characterisations as well as loss functions for learning eigenfunctions. In Section <ref>, we study autoencoders. We discuss the connection with PCA and present a characterisation of the optimal (time-lagged) autoencoder. In Section <ref>, we illustrate the numerical approaches for learning eigenfunctions and autoencoder by applying them to two simple yet illustrative systems. Appendix <ref> contains the proofs of two lemmas in Section <ref>. § EIGENFUNCTIONS AS CVS FOR THE STUDY OF MOLECULAR KINETICS ON LARGE TIMESCALES In this section, we consider eigenfunctions of infinitesimal generator and transfer operator that are associated to the underlying dynamics. We begin by introducing the relevant operators, whose eigenfunctions will be the focus of this section. After that we present two different perspectives, which motivate the use of eigenfunctions as CVs for kinetics study on large timescales. Finally, we discuss variational formulations of leading eigenvalues and eigenfunctions, which will be useful in designing loss functions for training artificial neural networks. §.§ Operator approach Generator. Molecular dynamics can be modelled by stochastic differential equations (SDEs). 
For both simplicity and mathematical convenience, we consider here the following SDE, often called the overdamped Langevin dynamics, dX_s = -∇ V(X_s) ds + √(2β^-1) dW_s , where X_s ∈ℝ^d is the system's state at time s∈ [0,+∞), V: ℝ^d→ℝ is a smooth potential function, W_s is a d-dimensional Brownian motion that mimics the effect of the noisy environment, and the noise strength β=(k_BT)^-1 is proportional to the inverse of the system's temperature T. We assume that dynamics (<ref>) is ergodic with respect to its unique invariant measure dμ(x) = π(x)dx, with π(x)= 1/Z e^-β V(x), x∈ℝ^d , where Z is a normalising constant. The infinitesimal generator of (<ref>) is a second-order differential operator, defined by ℒ f = -∇ V ·∇ f + 1/βΔ f = 1/βe^β V (e^-β V∇ f) , for a test function f: ℝ^d→ℝ. Corresponding to the reversibility of dynamics (<ref>), the generator ℒ is self-adjoint in L^2(μ) endowed with the weighted inner product ⟨ f, g⟩_μ:= ∫_ℝ^d fg dμ. In fact, using (<ref>)–(<ref>) and integration by parts, one can verify that ⟨ (-ℒ) f, g⟩_μ = ⟨ f, (-ℒ) g⟩_μ = 1/β𝐄_μ(∇ f·∇ g) , for two C^2-smooth test functions f, g: ℝ^d→ℝ, where 𝐄_μ(·) denotes the mathematical expectation with respect to the measure μ in (<ref>). We also define the energy ℰ(f)=1/β𝐄_μ(|∇ f|^2) , f: ℝ^d→ℝ , which is considered to be +∞ if the right hand side in (<ref>) is undefined. Under certain conditions on V, the operator -ℒ has purely discrete spectrum, consisting of a sequence of eigenvalues <cit.> 0 = λ_0 < λ_1 ≤λ_2 ≤⋯ , with the corresponding (orthogonal and normalised) eigenfunctions φ_0≡ 1, φ_1, φ_2, ⋯∈ L^2(μ). The leading (smallest) nontrivial eigenvalues in (<ref>) encode the large timescales of the underlying dynamics, whereas the corresponding eigenfunctions are closely related to its metastable conformations. Transfer operator. In contrast to the discussion above based on SDEs, transfer operator approach offers an alternative way to study dynamical systems without specifying the governing equations <cit.> and is hence attractive in developing numerical algorithms. In this framework, one assumes that the trajectory data is sampled from an underlying (equilibrium) system whose state y at time t+τ given its state x at time t can be modelled as a discrete-time Markovian process with transition density p_τ(y|x), for all t≥ 0, where τ>0 is called the lag-time and the process is assumed to be ergodic with respect to the unique invariant distribution μ in (<ref>). The transfer operator associated to this discrete-time Markovian process is defined as <cit.> 𝒯 u (x) = 1/π(x)∫_ℝ^d p_τ(x|y) u(y) π(y) dy , x ∈ℝ^d for a density (with respect to μ) u: ℝ^d→ℝ^+. We assume that the detailed balance condition is satisfied, i.e. p_τ(y|x)π(x) = p_τ(x|y)π(y) for all x,y∈ℝ^d. As a result, we have 𝒯 u (x) = 1/π(x)∫_ℝ^d p_τ(x|y) u(y) π(y) dy = ∫_ℝ^d p_τ(y|x) u(y) dy = 𝐄(u(X_τ)|X_0 = x) , which shows that in the reversible setting the transfer operator coincides with the element (at time τ) of the semigroup associated to the underlying process <cit.> [In literature, the expression in the last line of (<ref>) is also used to define Koopman operators for stochastic dynamics <cit.>. We stick to the notion of transfer operator and note that both operators are identical for reversible processes. We refer to <cit.> and the references therein for extensive study of stochastic dynamics using Koopman operators.]. Similar to the generator, one can show that 𝒯 is self-adjoint in L^2(μ) with respect to ⟨·,·⟩_μ (see (<ref>)). 
Also, in analogy to (<ref>), for a function f∈ L^2(μ) we define the energy ℰ_τ(f) = 1/2∫_ℝ^d∫_ℝ^d(f(y) - f(x))^2 p_τ(y|x) π(x) dx dy . The following lemma provides an alternative expression of (<ref>) using the transfer operator 𝒯. Denote by I: L^2(μ)→ L^2(μ) the identity map. For all f ∈ L^2(μ), we have ℰ_τ(f) = ∫_ℝ^d[(I-𝒯)f(x)] f(x) dμ(x) = ⟨ (I-𝒯)f, f⟩_μ . The proof of Lemma <ref> is straightforward and we present it in Appendix <ref>. Lemma <ref> and (<ref>) imply that all eigenvalues of 𝒯 are no larger than one. We assume that the spectrum of 𝒯 consists of discrete eigenvalues 1=ν_0 > ν_1 ≥⋯ and the largest eigenvalue ν_0=1 (corresponding to the trivial eigenfunction φ_0≡ 1) is non-degenerate. These eigenvalues and their corresponding eigenfunctions are of great interests in applications, since they encode the information about the timescales and metastable conformations of the underlying dynamics, respectively <cit.>. For the process defined by SDE (<ref>), in particular, the transfer operator and the generator satisfy 𝒯=e^τℒ, which implies that their eigenvalues are related by ν_i=e^-τλ_i with the identical eigenfunctions φ_i, for i≥ 0 <cit.>. §.§ Motivations for using eigenfunctions as CVs There is a large amount of literature on the study of eigenfunctions of the infinitesimal generator, transfer operator, or Koopman operator. For the transfer operator 𝒯, for instance, many of these studies are motivated by the connection between the (pairwise orthogonal and normalised) eigenfunctions and the consecutive actions of 𝒯 on test functions f ∈ L^2(μ), i.e. in the reversible case, 𝒯^n f(x) = 𝐄(f(X_nτ)|X_0=x) = 𝐄_μ(f) + ∑_i=1^+∞⟨ f, φ_i⟩_μν_i^n φ_i(x) , x∈ℝ^d,   n = 1,2, … . Since ν_1, ν_2, … are all smaller than 1, for large integers n, the action 𝒯^n f is mainly determined by the leading eigenvalues of 𝒯 in (<ref>) and the corresponding eigenfunctions. Therefore, knowing the leading eigenvalues and eigenfunctions of 𝒯 helps study the map 𝒯^n for large n, which in turn helps understand the behavior of the underlying dynamics at large time T=nτ. For Koopman operator, the leading eigenfunctions define the optimal linear Koopman model for features (functions) <cit.>. Here, we contribute to this discussion by providing two different perspectives that directly connect eigenfunctions to the underlying dynamics and to the choices of CVs. We assume that the dynamics satisfies SDE (<ref>) and we will work with its generator ℒ. Most of the results below can be extended to a more general setting, e.g. overdamped Langevin dynamics with state-dependent diffusion coefficients. It is also possible to obtain parallel results for the discrete-time Markovian process involving the transfer operator 𝒯 [This is an ongoing work that will be published in future.]. Let ξ = (ξ_1, ξ_2, …, ξ_k)^⊤: ℝ^d→ℝ^k be a smooth CV map, where 1 < k≪ d. Ito's formula gives dξ(X_s) = ℒξ(X_s) ds + √(2β^-1)∇ξ(X_s) dW_s , where ∇ξ(x) ∈ℝ^k× d denotes the Jacobian matrix of ξ at x∈ℝ^d. Given the projection dimension k, we are interested in finding a good CV map ξ that is both non-trivial and non-degenerate. In other words, ξ should be non-constant and the components ξ_1,ξ_2,…, ξ_k are linearly independent (or, the image of ξ spans a k-dimensional space). These two requirements can be met by imposing the following conditions without loss of generality 𝐄_μ(ξ_i) = 0 , ⟨ξ_i, ξ_j⟩_μ = 𝐄_μ(ξ_iξ_j)=δ_ij , 1 ≤ i ≤ j ≤ k . Optimal CVs for the study of slow motions. 
For the first perspective, we make an analogy between the dynamics of (<ref>) on large timescales and the slow motions in it. This suggests that a good CV map ξ that is capable of capturing the behavior of (<ref>) on large timescales should meet the following criteria: ξ(X_s) evolves much more slowly comparing to the dynamics X_s itself. (C1) Since ξ(X_s) satisfies SDE (<ref>), to meet criteria (C1) it is therefore natural to require the magnitude of both terms on the right hand side of (<ref>) to be small (in the sense of average with respect to the invariant distribution μ in (<ref>)). The latter can be formulated as an optimisation problem min_ξ_1, …, ξ_k∑_i=1^k ω_i ∫_ℝ^d(|ℒξ_i|^2(x) + |∇ξ_i|^2(x)) dμ(x), , where ω_1 ≥ω_2 ≥…≥ω_k >0 are weights assigned to the k equations in (<ref>). One can choose the weights to be identical, but using pairwise distinct weights could help eliminate non-uniqueness of the optimiser of (<ref>) due to permutations. We make the following claim concerning the optimiser of (<ref>). Assume that -ℒ has purely discrete spectrum consisting of the eigenvalues in (<ref>). Then, the minimum of (<ref>) is attained by the first k (non-trivial) eigenfunctions of -ℒ, i.e. when ξ_i=φ_i for i=1,…, k. Using the identities in (<ref>), one can reformulate the optimisation problem (<ref>) as min_ξ_1, …, ξ_k∑_i=1^k ω_i ⟨ [(-ℒ)^2 + β (-ℒ)] ξ_i, ξ_i⟩_μ, . The conclusion follows once we show that the minimum of (<ref>) is attained when ξ_i=φ_i for i=1,…, k. This can be done straightforwardly by repeating the proof of Theorem <ref> below (see <cit.>) for the operator (-ℒ)^2 + β (-ℒ) and using the fact that both (-ℒ)^2 + β (-ℒ) and -ℒ have the same set of eigenfunctions. It is not difficult to see that the eigenfunctions φ_1, …, φ_k actually minimise both terms in the objective (<ref>) simultaneously (subject to (<ref>)). The following identity provides an explicit expression for the first term in (<ref>), which involves the operator (-ℒ)^2. For any smooth function f:ℝ^d→ℝ such that ⟨ (-ℒ)^2 f, f⟩_μ <+∞, we have ∫_ℝ^d |ℒf|^2 dμ = ⟨ (-ℒ)^2 f, f⟩_μ = 1/β∫_ℝ^d[HessV(∇ f, ∇ f) + 1/β|∇^2 f|^2_F ] dμ , where |∇^2 f|_F denotes the Frobenius norm of the matrix ∇^2 f. The proof of Lemma <ref> is given in Appendix <ref>. The integrand of the rightmost integral in (<ref>) consists of the Hessian of the potential V and a regularising term. Loosely speaking, since the eigenfunctions minimise (<ref>) subject to (<ref>), (<ref>) reveals the connection between the (global) eigenfunctions of (-ℒ) to the (local) eigenvectors of the potential function V. A final remark on (<ref>) is that it does not rely on the specific form of the SDE. Therefore, in principle it can be used as a criteria of good CVs in the case where the SDE has a more general form, e.g. a non-reversible SDE or underdamped Langevin dynamics. In these general settings, it will be interesting to study whether (<ref>) can be solved efficiently using approaches such as physics-informed neural networks (PINN) <cit.>. Optimal CVs for building effective dynamics. The second perspective is related to the effective dynamics of (<ref>) using conditional expectations <cit.>. Specifically, note that the SDE of ξ(X_s) given by (<ref>) is non-closed, in the sense that the terms on its right hand side still depend on the full state X_s ∈ℝ^d. 
The authors in <cit.> proposed an effective dynamics as a Markovian approximation of (<ref>), which is described by the SDE dz(s) = b(z(s)) ds + √(2β^-1)σ(z(s)) dw(s) , where w(s) is a k-dimensional Brownian motion, the coefficients b: ℝ^k →ℝ^k and σ∈ℝ^k× k are defined by b_l(z) = 𝐄_μ_z(ℒξ_l) ,  1 ≤ l ≤ k, (σσ^⊤)(z) = 𝐄_μ_z(∇ξ∇ξ^⊤) , for  z ∈ℝ^k , respectively. In the above, for z ∈ℝ^k, 𝐄_μ_z(·) denotes the conditional expectation on the level set Σ_z = {x∈ℝ^d |ξ(x) = z} with respect to the so-called conditional measure μ_z on Σ_z: dμ_z(x) = 1/Q(z)e^-β V(x)/Z[(∇ξ∇ξ^⊤)(x)]^-1/2 dν_z(x) = 1/Q(z)e^-β V(x)/Zδ(ξ(x)-z) dx , where the first equality follows from the co-area formula, Q(z) is a normalising constant, and ν_z denotes the surface measure on Σ_z. We refer to <cit.> for detailed discussions about the definition and properties of the effective dynamics (<ref>). Note that the effective dynamics (<ref>) can be defined with a general CV map ξ. A natural question is how to choose ξ such that the resulting effective dynamics is a good approximation of the original dynamics <cit.>. One way to quantify the approximation quality of (<ref>) is by comparing its timescales to the timescales of the original dynamics <cit.>. For the overdamped Langevin dynamics (<ref>), in particular, the infinitesimal generator of its effective dynamics (<ref>), denoted by ℒ, is again self-adjoint in an appropriate Hilbert space <cit.>. Assume that -ℒ has purely discrete spectrum, which consists of eigenvalues 0=λ_0 < λ_1 ≤λ_2 ≤⋯, and let φ_i: ℝ^k→ℝ be the corresponding orthonormal eigenfunctions. The following result estimates the approximation error of the effective dynamics in terms of eigenvalues. Recall the energy ℰ defined in (<ref>). For i=1,2,…, we have λ_i ≤λ_i ≤λ_i + ℰ(φ_i - φ_i ∘ξ) . In particular, when ξ(x) = (φ_1(x), φ_2(x), ⋯, φ_k(x))^⊤∈ℝ^k, we have λ_i = λ_i, for 1 ≤ i ≤ k. Proposition <ref> implies that, for a general CV map ξ, the eigenvalues associated to the effective dynamics are always larger or equal to the corresponding true eigenvalues, and the approximation error depends on the closeness between the corresponding eigenfunctions (measured by the energy ℰ). Also, choosing eigenfunctions associated to the original dynamics as the CV map ξ yields the optimal effective dynamics (<ref>), in the sense that it preserves the corresponding eigenvalues (timescales). §.§ From variational characterisations to loss functions In the following, we discuss variational characterisations of eigenfunctions for both generator and transfer operator. These characterisations are useful in developing numerical algorithms <cit.>, in particular in designing loss functions in recent deep learning-based approaches <cit.>. For the generator ℒ, note that (<ref>) has already given a variational characterisation of the leading eigenfunctions φ_1, …, φ_k thanks to Proposition <ref>. However, as mentioned in Section <ref>, the leading eigenfunctions actually minimise both terms in (<ref>) simultaneously and a simpler characterisation is preferred for numerical purposes. In this regard, we record the following characterisation obtained in <cit.>. Let k∈ℕ and ω_1 ≥…≥ω_k >0. Define ℋ^1 := {f∈ L^2(μ) | 𝐄_μ(f)=0, ⟨ (-ℒ f, f⟩_μ < +∞}. We have ∑_i=1^k ω_iλ_i =min_f_1,…, f_k∈ℋ^1∑_i=1^k ω_i ℰ(f_i) , where ℰ denotes the energy (<ref>), and the minimisation is over all f_1,f_2,…, f_k∈ℋ^1 such that ⟨ f_i,f_j⟩_μ = δ_ij , ∀ i,j ∈{1,…,k} . Moreover, the minimum in (<ref>) is achieved when f_i=φ_i for 1 ≤ i ≤ k. 
To apply Theorem <ref> in designing learning algorithms, we use the right hand side of (<ref>) as objective and add penalty term to it in order to incorporate the constraints (<ref>). In the end, we obtain the loss function that can be used to learn eigenfunctions of the generator by training neural networks: Loss(f_1, f_2, …, f_k) = 1/β∑_i=1^kω_i 𝐄^data(|∇ f_i|^2)/Var^data (f_i) + α∑_1≤ i_1 ≤ i_2 ≤ k(Cov^data(f_i_1,f_i_2) - δ_i_1i_2)^2 , where α is a penalty constant, and 𝐄^data, Var^data, Cov^data denote empirical estimators that approximate the mean, variance, and co-variance with respect to the measure μ, respectively. For brevity, we omit further discussions on the loss (<ref>), and we refer to <cit.> for more details. For the transfer operator 𝒯, using the same proof of Theorem <ref> (see <cit.>) and Lemma <ref> we can prove the following variational characterisation. Let k∈ℕ and ω_1 ≥…≥ω_k >0. Assume that 𝒯 has discrete spectrum consisting of the eigenvalues in (<ref>) with the corresponding eigenfunctions φ_i, i≥ 0. Define L^2_0(μ) := {f∈ L^2(μ) | 𝐄_μ(f)=0}. We have ∑_i=1^k ω_i(1-ν_i)=min_f_1,…, f_k∈ L^2_0(μ)∑_i=1^k ω_i ℰ_τ(f_i) , where ℰ_τ is the energy defined in (<ref>) for 𝒯, and the minimisation is over all f_1,f_2,…, f_k∈ L^2_0(μ) under the constraints (<ref>). Moreover, the minimum in (<ref>) is achieved when f_i=φ_i for 1 ≤ i ≤ k. We note that similar variational characterisations for eigenfunctions of transfer operator or Koopman operator have been studied in <cit.>. As in the case of generator, Theorem <ref> motivates the following loss function for learning eigenfunctions of the transfer operator 𝒯 [For overdamped dynamics (<ref>), we have 1 - ν_i/τ=1-e^-τλ_i/τ≈λ_i when τ is small, where λ_i is the corresponding eigenvalue of the generator. Based on this relation, we include the constant 1/τ in the first term of (<ref>). Also note that in contrast to (<ref>) time-series data is required in order to use the loss (<ref>) in training.]: Loss_τ(f_1, f_2, …, f_k) = 1/2τ∑_i=1^kω_i 𝐄^data|f_i(X_· +τ) - f_i(X_·)|^2/Var^data (f_i) + α∑_1≤ i_1 ≤ i_2 ≤ k(Cov^data(f_i_1,f_i_2) - δ_i_1i_2)^2 . Compared to VAMPnets <cit.>, the loss (<ref>) imposes orthogonality constraints (<ref>) explicitly and directly targets the leading eigenfunctions rather than basis of eigenspaces. Also, as opposed to the approach in <cit.>, training with both losses (<ref>) and (<ref>) does not require backpropagation on matrix eigenvalue problems. § ENCODER AS CVS FOR LOW-DIMENSIONAL REPRESENTATION OF MOLECULAR CONFIGURATIONS In this section, we briefly discuss autoencoders in the context of CV identification for molecular dynamics. An autoencoder <cit.> on ℝ^d is a function f that maps an input data x ∈ℝ^d to an output y∈ℝ^d by passing through an intermediate (latent) space ℝ^k, where 1 ≤ k < d. It can be written in the form f=f_dec∘ f_enc, where f_enc: ℝ^d→ℝ^k and f_dec: ℝ^k→ℝ^d are called an encoder and a decoder, respectively. The integer k is called the encoded dimension (resp. bottleneck dimension). In other words, under the mapping of the autoencoder f, the input x is first mapped to a state z in the latent space ℝ^k by the encoder f_enc, which is then mapped to y in the original space by the decoder f_dec. In practice, both the encoder and the decoder are represented by artificial neural networks (see Figure <ref>). 
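As a concrete illustration, a minimal PyTorch sketch of such an encoder–decoder pair (with tanh activations, as in the numerical section below) could look as follows. The layer widths and the toy training loop are placeholders of our own, not the exact architectures and settings used in the experiments.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """f = f_dec o f_enc with a k-dimensional bottleneck."""
    def __init__(self, d: int, k: int, width: int = 30):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, k),
        )
        self.decoder = nn.Sequential(
            nn.Linear(k, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, d),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Toy training on data x (shape N x d) with the squared reconstruction error;
# for the time-lagged variant one would compare the output to the states
# shifted by the lag time tau instead of to x itself.
model = Autoencoder(d=2, k=1)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-3)
x = torch.randn(1000, 2)                       # placeholder for trajectory data
for epoch in range(200):
    optimizer.zero_grad()
    loss = ((model(x) - x) ** 2).sum(dim=1).mean()
    loss.backward()
    optimizer.step()
```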
Given a set of data x^(1), x^(2), …, x^(N)∈ℝ^d, they are typically trained by minimising the empirical reconstruction error Loss^AE(f_enc, f_dec) = 1/N∑_i=1^N |f_dec∘ f_enc(x^(i)) - x^(i)|^2 . In the context of CV identification for molecular systems, the trained encoder f_enc is used to define the CV map, i.e. ξ=f_enc. Note that the loss (<ref>) is invariant under permutation of the training data. For trajectory data, instead of (<ref>) it would be benefical to employ a loss that incorporates temporal information in the data. In this regard, several variants, such as time-lagged autoencoders <cit.> and the extended autoencoder using committor function <cit.>, have been proposed in order to learn low-dimensional representations of the system that can capture its dynamics. Connection with PCA. An autoencoder can be viewed as a nonlinear generalisation of PCA, which is a widely used technique for dimensionality reduction. To elucidate their connection, let us assume without loss of generality that the data satisfies 1/N∑_i=1^N x^(i)=0 and recall that the PCA algorithm actually solves the optimisation problem min_V_k∑_i=1^N | x^(i) - V_kV_k^⊤ x^(i)|^2 among matrices V_k ∈ℝ^d× k with k orthogonal unit vectors as columns <cit.>. Comparing (<ref>) and (<ref>), it is apparent that autoencoder can be considered as a nonlinear generalisation of PCA and it reduces to PCA when the encoder and decoder are restricted to linear maps given by f_enc(x) := V_k^⊤ x and f_dec(z) := V_k z for x∈ℝ^d, z∈ℝ^k, respectively. Characterisation of time-lagged autoencoders. We give a characterisation of the optimal encoder and the optimal decoder in the time-lagged autoencoders <cit.>. Assume that the data x^(0), x^(1), …, x^(i),… comes from the trajectory of an underlying ergodic process with invariant measure μ (<ref>) at time iΔ t, where Δ t>0 and i=0,1,…. Also assume that, for some τ>0, the state y of the underlying system after time τ given its current state x can be described as an ergodic Makrovian jump process with the transition density p_τ(y|x) (see the discussion on transfer operator in Section <ref>). For simplicity, we assume τ=jΔ t for some integer j>0. The time-lagged autoencoder is an autoencoder trained with the loss Loss^AE_τ(f_enc, f_dec) = 1/N-j∑_i=0^N-j-1 |f_dec∘ f_enc(x^(i)) - x^(i+j)|^2 , which reduces to the standard reconstruction loss (<ref>) when j=0. Let us consider the limit of (<ref>) when N→ +∞. Given the encoder f_enc, denote by μ^f_enc_z the conditional measure on the level set Σ^f_enc_z:={x∈ℝ^d|f_enc(x)=z} for z ∈ℝ^k (see (<ref>) for definition) and let Q^f_enc(z) be the corresponding normalising constant in (<ref>). Using (<ref>) and ergodicity, we have Loss^AE_τ(f_enc, f_dec) = lim_N→ +∞1/N-j∑_i=0^N-j-1 |f_dec∘ f_enc(x^(i)) - x^(i+j)|^2 = ∫_x∈ℝ^d∫_y∈ℝ^d |f_dec∘ f_enc(x) - y|^2 p_τ(y|x) dμ(x) dy = ∫_x∈ℝ^d∫_y∈ℝ^d[∫_z∈ℝ^k |f_dec(z)- y|^2δ(f_enc(x)-z)dz] p_τ(y|x) dμ(x) dy = ∫_z∈ℝ^k[∫_y∈ℝ^d∫_x∈Σ^f_enc_z |f_dec(z)- y|^2 p_τ(y|x) dμ^f_enc_z(x) dy] Q^f_enc(z) dz = ∫_z∈ℝ^k[𝐄_y ∼μ^f_enc_z,τ|f_dec(z)- y|^2] Q^f_enc(z) dz = 𝐄_z ∼μ^f_enc[𝐄_y ∼μ^f_enc_z,τ|f_dec(z)- y|^2] , where dμ^f_enc= Q^f_enc(z) dz is a probability measure on ℝ^k, and we have denoted by μ^f_enc_z,τ the probability measure on ℝ^d defined by dμ^f_enc_z,τ(y)= (∫_x∈Σ^f_enc_z p_τ(y|x) dμ^f_enc_z(x)) dy , y ∈ℝ^d . 
Using the simple identity min_y' ∈ℝ^d𝐄_y ∼μ^f_enc_z,τ|y- y'|^2 = 𝐕𝐚𝐫_y∈μ^f_enc_z,τ (y) where the minimum is attained at y'= 𝐄_y ∼μ^f_enc_z,τ (y), we can finally write the minimisation of (<ref>) as min_f_enc,f_decLoss^AE_τ(f_enc, f_dec) = min_f_encmin_f_dec𝐄_z ∼μ^f_enc[𝐄_y ∼μ^f_enc_z,τ|f_dec(z)- y|^2] = min_f_enc𝐄_z ∼μ^f_enc[min_y'=f_dec(z)𝐄_y ∼μ^f_enc_z,τ|y'- y|^2] = min_f_enc𝐄_z ∼μ^f_enc[𝐕𝐚𝐫_y∼μ^f_enc_z,τ (y)] . Note that (<ref>) is the distribution of y after time τ starting from points x on the levelset Σ^f_enc_z distributed according to the conditional measure μ^f_enc_z. To summarize, (<ref>) implies that, when N→ +∞, training time-lagged autoencoder yields (in theory) the encoder map f_enc that minimises the average variance of the future states y (after time τ) of points x on Σ^f_enc_z distributed according to μ^f_enc_z, and the decoder that is given by the mean of the future states y, i.e. f_dec(z) = 𝐄_y∈μ^f_enc_z,τ (y) for z∈ℝ^k. Similar results hold for the standard autoencoder with the reconstruction loss (<ref>). In fact, choosing τ=0 in the above derivation leads to the conclusion that the optimal encoder f_enc minimises the average variance of the measures μ^f_enc_z on the levelsets. To conclude, we note that although the loss (<ref>) in time-lagged autoencoders encodes temporal information of data, from the characterisation (<ref>) it is not completely clear that this information will be able to yield encoders that are suitable to define good CVs (in the sense discussed in Section <ref>). In the next section, we will further compare autoencoders and eigenfunctions on concrete numerical examples. We refer to <cit.> for discussions on the capability and limitations of time-lagged autoencoders. § NUMERICAL EXAMPLES In this section, we show numerical results of eigenfunctions and autoencoders for two simple two-dimensional systems. For eigenfunctions, we only consider the transfer operator and the loss (<ref>) due to its simplicity. Numerical study on computing eigenfunctions for the generator using the loss (<ref>) can be found in <cit.>. The code for training is implemented in PyTorch. §.§ First example The first system satisfies the SDE (<ref>) with β=4.0 and the potential (taken from <cit.>) V(x_1, x_2) = (x_1^2-1)^2 + 1/ϵ (x_1^2 + x_2 - 1)^2, (x_1, x_2)^⊤∈ℝ^2 , where we choose ϵ=0.5. As shown in Figure <ref>, there are two metastable regions in the state space, and the system can transit from one to the other through a curved transition channel. We sample the trajectory of (<ref>) for 10^5 steps using Euler-Maruyama scheme with time step-size Δ t=0.005. The sampled states are recorded every 2 steps. This results in a dataset consisting of 5× 10^4 states, which will used in training neural networks [Note that the empirical distribution of the data (shown in Figure <ref>) slightly differs from the true invariant distribution μ of the dynamics. However, there are sufficiently many samples in both metastable regions and also in the transition region. In particular, the discrepancy between the empirical distribution and the true invariant distribution is not the main factor that determines the quality of the numerical results.]. We train neural networks with the loss (<ref>) for standard autoencoders and the loss (<ref>) for time-lagged autoencoders. In each test, since the total dimension is 2, we choose the bottleneck dimension k=1. The encoder is represented by a neural network that has an input layer of size 2, an output layer of size 1, and 4 hidden layers of size 30 each. 
The decoder is represented by a neural network that has an input layer of size 1, an output layer of size 1, and 3 hidden layers of size 30 each. We take tanh as activation function in all neural networks. In the training, we use Adam optimiser <cit.> with batch size 2× 10^4 and learning rate 0.005. The random seed is fixed to be 2046 and the total number of training epochs is set to 500. Figure <ref> shows the trained autoencoders with different lag-times. As one can see there, for both the standard autoencoder (τ=0.0) and the time-lagged autoencoder with a small lag-time (τ=0.5), the contour lines of the trained encoder match well with the stiff direction of the potential. The curves determined by the image of the decoders are also close to the transition path. However, the results for time-lagged autoencoders become unsatisfactory when the lag-time is chosen as 1.0 and 2.0. We also learn the first eigenfunction φ_1 of the transfer operator using the loss (<ref>), where we choose k=1, the coefficient ω_1=1.0, lag-time τ=1.0, and the penalty constant α=10.0. The same dataset and the same training parameters as in the training of autoencoders are used, except that for the eigenfunction we employ a neural network that has 3 hidden layers of size 20 each. The learned eigenfunction is shown in Figure <ref>. We can see that the eigenfunction is indeed capable of identifying the two metastable regions and its contour lines are well aligned with the stiff directions of the potential in the transition region (but not inside the metastable regions). §.§ Second example In the second example, we consider a system that satisfies the SDE (<ref>) with β=1.5 and the potential V(x_1, x_2) = e^1.5 x_2^2/1 + e^5(x_1^2-1) - 4 e^-4 (x_1-2)^2-0.4x_2^2 -5 e^-4 (x_1+2)^2-0.4x_2^2 + 0.2 (x_1^4 + x_2^4) + 0.5 e^-2x_1^2 , for (x_1,x_2)^⊤∈ℝ^2. As shown in Figure <ref>, there are again two metastable regions. The region on the left contains the global minimum point of V, and the region on the right contains a local minimum point of V. To prepare training data, we sample the trajectory of (<ref>) using Euler-Maruyama scheme with the same parameters as in the previous example, except that in this example we sample in total 5 × 10^5 steps and by recording states every 2 steps we obtain a dataset of size 2.5× 10^5. We learn the autoencoder with the standard reconstruction loss (<ref>) and the eigenfunction φ_1 of transfer operator with loss (<ref>), respectively. For both autoencoder and eigenfunction, we use the same network architectures as in the previous example. We also use the same training parameters, except that in this example a larger batch-size 10^5 is used and the total number of training epochs is set to 1000. The lag-time for transfer operator is τ=0.5. Figure <ref> shows the learned autoencoder and the eigenfunction φ_1. As one can see there, since the autoencoder is trained to minimise the reconstruction error and most sampled data falls into the two metastable regions, the contour lines of the learned encoder match the stiff directions of the potential in the metastable regions, but the transition region is poorly characterised. On the contrary, the learned eigenfunction φ_1, while being close to constant inside the two metastable regions, gives a good parameterisation of the transition region. We also tried time-lagged autoencoders with lag-time τ=0.5 and τ=1.0 (results are not shown here). 
But, we were not successful in obtaining satisfactory results as compared to the learned eigenfunction in Figure <ref>. § ACKNOWLEDGEMENT W. Zhang thanks Tony Lelièvre and Gabriel Stolz for fruitful discussions on autoencoders. The work of C. Schütte and W. Zhang is supported by the DFG under Germany's Excellence Strategy-MATH+: The Berlin Mathematics Research Centre (EXC-2046/1)-project ID:390685689. § PROOFS OF LEMMA <REF> AND LEMMA <REF> Applying the detailed balance condition and the second identity in (<ref>), we can derive ℰ_τ(f) = 1/2∫_ℝ^d∫_ℝ^d(f(y) - f(x))^2 p_τ(y|x) π(x) dx dy = 1/2∫_ℝ^d∫_ℝ^d(f(y)^2 - 2f(x)f(y) + f(x)^2) p_τ(y|x) π(x) dx dy = ∫_ℝ^d∫_ℝ^d f(x)^2 π(x) dx - ∫_ℝ^d∫_ℝ^d f(x)f(y) p_τ(y|x) π(x) dx dy = ∫_ℝ^d[(I-𝒯)f(x)] f(x) dμ(x) = ⟨ (I-𝒯)f, f⟩_μ . It is straightforward to verify the identity (Bochner's formula) 1/2Δ |∇ f|^2 = ∇(Δ f) ·∇ f + |∇^2 f|_F^2 , where ∇^2 f denotes the matrix with entries ∂^2 f/∂ x_i ∂ x_j for 1 ≤ i,j ≤ d and |∇^2 f|_F is its Frobenius norm. Using (<ref>), (<ref>), and (<ref>), we can derive ∫_ℝ^d |ℒf|^2 dμ = -1/β∫_ℝ^d∇ f·∇ (ℒf) dμ = -1/β∫_ℝ^d∇ f·∇ (-∇ V ·∇ f + 1/βΔ f) dμ = 1/β∫_ℝ^d[V(∇ f, ∇ f) + 1/2∇ |∇ f|^2·∇ V - 1/β∇ f ·∇Δ f] dμ = 1/β∫_ℝ^d[V(∇ f, ∇ f) + 1/2∇ |∇ f|^2·∇ V - 1/β(1/2Δ |∇ f|^2 - |∇^2 f|^2_F )] dμ = 1/β∫_ℝ^d[V(∇ f, ∇ f) - 1/2ℒ (|∇ f|^2) +1/β |∇^2 f|^2_F ] dμ = 1/β∫_ℝ^d[V(∇ f, ∇ f) + 1/β|∇^2 f|_F^2 ] dμ , where the last equality follows from the fact that ∫ℒ |∇ f|^2 dμ = 0. siamplain
http://arxiv.org/abs/2306.05568v1
20230608212438
Maximally Machine-Learnable Portfolios
[ "Philippe Goulet Coulombe", "Maximilian Goebel" ]
econ.EM
[ "econ.EM", "q-fin.PM", "q-fin.ST", "stat.ML" ]
Maximally Machine-Learnable Portfolios Philippe Goulet Coulombe Departement des Sciences Économiques, [email protected]. For helpful discussions and comments, we would like to thank Frank Diebold, Dave Rapach, Erik Christian Montes Schütte, Hugo Subtil, Dalibor Stevanovic, and Boyuan Zhang, as well as participants at the UQAM seminar and CFE London 2022. For research assistance, we are grateful to Mikael Frenette and Félix-Antoine Gaudreault. Université du Québec à Montréal Maximilian Göbel Bocconi University First Draft: December 5, 2022 This Draft: April 24, 2023 ================================================================================ When it comes to stock returns, any form of predictability can bolster risk-adjusted profitability. We develop a collaborative machine learning algorithm that optimizes portfolio weights so that the resulting synthetic security is maximally predictable. Precisely, we introduce MACE, a multivariate extension of Alternating Conditional Expectations that achieves the aforementioned goal by wielding a Random Forest on one side of the equation, and a constrained Ridge Regression on the other. There are two key improvements with respect to Lo and MacKinlay's original maximally predictable portfolio approach. First, it accommodates any (nonlinear) forecasting algorithm and predictor set. Second, it handles large portfolios. We conduct exercises at the daily and monthly frequency and report significant increases in predictability and profitability using very little conditioning information. Interestingly, predictability is found in bad as well as good times, and MACE successfully navigates the debacle of 2022. § INTRODUCTION A natural trading strategy is to buy (sell) assets whose price one expects to appreciate (depreciate) with higher certainty. That is, out of a basket of securities, it may be preferable to focus active trading efforts on the most predictable assets given one's information set. It is well known that marginal predictive accuracy improvements can translate into substantial profits without equally substantial risks for investors. But such desirable assets are in very short supply, if they can be identified at all. Consequently, a natural question is whether we can reach the ideal by constructing more predictable synthetic securities as linear combinations of (mostly) unpredictable existing ones. This paper devises a data mining technique that drills out those – Maximally Machine-Learnable Portfolios (MMLP) – by directly optimizing portfolio weights so as to maximize forecasting accuracy, and thereby risk-adjusted returns. The origins of such ideas lie within <cit.>'s maximally predictable portfolios (MPP), where a set of weights w is chosen so as to maximize the R^2 of w'r_t in a tightly specified linear regression based on a few factors. 
In this convenient sparse (both in returns and predictors) linear framework, obtaining the MPP reduces to solving an eigenvalue problem (akin to canonical correlation analysis) subject to a constraint.[In practice, solving the non-convex fractional programming optimization problem this quickly becomes a daunting task – particularly when considering many assets, factors, and constraints on the portfolio's composition <cit.>. For this reason, many alternative numerical methods have been proposed to solve more successfully the MPP problem <cit.>. Recent applications include <cit.> and <cit.>.] Fundamental limitations are apparent. First, the predictive function is of rather limited sophistication. Namely, it is linear, low-dimensional, and most often unregularized. This limits the obtained MPPs to lie within a narrow space of maximally linearly-predictable portfolios. Clearly, additional patterns of predictability are likely to be found when allowing for complex nonlinear relationships while keeping an eye on the characteristically hostile signal-to-noise ratio (SNR) of asset pricing applications. Second, the portfolio side of the equation is similarly unregularized, opening two evenly unpleasant routes. One can either limit the number of assets to be included and severely bound the space of MPP candidates, or run into the well-documented estimation problems of (large) covariance matrices <cit.>. MMLPs are designed to avoid all of the above by including a powerful nonlinear nonparametric function approximator on the right hand side, and regularization schemes for both the learnable portfolio weights and the corresponding predictive function. Those qualities, flexible nonlinearities and thoughtful regularization, have both been instrumental to the ML renaissance in empirical asset pricing <cit.>, and will be so again crossing the bridge from MPP to MMLP. 0.2cm MACE. We introduce MACE, which stands for Multivariate Alternating Conditional Expectations, and is a multifaceted generalization of <cit.>'s ACE algorithm. The latter was originally designed for nonlinearly transforming a univariate regression target to maximize association. As the name suggests, ACE achieves its aim by alternating the estimation of two functions (one for the right-hand side (RHS), and another for the left-hand side (LHS)) taking the other as fixed at each iteration, very much in the spirit of EM algorithms. Adapting ACE to the MMLP problem, MACE modifies it in two key aspects. First, the LHS is multivariate (an extension) and linear (a restriction). Second, it replaces ACE's rudimentary additive polynomial models by a Ridge Regression (RR) on the portfolio side and a Random Forest (RF) on the prediction side. RR provides a linear and regularized fit for the LHS, avoiding overfitting and non-plausible allocations. RF is a powerful off-the-shelf predictive algorithm that (i) handles high-dimensional data, (ii) can approximate a wide range of unspecified nonlinearities, (iii) requires little tuning, (iv) very rarely overfits. Then comes a panoply of algorithmic details that make the cohabitation of aforementioned elements possible: a learning rate, block out-of-bag sampling, decreasingly random optimization, and bagging strategies rather than predictions. Those are all extensively discussed in the paper. 0.1cm We also examine the link between MACE and traditional mean-variance portfolio optimization. In short, it minimizes the portfolio variance that is orthogonal to the information set. 
Given the low level of predictability that is characteristic for this application, the unconditional and conditional solutions are not miles apart in terms of resulting variance. This simple observation provides an explanation for the good variance properties of the algorithm as well as previous observations for MPPs <cit.>. It also explains why, as we will see, that taking the MACE portfolio as fixed and not trading it according to RF also delivers competitive results. 0.1cm Regarding portfolio construction, the majority of the literature relies on a two-step procedure, prediction coming first and portfolio construction, following various fixed rules, coming next. MACE, in contrast, fits into a more recent stream of studies that optimizes portfolio weights directly – explicitly or implicitly encompassing the prediction step. For instance, <cit.> do so with a reinforcement learning algorithm targeting Sharpe Ratios fueled with a large database à la <cit.>, and <cit.> deploy linearity to rewrite their simultaneous mean-variance/prediction problem with vector autoregressive predictions as canonical regression analysis. MACE's advantages with respect to those alternatives are hereby visible. It is nonlinear and nonparametric, yet remains simple, transparent and rather traditional in its trading decisions, and works with or without terabytes of data. Statistically, it is a kind of semi-parametric canonical correlation analysis <cit.> supplied with various desirable features for financial forecasting. Economically, it is a conceptually simple extension of the mean-variance principle. 0.2cm Statistical Arbitrage at the Daily Frequency. We consider two applications. The first is creating portfolios of the 20, 50, and 100 most capitalized firms on the NASDAQ for trading at the daily frequency. We evaluate those from January 2017 to December 2022, an era for which gains of ML-based statistical arbitrage are expected to be low, if they exist at all <cit.>. The information set is lagged returns of the portfolio itself. Thus, in this setup MACE is looking for maximally nonlinearly mean-reverting portfolios. And indeed it does find some, scoring enviable returns and risk-reward ratios. Nonlinearities prove instrumental to such results as the degree of mean-reversion (when approximated linearly) is shown to be highly state-dependent. Out-of-sample R^2 testifies to that, ranging from a moderate 0.5-0.9% in calmer periods, to 12% during the first wave of Covid-19, and a staggering 20% when zooming in on March 2020. MACE is shown to heavily rely on day-to-day oscillations to achieve swift returns in tumultuous months – a behavior learned in part from the financial crisis. Most importantly, it also outperforms benchmarks outside of high volatility episodes, both in bull and bear markets. In particular, all MACEs deliver positive returns in 2022, ranging from 5% to 23%. This and other features lead MACE, sometimes with only 20 highly liquid stocks, to nearly double the market's Sharpe Ratio. 0.1cm This application expands on various strands of the statistical arbitrage literature, where many tactics (ranging from heuristics to cointegration tests) have been proposed to identify mean-reverting portfolios or pairs with predictable spreads (see <cit.> for an extensive survey). Typically, the candidate securities are fixed ex-ante rather than “discovered”, and mean reversion is linear. Nonetheless, when a good discovery is made – like that of <cit.> exploiting very different time series behaviors for low- vs. 
high-turnover stocks – gains can be huge. Our approach mines for such discoveries. Hence, more closely related are the works of <cit.>, <cit.>, and <cit.> who also focus on constructing maximally mean reverting portfolios. Linearity is inherent, and allows for such problems to be reformulated as extensions of canonical correlation analysis with a varying degree of elaboration. An important focus of the literature, as it is the case for MPPs, has been on improving computations and placing reasonable constraints (like sparsity) on the allocation <cit.>. Nonetheless, linearity remains pervasive, with some directly targeting linear autocorrelation statistics <cit.>, and others embedding directly linear forecasting models, like Vector Autoregressions, within the optimization framework <cit.>. Therefore, MACE, through its use of RF, evidently widens the space of exploitable time series dependence for statistical arbitrage. Additionally, the portfolio side of the equation is not constrained computationally nor statistically from including a myriad of stocks and a long daily sample, a case of interest from a conceptual standpoint (testing market efficiency) and eventually practical in an era of shrinking transaction costs. 0.2cm Monthly Trading Based on Macroeconomic Indicators. The second application is at the monthly frequency and utilizes the canonical <cit.> data set to construct an MMLP with a subset of large-cap individual stock returns from CRSP. Thus, it uses no firm characteristics and is rather looking for aggregate predictability based on trivially available macroeconomic indicators. All RFs (MACE or not) struggle to deliver positive R^2's from the late 1980s up to the mid-2000s. However, MACE hits an R^2 of above 4% in the last 15 years of our sample ending in 2019 – an era for which predictability and associated economic gains (ML-based or not) are often reported to have waned <cit.>. This is achieved in part during the financial crisis and the years thereafter, where MACE limits losses considerably, and catches up with the pre-crisis trend as early as mid-2009. Other non-MACE portfolios using RF as the predictive function also manage to somehow mitigate losses during the meltdown, but then fail to find and leverage predictability when exiting the Great Recession. Moreover, during the slowdown of 2018, only MACE continues with mostly unabated upward-trending returns. We find, using interpretable ML tools, that this success is attributable to MACE uncovering a portfolio with a subtle response to elevated volatility, in the form of a nonlinearly time-varying risk premium. 0.1cm Our approach thus differs from <cit.> and the vast body of (monthly) studies who use a pooled panel approach with nearly 2 million observations of stock returns from roughly 30,000 U.S.-listed companies, with a corresponding feature set of over 100 company characteristics and macro-predictors. This is also the backbone of <cit.>'s reinforcement learning approach. A similar pooled panel with many cross-sectional and time series characteristics is also found for exchange rates in <cit.> and for cryptocurrencies in <cit.>. Given that MACE does not predict each stock separately, and rather focuses on forecasting a single synthetic index with easily available time series, it can be described, at least in relative terms, as a very low-maintenance strategy. Moreover, by construction, it cannot rely on cross-sectional anomalies that have already dissipated, or focus on illiquid, once non-adequately priced stocks. 
Obviously, MACE is not exempt either from risking an eventual depletion of its sources of predictability. Nonetheless, those concerns are alleviated by the multi-solution nature of the algorithm and the opaque prediction function. Indeed, MACE's market timing comes from a (mostly) black box prediction function that cannot be easily deduced by other market participants, and most importantly, if a certain linear combination has been overharvested, MACE can dig out others.

Outline. This paper goes as follows. Section <ref> introduces MACE, motivates its structure, and discusses practical aspects. Section <ref> conducts the daily trading empirical analysis and section <ref> conducts a monthly frequency exercise. Section <ref> concludes.

§ MACE

The sophistication of MMLPs, particularly the use of nonlinear tree ensemble-based predictions, necessitates the design of a vastly different optimization framework than what prevailed for MPPs.

§.§ The Algorithm

MACE is, for the most part, a conceptually trivial extension of ACE. Its successful empirical development, however, requires a fair amount of subtle machine learning craftsmanship. <cit.>'s ACE applied to a generic h-step ahead forecasting problem of a single target Y_t+h reads as g(Y_t+h) = f(𝐗_t) + ε_t+h, where Y_t+h is 1 × 1 and 𝐗_t, the matrix of K available predictors at time t (which may include lags or various indicators), is 1 × K; g and f are unknown functions and ε_t+h is the prediction error. Thus, the sole deviation from the textbook predictive regression setup is the introduction of g. ACE's goal is to find the optimal transformation g, in the sense that the transformed target is maximally predictable by the output of f. <cit.> show that ĝ and f̂ can be obtained from running an iterative algorithm that alternates between obtaining the conditional expectation of g(Y_t+h) given 𝐗_t for a fixed g and the conditional expectation of f(𝐗_t) given Y_t+h for a fixed f. Following the original incarnation, g and f typically consist of backfitted polynomial functions, which used to be a popular nonparametric ML approach – before being overshadowed by the advent of tree ensembles in the 1990s and the resurrection of neural networks in the mid-2000s. Nonetheless, the polynomial approach still remains the predictive function in recent ACE applications <cit.>. This paper extends ACE in three ways so that it can uncover a modern brand of maximally predictable portfolios: (i) Y_t+h is replaced by 𝐘_t+h ∈ ℝ^N and g : ℝ → ℝ by g : ℝ^N → ℝ, (ii) we impose a series of constraints on g so that its output is a portfolio, and (iii) f is a high-performing off-the-shelf modern ML tool. All three are vital to the current application, to a varying degree of obviousness. (i) puts the M in MACE by making it a multivariate problem and therefore allowing for g's input to be, for instance, a panel of stock returns. From this, (<ref>) becomes the general MACE problem g(𝐘_t+h) = f(𝐗_t) + ε_t+h, where 𝐘_t+h is now 1 × N (so that g maps it into a single portfolio return) and 𝐗_t remains 1 × K. This lies within the broad class of nonparametric canonical correlation problems <cit.>. It is also contained within the class of models for which <cit.> develop theoretical guarantees for generic ACE-type algorithms. Now, (ii) restricts g's original nonparametric ambitions to that of learning a linear combination of 𝐘_t+h's (with positive weights summing to 1) so that ĝ(𝐘_t+h) is a portfolio return series – as opposed to being literally anything, which could nevertheless be of interest in other financial applications. The minimization problem that ensues is to minimize, jointly over w and
f∑_t=1^T (w'r_t+h - f (𝐗_t))^2 +λ ||w||^2  such that w≥ 0  and w'ι=1 where 𝐘_t+h is hereafter also assumed to be a panel of stock returns r_t+h. The addition of λ ||w||^2 provides l_2 regularization with intensity λ (an hyperparameter) that will guard against overfitting and non-realistic allocations <cit.>. The non-negativity constraint may or may not be activated. For instance, it will be turned off in our daily application. Its activation implies additional shrinkage beyond that of the l_2 norm by inducing some sparsity (some weights will be constrained to 0). Lastly, (iii) is what will provide MACE with forecasting power. f is chosen to be a Random Forest (RF) for various reasons, some more subtle than others (see section <ref>). Surely, what we want first and foremost, is f to be a solid off-the-shelf predictive model handling nonlinearities and high-dimensional data while keeping overfitting in check without extensive hyperparameter tuning <cit.>. Clearly, RF checks the first two boxes, along with Boosted Trees and (Deep) Neural Networks <cit.>. The last requirement is met by RF, but not nearly as much by the other two well-known families of ML algorithms. As will become apparent in section <ref>, due to the iterative nature of the MACE (and, in general, the idea of having a function on each side of the equation), RF's easily obtainable out-of-bag predictions, that are resilient to overfitting, will be a key ingredient in our routine. Initialization. Algorithm <ref> is divided into two key steps, which are, intuitively, the updating of the right hand and left-hand side parameters, respectively. We initialize ẑ_0,t+h as a plausible portfolio. When w≥ 0 is activated, such a portfolio is the equally-weighted one (as used in section <ref>). When it is not (as in section <ref>), one can use the solution to the classic (and static) global minimum variance portfolio problem, which is an equally intuitive initialization point, especially given the forthcoming discussion in section <ref>. Regarding f̂_0(𝐗_t), it is set to 0 and η=1 for line 4 in iteration s=1. This simply means MACE is initiated at the equally-weighted portfolio (or else) and its corresponding RF conditional mean. Given the inherent non-convexity of the objective and the plethora of possible solutions, initialization can matter. This is especially true in extremely low SNR environments and when regressors are generated endogenously – like in our daily returns prediction application. Our approach to the multiplicity-of-solutions problem is in the spirit of deep learning rather than classical econometrics. Indeed, a fair amount of ink has been spent on devising efficient algorithms to uncover the global optimum within a very restricted class of MPP problems, in part because the attained R^2 both in-sample and out-of-sample was regarded as a metric of market inefficiency. We deviate from the statistical philosophy of going at lengths to obtain “true parameters” and rather look for “useful parameters", that is, any solution that can generate value for wealth management strategists. Of course, the two objectives are surely not mutually exclusive, but they entail a different focus. Thus, in our applications, we do not especially care for f being unique nor the truest solution to anything, but rather aim at building a portfolio and a predictive function that generalizes well— that is, it maximizes R^2_train and R^2_test. 
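To fix ideas, the alternating scheme just described can be sketched in a few lines of code. This is purely illustrative: the reliance on scikit-learn's RandomForestRegressor and SciPy's SLSQP solver, the helper name ridge_portfolio_step, and the omission of the variance normalization of Algorithm <ref> are simplifying assumptions of this sketch, not a description of the exact implementation.

import numpy as np
from scipy.optimize import minimize
from sklearn.ensemble import RandomForestRegressor

def ridge_portfolio_step(R, target, lam, long_only=True):
    # Ridge step: min_w ||target - R w||^2 + lam ||w||^2  subject to  w >= 0 (optional) and w'1 = 1.
    N = R.shape[1]
    objective = lambda w: np.sum((target - R @ w) ** 2) + lam * np.sum(w ** 2)
    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    bounds = [(0.0, None)] * N if long_only else None
    res = minimize(objective, np.full(N, 1.0 / N), method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.x

def mace(R, X, lam, eta=0.1, s_max=100, long_only=True):
    # R: T x N panel of h-step-ahead returns r_{t+h};  X: T x K matrix of predictors X_t.
    T, N = R.shape
    w = np.full(N, 1.0 / N)              # equally-weighted initialization (monthly application)
    f_hat = np.zeros(T)                  # f_hat_0 = 0; the learning rate is effectively 1 at s = 1
    for s in range(1, s_max + 1):
        z = R @ w                        # portfolio return as constituted at iteration s
        rf = RandomForestRegressor(n_estimators=500, oob_score=True).fit(X, z)
        f_star = rf.oob_prediction_      # plain OOB fit; the block variant sketched further below
                                         # is what the time-series setting actually calls for
        step = 1.0 if s == 1 else eta
        f_hat = (1.0 - step) * f_hat + step * f_star        # learning-rate update of the predictions
        w = ridge_portfolio_step(R, f_hat, lam, long_only)  # Ridge step takes the predictions as given
    return w, f_hat

The subsections below refine each ingredient of this loop precisely so that whatever fit it achieves in-sample has a chance of surviving out-of-sample.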
In that spirit, devising mechanisms to maximize R^2_train remain essential, but they are coupled with equally relevant algorithmic elements to insure such feats can be reproduced out-of-sample. Limiting overfitting in both f and w is a necessary condition to successfully trade such a portfolio. Thus, in the coming paragraphs, we explain the nuts and bolts of MACE, and how it spreads the ML gospel of the bias and variance trade-off to the MMLP problem. The Random Forest Step and Block Out-of-Bag Subsampling. In many ways, MACE is a traditional EM algorithm, where we optimize certain parameters while keeping others fixed, reverse roles in the following step, and alternate until some stopping criterion is met. Accordingly, the first step predicts a fixed portfolio using RF. Then, predictions are updated as a convex combination of f_s^*(𝐗_t) (current predictions) and the previous iteration's predictions f̂_s-1(𝐗_t), where the speed of adjustment is determined by the learning rate η. When constructing f_s^*(𝐗_t), it is imperative that one does not use RF's fitted values, which are inevitably prone to immense overfitting. Indeed, RF's fit always delivers R^2_train close to 1 (for any standard tuning parameters combinations) even though the true R^2 is nowhere near that (see <cit.> for an explanation and a barrage of examples with classic datasets). This does not prevent RF from delivering stellar R^2_ test's – the traditional object of interest – and it is why the R^2_train>R^2_test differential has mostly stayed under the radar of the ML community. Whenever RF's in-sample predictions are required, one shall use the so-called out-of-bag (OOB) predictions, which are, by construction, immune to overfitting in a cross-sectional context – in the sense that their predictive accuracy will be exactly aligned with what one should expect out-of-sample <cit.>. In other words, such predictions include the conditional mean, whatever its quality may be, and little to none true error term <cit.>. This is particularly crucial in MACE given that such predictions are to be fitted by another ML algorithm in a subsequent step. In a manner, this approximates the ideal “cross-fitting” solution where, in our context, the Ridge Regression step would be conducted on one half of the sample, and the RF step on the remaining half. The latter (rather demanding) scheme is the backbone of so-called honest causal forest <cit.> in heterogeneous treatment effect estimation. In fact, it can be shown that OOB sampling and variants provide a convenient approximation when sample sizes are limited or other practical aspects render plain splitting nonoperational <cit.>. The only thing standing in the way of such properties to be applied to our problem is the time series nature of our data. Indeed, time series dependence in the left-hand side (LHS) or right-hand side (RHS) variables, which creates major complications for classical bootstrap inference, generates similar hurdles for the validity of out-of-bag predictions. While that in r_t+h is negligible for h=1, it is certainly not so when considering r_t+12, the average return between t and t+12 (in effect, a sliding moving average). This is even more prevalent in the case of 𝐗_t where predictors, while being stationary, may be quite persistent. 
This persistence will break the non-overfitting properties of OOB predictions – with an immediate consequence that f_s^*(𝐗_t) includes overfitted elements of ẑ_s-1,t+h, and thus failing to approximate the LHS and RHS being trained on “truly” separate data sets. All this is obviously related to how time series dependence biases downward bootstrapped standard errors used in small sample frequentist inference <cit.>, and the solution to the aforementioned problem – block bootstrapping or subsampling – is backed out from the wide literature on the subject <cit.>. Thus, to obtain a f_s^*(𝐗_t) which is plausibly exempt from overfitting, we will use block out-of-bag in-sample predictions. Such techniques have been used to reliably extract more “structural” quantities like various macroeconomic latent states in <cit.> and <cit.>. The Ridge Step. The Ridge step takes RF predictions as given and optimizes w so that w'r_t+h matches as closely as possible the predictions, in essence, collaborating with f so as to maximize association. The Ridge Regression apparatus comes with trivially implementable, yet healthy and necessary sources of regularization <cit.>. First, there is λ penalizing extreme allocations and shrinking w'r_t+h to the equally-weighted portfolio – as opposed to 0 in a typical Ridge Regression. This is due to the unconditional variance of the portfolio being fixed to 1 (line ) for identification purposes during estimation.[Indeed, it is easy to see in (<ref>) why this is necessary : replacing f by ζ× f and g by ζ× g (where ζ is an arbitrary scalar) gives rise to the same likelihood. Naturally, this cannot occur when g is the identity function, as in typical regression problems, but is inevitable within ACE and its descendants. ] Thus, everything being shrunk to the same value is what remains of the original ridge “prior” that every coefficient is shrunk to (the same value of) 0. Given that the resulting portfolio will eventually be rescaled to satisfy the capital budget constraint (w'ι=1), 1N is the value towards which the shrinkage is effectively pointing at. Another source of regularization in the Ridge step is obviously the long-only constraint w≥ 0. It embeds the prior knowledge that we “unconditionally” expect the market to follow an upward trajectory, and that MACE should preferably focus on portfolios of which it will most often hold a long position. Additionally, for our monthly rebalancing application, limiting the occurrences of overall short positions is desirable from a risk management perspective. This frequently imposed constraint in mean-variance optimization problems also plays here the additional role of limiting the Ridge's step expressivity by chopping out a wide space of potential w's. As in anything, good regularization balances bias and variance wisely by imposing constraints that will contort our likelihood the least. Accordingly, the implicit prior motivating w≥ 0 for one-month ahead forecasts may not always be as well motivated for much shorter horizons – and indeed, we will relax that restriction in the daily application. Learning Rate. Among the few more subtle technical extensions to ACE, we use out-of-bag block-subsampled predictions and introduce a learning rate η – whose combined action is mostly to curb overfitting and facilitate optimization. 
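Before elaborating on the learning rate, here is a minimal sketch of the block out-of-bag predictions described above. The contiguous, non-overlapping blocking scheme and the use of plain regression trees (rather than a full RF with feature subsampling) are simplifying assumptions; the essential point is that each observation is predicted only by trees whose training sample excluded its entire block.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def block_oob_forecast(X, y, block_size, n_trees=500, sampling_rate=0.8, rng=None, **tree_kwargs):
    # In-sample predictions where subsampling is done by contiguous time blocks, so that serial
    # dependence cannot leak from the "in-bag" observations into their out-of-bag predictions.
    rng = np.random.default_rng(rng)
    T = len(y)
    blocks = [np.arange(b, min(b + block_size, T)) for b in range(0, T, block_size)]
    n_in = int(sampling_rate * len(blocks))
    preds, counts = np.zeros(T), np.zeros(T)
    for _ in range(n_trees):
        in_blocks = rng.choice(len(blocks), size=n_in, replace=False)
        in_idx = np.concatenate([blocks[b] for b in in_blocks])
        out_idx = np.setdiff1d(np.arange(T), in_idx)
        tree = DecisionTreeRegressor(**tree_kwargs).fit(X[in_idx], y[in_idx])
        preds[out_idx] += tree.predict(X[out_idx])
        counts[out_idx] += 1
    return preds / np.maximum(counts, 1)   # average only over trees that left the whole block out

In MACE, a routine of this kind would stand in for the plain out-of-bag call in the sketch above.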
Their importance in practice is paramount since we are dealing with a high-dimensional Ridge Regression on one side and an RF on the other, with both having the ability to overfit the training data, even if the other side of the aisle remains static. Directly inspired by Boosting and Neural Networks, the use of a learning rate curbs this problem and avoids zigzagging optimization paths. There is an obvious trade-off between η and s_max, with a lower η necessitating a larger s_max. In our experience, anything above 0.2 can quickly lead to unstable computations, learning rates above 0.1 will often lead to overfitted solutions (when the SNR is very low), and the optimization of larger portfolios may get stuck with too small a learning rate (like anything below 0.01). What lies within the 0.01-0.1 range usually provides interchangeable results, and the symptoms of an impotent learning rate can easily be diagnosed from looking at the path of the in-sample loss.

§.§ Relationship to Mean-Variance Portfolio Optimization

MACE constructs a portfolio to be actively traded. Nonetheless, as we will see later, the raw MMLP portfolio often has nice properties when combined with much more passive trading (e.g., using a prevailing mean instead of RF). Notably, it has fine variance properties, even though variance is not explicitly minimized. Or is it? Furthermore, its predecessors, maximally predictable portfolios, have been noted to have good variance properties without necessarily aiming for them <cit.>. In the brief discussion below, we show that this is no coincidence. First, note that in the absence of any predictability – i.e., when in the true DGP the conditional mean is the unconditional mean (f(𝐗_t)=μ ∀ t) – (<ref>) becomes min_{w, μ} ∑_t=1^T (w'r_t+h - μ)^2 + λ||w||^2 such that w≥ 0 and w'ι=1, where z_t+h ≡ w'r_t+h is the portfolio return h steps ahead. In population, this problem is min_w Var[z_t+h(w)] + λ||w||^2 such that w≥ 0 and w'ι=1, which is a regularized mean-variance optimization problem (as in, e.g., <cit.>) without the minimum return constraint. This constraint, E[z_t+h(w)] ≥ μ where μ is a minimal (unconditionally) expected return, bears a different meaning within the MMLP framework simply because the designed portfolio is meant for active trading, not to buy and hold. Nonetheless, as will be discussed in section <ref>, it is possible to bring some of it back in the form of healthy regularization for the MACE allocation by applying an analogous constraint on the mean of the conditional mean. This will, in most circumstances, non-trivially improve out-of-sample economic performance. According to previous observations, what MACE is doing in population, when f is a non-trivial function and the "true" R^2 is larger than 0, is solving min_{w, f} Var[z_t+h(w) ⊥ f(𝐗_t)] + λ||w||^2 such that w≥ 0 and w'ι=1. Thus, minimizing the error term in (<ref>) is equivalent to minimizing the residual variance of the portfolio, that is, the share of variance unexplained by the conditioning information. Given that true R^2's are never too far from 0 in predictive financial time series regressions, it is not surprising that MMLPs (or MPPs) have desirable unconditional variance properties. Equivalently, the above can be rewritten as min_{w, f} Var[z_t+h(w)] - Var[z_t+h(w) | f(𝐗_t)] + λ||w||^2 such that w≥ 0 and w'ι=1, where the appearance of the Var[z_t+h(w) | f(𝐗_t)] term highlights an opportunity.
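To make the decomposition explicit, read Var[z_t+h(w) | f(𝐗_t)] as the share of variance captured by the conditional mean. Then, in population and under the assumption that f̂ recovers that conditional mean (a sketch, not a formal result about the estimated MACE), the law of total variance gives

\mathbb{E}\big[\big(z_{t+h}(w) - f(\mathbf{X}_t)\big)^2\big]
  \;=\; \mathbb{E}\big[\operatorname{Var}[\, z_{t+h}(w) \mid \mathbf{X}_t \,]\big]
  \;=\; \operatorname{Var}[\, z_{t+h}(w) \,] \;-\; \operatorname{Var}\big[f(\mathbf{X}_t)\big],

so that, for a given w, chasing predictability (a large explained component Var[f(𝐗_t)]) and keeping the unexplained variance small are two readings of the same objective.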
From an economic utility maximization perspective, MACE's objective function postulates that it is not volatile returns (before active trading) per se that brings disutility, but prediction errors. In a world with small predictive margins, those two distinct objectives are in most contexts approximately equivalent. Nonetheless, this suggests that MACE is eager to handle more unconditional variance in the buy-and-hold return if some of it is predictable – and will be compensated by proactive trading based on informative signals. Hence, from the formulation in (<ref>), there is an imminent tension for w in the MACE problem because of its dual mandate, i.e., minimizing Var[z_t+h(w) ] and maximizing Var[z_t+h(w)|  f(𝐗_t)]. Those may push w in the same direction, or they may not – depending on what lies in 𝐗_t and the shape of f. The benefits of all forms of regularization on risk-reward ratios also become obvious from (<ref>). An overconfident MACE allocation will inflate Var[z_t+h(w)|  f(𝐗_t)] in-sample. If it fails to replicate predictive gains out-of-sample, it is likely left with a portfolio with higher unconditional variance that the global minimum variance solution, but no meaningful predictability to tame it. §.§ MACE vs. Predicting Single Stocks Returns Separately When it comes to time series predictions of stock returns, a popular ML approach is to conduct a pooled (nonlinear nonparametric) regression for a panel of stocks and their corresponding characteristics. <cit.> is the prime example, and they translate their predictions into returns via a long-short portfolio strategy. Alternatively, one can model each stock return separately with its own time series regression, but this has important limitations. In contrast to the above, MACE forecasts a single series, the portfolio's return. From the linearity of the portfolio, we have that I-.3em E[z_t+h(w)|𝐗_t] = w' I-.3em E[r_t+h|𝐗_t] where I-.3em E[r_t+h|𝐗_t] is a vector of conditional expectations for each stock. Thus, one can legitimately wonder why not simplifying the algorithm considerably by (i) getting expectations from pooled or individual predictive regressions and then (ii) running the global minimum variance problem on residuals from such regressions. Precisely, solving min_w∑_t=1^T (w'(r_t+h - I-.3em E[r_t+h|𝐗_t]))^2 + λ ||w||^2  such that w≥ 0  and w'ι=1 after obtaining I-.3em E[r_t+h|𝐗_t] externally. There are quite a few reasons not to consider such a route, some conceptual and others, practical. All of them are worth mentioning here because they highlight some of MACE's advantage that may so far have gone unnoticed. I-.3em E[r_i,t+h|𝐗_t] is arguably much harder to learn than typical z_t+h candidates from aggregate data, simply by the virtue of the latter being a portfolio. Single stock returns contain a lot of variation that cannot be captured by macroeconomic predictors, and with a small fraction of it being explainable by micro-level firm characteristics, often for low-capitalization stocks. What remains is a large amount of noise weakening the potential f through an unappealing SNR and increased estimation error. This crucially matters because (i) the extremely low SNR for separate stock returns is a serious impediment to any algorithm attempting to learn I-.3em E[r_i,t+h|𝐗_t] and (ii) the chosen individual model will often be one that puts a higher weight on minimizing estimation variance rather than entertaining ambitions to tackle bias^2. 
Thus, it can easily turn out that the selected/cross-validated f is the null function (or close to it) whereas the true DGP does, in fact, have an f yielding a positive R^2. In other words, choosing and optimizing ML (or any) models to predict r_i,t+h separately might be at odds with the final objective of getting a fine estimate of I-.3em E[z_t+h(w)]. And because of that, a positive R^2 remains unattainable without exceedingly large samples. One way out is the pooled (or global) regression approach with firm-level characteristics, where f's potency is revived through much more data and information on cross-sectional variation. Another route is to predict directly what one will end up trading, that is, the portfolio return. By the joint optimization of w and f, w provides f a more easily-forecastable target. Hence, the previously undetected R^2>0 becomes an attainable target because w collaborates in making f win at the bias-variance trade-off. This is convenient since getting a I-.3em E[r_t+h|𝐗_t] vector worthy of use is not necessarily an easy task, requiring large amounts of micro-level data not always easily accessible in real time, and a fair amount of computing resources. In comparison, MACE finds profitable predictability in a convenient low-maintenance setting. Given the attention they get, predicting indexes such as the S&P 500 rarely deliver sizable R^2s at short horizons. But there are numerous ways into which stocks can be assembled, and some of those blends may be more promising than others from a predictive viewpoint. Linking it back to the econometric literature on the benefits and costs of aggregation (forecasting aggregates vs. aggregating components' forecasts), MACE can be seen as finding the optimal aggregation that keeps variance low (by aggregating) and yet keeps bias^2 similarly low by creating an aggregate with limited aggregation bias from neglecting heterogeneity <cit.>. §.§ Why Random Forest? A natural question to ask is: why Random Forest? In principle, RF could be replaced by any ML algorithm. In practice, not quite so, and for many reasons. First, RF is the only algorithm which provides internally out-of-bag predictions. Obviously, nothing prevents a very patient researcher to bootstrap-aggregate Boosting and Neural Networks at every iteration s and incur a substantial computational burden. This is especially true of applications with many observations and regressors. Putting things in perspective, to obtain rightful f_s^*(𝐗_t)'s from Boosting and NN, it would take approximately 500 times (a reasonable number of bootstraps) longer than RF, assuming that the three algorithms have a roughly similar computational time (which is quite generous to Boosting and NN in this application). Alternatively, one can ditch any call on to OOB predictions, and extremely carefully tune hyperparameters. Given the impracticability of such an approach (for anything more complicated than Ridge, Lasso, and derivatives) and known results about the virtues of cross-fitting and analogous methods <cit.>, it appears that justifying the costs of going for such an alternative route would require glowing expected benefits. There aren't. Boosting, which, is often seen as marginally superior to RF in tabular data tasks, often does so by providing mostly small improvements in high SNR environments – a far cry from our financial application. 
In fact, with RF being less capricious tuning-wise, it has been reported in many low SNR applications to be equally if not more competitive than Boosting (see <cit.> and <cit.> for returns, and <cit.> and <cit.> for macroeconomic forecasting). Additionally, extensive tuning is often required for Boosting to have an edge on RF, which is highly impractical within an iterative procedure. Deep Neural Networks, which incredible merits in non-tabular data tasks are indisputable <cit.>, are known to still take the backseat to tree-based methods when it comes to tabular data <cit.>. Moreover, it has been the subject of considerable discussion that basic feature-engineering (like creating lags) combined with tree-based methods may outperform NNs with architectures tailored for time series data <cit.>. Of course, none of this rejects the possibility that letting f be constructed from some sophisticated deep recurrent network of any breed (like those in <cit.>) could further improve results. Rather, what it suggests is that this paper's results will not be severely handicapped by leaving the aforementioned extensions for future research. A last deep learning-based alternative is to consider a neural network with two hemispheres as in <cit.> – one linear for the LHS and one for the RHS – with a loss function being the squared distance between their respective outputs, reminiscent of developments in <cit.> and <cit.>. This ditches the need for alternating anything and can be optimized directly through gradient descent. There are quite a few complications, however. The first, quite subtle in nature, is that modern neural networks, very large and deep, vastly overfit the data in-sample, yet produce stellar out-of-sample performance for many tasks. This phenomenon has many names – double descent and benign overfitting among them – and now has a theoretical literature of its own <cit.>. The problem this poses for building a MMLP is that, if the most promising neural network attains a R^2_train≈ 1 fitting what is almost pure noise, there is neither room nor need in-sample to have the LHS collaborate in increasing the fit. Given that our application has a SNR which is a far cry from 1 and those of other typical successful deep learning applications, it is a severe complication. Nevertheless, using (often unstable) small networks, carefully crafting their design, and considering an extensive hyperparameters search – all this to avoid the slightest bit of overfitting in f_s^*(𝐗_t) – could maybe make the derivation of MMLPs from such an approach less ill-fated. In the concluding remarks, we provide additional thoughts and suggestions on how this could perhaps be done in future research. §.§ Setting Hyperparameters Given that MACE incorporates two ML algorithms, it inevitably has quite a few hyperparameters (HP), which, for convenience, are summarized in Table <ref>. Fortunately, RF is well known to be very robust to tuning parameters choices (with default values often very hard to beat, see <cit.> and references therein), and ridge is sparsely hyperparametrized. We opt for setting tuning parameters to fixed values, with their calibration being motivated by domain knowledge and observations of the (block) out-of-bag error metric (RMSE_OOB). Given the nature of the MMLP problem – the prediction of a non-fixed target – considering a validation set as is often seen in ML forecasting studies in economics and finance <cit.> is an avenue with strong headwinds and likely limited benefits. 
This is due to the multiplicity of solutions where identical HPs, when re-estimating the model with more data (e.g., reincorporating the validation data), can lead to different solutions. This is not unique to MACE in the ML realm, as this is a commonly known feature of modern neural networks. Also, there is an imminent tension between maximizing the reliability of the validation set and minimizing the likelihood of moving to a different optimum than what we optimized the hyperparameters for. The former calls for a longer validation set and the latter for a shorter one. In the light of all that, it appears more reasonable to rely on common sense whenever possible, and on the blocked RMSE_OOB, which is a proper CV metric for time series <cit.>, whenever data-driven guidance is needed. We first concentrate on those HPs pertaining to MACE's iterative optimization itself. The learning rate η is set at 0.1 for monthly data, which always delivers a stable solution and never gets stuck. For daily data, more care is needed given that X_t is not fixed. The smallest learning rate always appears desirable – as often observed for Boosting <cit.> – but we have noticed that too small of a η coupled with large portfolios may lead to the algorithm not optimizing at all in-sample (analogous to exaggeratedly tiny learning rates for deep learning). Thus, it is set to 0.01 for N ∈{20,50}, but 0.01 being not large enough for N=100 in-sample loss to start decreasing, we increase it to 0.05. Closely related are the choice of s_max, the maximal number of iterations, and the stopping method. For monthly frequency, we find that s_max=100 is well enough for MACE to converge and that performance never seems to deteriorate substantially with s, even after some plateau is achieved. Thus, the stopping criterion is simply s^* = s_max. Things are not so easy with the daily application where regressors are created endogenously. First, we set s_max=250 since η is considerably smaller. Since optimization is much more demanding in this environment, it is not impossible for the OOB error to start increasing beyond a certain s – i.e., suggesting the LHS is starting to overfit. Hence, we set s^* = _s RMSE_OOB(s), which can be seen as some form of internal early stopping, a key player in the regularization arsenal of modern deep neural networks <cit.>. Early stopping is typically implemented using a validation set, but here, for aforementioned reasons, it is more preferable to use RF's internal error metric. The next four hyperparameters in Table <ref> are those of RF. For monthly data, is set to the default value of 13, one that is typically hard to beat, except in extremely low SNR environments <cit.>. At the daily frequency, we noticed that =13 could never deliver RMSE_OOB(s)<1 for any s. Given that the daily application has a much lower SNR and a sparser X_t (hence, less potential for diversification in RF <cit.>), it is not entirely surprising that =13 might be too large and lead to early overfitting. Thus, we set =110 which kills two birds with one stone by decreasing computing time sharply. In a similar spirit, is set to a very high value of 200 in the daily application (nonetheless ≈125 of the training sample size), which greatly helps in easing the daily application's computational burden all the while helping improving performance as measured by the OOB. Default values for usually go up to 10. However, when faced with a low SNR, deep trees are either redundant or harmful, because additional splits allowed by =10 vs. 
=200 are typically fitting the noise and cancel out through bagging in the out-of-sample projection <cit.>. Limiting the expressivity of RF is not without precedent for predicting returns, as <cit.> report using trees of very limited depth. For the monthly frequency application, we set it to =20, a moderately high value that eases computations without any apparent effect on RMSE_OOB(s) (vs. default values). The next two tuning parameters of RF are and . We set =80% for all applications, which is standard. , on the other hand, needs to be chosen slightly more carefully to balance two goals. As already hinted at above, too low of a block size will lead to f_s^*(𝐗_t) including fitted noise, to be subsequently fed into the Ridge Regression. An overkill block size will seriously handicap bagging by limiting the number of different block combinations used to construct the trees in the ensemble, ultimately weakening RF (on the variance side) by decreasing the diversity of trees. Thus, we set to be a window size within which any form of meaningful dependence between the first and the last observations will have faded away, for both X_t and r_t+h. For the monthly application, we set it to two years, which very well cover the dependence in r_t+h and the stationarized <cit.> predictors in X_t. In the daily application, since X_t and r_t+h are daily returns with minimal time series dependence, two business months appears more than sufficient to maintain the interchangeability of the blocks, and leaving plenty of room for bagging to fulfill its task. A last hyperparameter for RF is the number of trees. It is https://philippegouletcoulombe.com/blog/the-number-of-trees-in-random-forest-is-not-a-tuning-parameternot a tuning parameter per se because there is no statistical trade-off for its choice: the larger the better, with the only constraint being computational burden <cit.>. Given that RF predictions usually stabilize (by the law of large numbers for an average) well before 500 trees, 500 is usual the default setting for in many RF implementations and is what is used for the monthly application. Yet, there is a subtle twist, that leads us to increase up to 1500 for the daily application. We are generating regressors endogenously – in essence using lags of a continuously updated target –rather than taking X_t to be fixed observed predictors. In doing so, we may incur attenuation bias in RF attributable to the generated regressor problem. More precisely, “measurement error” can impair RF's ability to detect nonlinear time-series dependence because, f_s^*(𝐗_t) being out-of-bag predictions, there are an average of (1-) × trees, which falls to 100 with =500. Bumping to 1500 makes it an average of 300 single tree predictions for f_s^*(𝐗_t), which is enough to curb measurement error problems without exploding computational costs. Finally, we must set a value for λ in the Ridge Regression step. While it could be tempting to cross-validate λ internally at each step, the optimally chosen λ at each s may be well off from that of the final s, and lead optimization in a poor direction. For instance, in early steps, cross-validation can easily choose a λ that shrinks the portfolio excessively, because, at that early stage s, there is, indeed, very little or no predictability being detected. Moreover, on top of withstanding the additional computational demand, changing λ may lead to certain steps not improving the loss, thereby impairing the EM-style algorithm's ability to minimize the overall loss. 
Therefore, for the monthly application, we set λ such that it attains an in-sample R^2 of 0.05, which is a high yet not unreachable mark. In the daily application, facing the inevitability that R^2_s,test is unlikely to stand above 1%, λ is chosen at each step so to target R^2_s,train=0.01. § APPLICATION I: DAILY STOCK RETURNS PREDICTION We begin our exploration of MACE by constructing maximally ML-predictable portfolios at the daily frequency. The high frequency brings both opportunities and difficulties. Among the former is the availability of more data points, and a lessened need to rely on data going way back to the late 1950s, as is typically the case in monthly exercises. In terms of the latter, an even more hostile signal-to-noise ratio comes to mind, as well as the scarcity of freely available predictors at such frequency. Here, r_t+1 comprises individual stock returns for firms listed on the NASDAQ. We keep N ∈{20,50,100} of them with highest market capitalization on January 3^rd 2017, which is the date of the beginning of our test sample. Hence, there is no look ahead bias for the test sample and the stocks are as liquid as it gets. For computational reasons, we do not re-estimate models and consider a fixed train-test split of the data, as done in, e.g., <cit.> and <cit.>. The training set is thus 2000/03/02–2016/30/12 and the test set 2017/03/01–2022/07/12. Thus, the test set includes a fair level of variety in terms of “financial regimes”. In chronological order, we have: a relatively quiet period, a crisis period with extremely high volatility, an unprecedentedly bullish bull market, and a long-lasting bear market. As mentioned above, gathering many exogenous predictors available in real-time for 𝐗_t is not easy and often not cost-free. In this application, we will see whether MACE can generate predictability with nothing more than time-series properties of the portfolio. Creating portfolios or stock combinations that have exploitable persistence properties (for mean reversion- or momentum-based trading strategies) has been studied, for instance, in <cit.>. Nonetheless, the near universe of following studies, by the limitations of relying on variants of canonical correlation analysis, are bounded to consider only linear autoregressive properties. Needless to say, if there is any remaining form of mean reversion that has not been drilled out yet, it will need to be complex in f or g – or both. Thus, f being a RF able to estimate any form of nonlinear nonparametric dependence may help in finding mean reversion patterns that remained undetected to the naked eye or simpler algorithms. Beyond its coverage of heterogeneous financial conditions, the test set is interesting in its own right simply by the virtue of being recent. <cit.> finds substantial gains from a simple daily long-short strategy using signals from predicting single stocks separately via tree-based techniques (among others) with lag returns as predictors (as we use). However, and now a recurring theme, the improvements are circumspect to the pre-2010 era, which, as the authors note, is likely due to the widespread dissemination in the 2010s of the very techniques they use. Thus, an interesting question is whether MACE (and other less obvious uses of ML) will find exploitable nonlinear mean reversion in a period where simpler methods could not. In a similar spirit, localized episodes of predictability, reasonably frequent before 2000, are found to be a much rarer event afterwards <cit.>. 
A related question is whether MACE, through ML-based nonlinearities in f, can capture and exploit those in real time rather than only observing them ex-post. It is natural to wonder whether MACE-based predictability will outlive its test set, that is, after the profitable pattern is openly communicated to other market participants. MACE's design partly protects against early depletion by (i) forecasting a synthetic security (rather than highly scrutinized stocks or portfolios) and (ii) doing so with a mostly opaque model. Additionally, as discussed in section <ref>, MACE can generate plenty of solutions, with most of them achieving the predictability goal in different ways. Hence, in principle, there should be no shortage of possibilities for enlarging the set of nonlinearly mean-reverting synthetic securities, especially keeping in mind that our application deliberately focuses on creating them from a narrow set of well-known stocks.

Nonlinear Mean Reversion Machine. In what follows, we describe the remaining building blocks necessary to go from plain MACE to a version specialized for daily data. Throwing many lags of many stocks at the daily frequency directly in 𝐗_t would be at the nexus of computational and statistical inefficiency. A more manageable and promising route is the parsimonious problem min_{w, f} ∑_t=1^T (w'r_t+1 - f([w'r_t-1, … , w'r_t-21]))^2 + λ||w||^2 such that w'ι=1, where features – lags of the portfolio returns – are created endogenously given the portfolio weights.[Note that for a linear f, this optimization problem could be solved by nonlinear least squares or some sort of generalized eigenvalue problem as studied in <cit.> and others.] There are two substantial modifications to Algorithm <ref>. The first is that we drop the w≥ 0 constraint. Keeping such a constraint in place – as in the monthly application of section <ref> – would push MACE to find portfolios with which it will most often go long, and avoid relying extensively on short-selling to turn a profit. Such a restriction comes with the prior that, at the lowest frequencies, the market is always expected to go upward and that a successful strategy should not immensely deviate from this evident fact about the unconditional mean. At the daily frequency, however, the importance of the low-frequency component justifying a long position for the long run is much tinier – and we want the configuration of the daily MACE to absorb this knowledge. More importantly, w≥ 0 appears unnecessary given that the portfolio will plausibly be bought or sold every day, and single-stock short positions will be short-lived by construction. The relaxation of this constraint now allows MACE to simultaneously hold short and long positions in single assets, even though we may go long or short with the overall portfolio. The second modification is, obviously, the inclusion of an additional step before the RF step that creates lags of the portfolio as it is constituted at iteration s. Given the SNR faced in this application, cleverly designed regularization is key. For that sake, we apply the MARX (Moving Average Rotation of X) transformation to such lags <cit.> and stack those in 𝐗_t. As argued in <cit.>, the transformation implies a more appropriate implicit regularization at high frequencies than raw lags themselves. Precisely, in a linear model with an l_2 or l_1 norm on coefficients, it switches the prior from shrinking each coefficient towards zero to shrinking successive coefficients towards one another.
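As an illustration of the feature construction just described (and elaborated below), the MARX step for the daily problem amounts to a few lines. Reading it as one-sided moving averages of the lagged portfolio return of increasing length, up to 21 business days, is the interpretation adopted in this sketch.

import numpy as np

def marx_features(z, n_lags=21):
    # z: portfolio return series w'r_t as constituted at iteration s.
    # Column p averages lags 1 through p (a momentum indicator of window p), i.e., one-sided
    # moving averages of increasing length instead of the raw lags themselves.
    lags = np.column_stack([np.roll(z, p) for p in range(1, n_lags + 1)])
    X = np.cumsum(lags, axis=1) / np.arange(1, n_lags + 1)
    return X[n_lags:]        # drop the burn-in rows; align with the target w'r_{t+1} downstream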
For more complex ML methods where the implicit regularization (almost always entailing the prior that each feature should contribute, but marginally) cannot easily be altered by changing the penalty (like RF or Neural Networks), MARX is a trivial additional step that can help bolster predictability by embedding a more appropriate prior in the model. Its implementation is simply a one-sided moving average of the lagged portfolio return (of increasing length, up to a month) instead of raw lags – which, in the current application, is analogous to a basket of momentum indicators.[Note that in a linear model with no regularization, by virtue of MARX being a rotation of 𝐗_t, it does not alter the span of predictors and yields identical fitted values. However, this is not true of regularized and/or nonparametric methods, which is the case of our prediction tool in this paper.] We benchmark MACE for each N ∈{20,50,100} against a set of relevant and informative competitors: equal weights and the minimum variance portfolio. These correspond to the initialization values of MACE. We also include the S&P 500. Those are all predicted with a prevailing mean and with RF. Going forward, we denote the prevailing mean models as EW (PM), MinVar (PM), and S&P 500 (PM), and the RF models as EW (RF), MinVar (RF), and S&P 500 (RF), respectively. RF models are also procured with MARX-transformed features. Finally, we complement those with MACE (PM), which is a portfolio constructed with Algorithm <ref> but where RF predictions are substituted out-of-sample by MACE's portfolio prevailing mean. This serves the purpose of evaluating MACE's raw portfolio return (since, in effect, it comes from a well-defined mean-variance problem as per section <ref>'s discussion) and quantifying how much of MACE's success comes from leveraging predictability.

Trading. Relative weights are fixed, but absolute weights (i.e., the overall position on the synthetic asset or portfolio) change every period. To transform predictions into trading positions for economic evaluation metrics, we solve the prototypical mean-variance problem for a single return y_t+1, ω̂_t+1 = arg max_{ω_t+1} ω_t+1 ŷ_t+1 - 0.5 γ ω^2_t+1 σ̂^2_t+1, as is laid out in <cit.> and many others. The risk aversion parameter γ is set to 5 and ω̂_t+1 is constrained to lie between -1 and 2 for reasonable allocations.

Evaluation Metrics. The evaluation metrics are the out-of-sample R^2, average annualized return (r^A), and the annualized Sharpe Ratio (SR) – where returns are collected from the trading exercise as described in Equation (<ref>). Following <cit.> and the ensuing literature, the out-of-sample R^2 of model m for forecasting portfolio return y_t+1 is defined as 1 - MSE_m^OOS / MSE_PM^OOS, where PM stands for the prevailing mean and MSE_m = 1/#OOS ∑_t ∈ OOS (y_t+1 - ŷ_t+1|t)^2. PM is specified to be the historical mean of the training sample. Given the inherent unpredictability of financial markets, this not-so-naive benchmark is in fact one that is notoriously difficult to beat. To bring further enlightenment, we also report R^2 for key subsamples in Table <ref>. Namely, we report R^2_CovidW1, which is the R^2_OOS for the onset of the first wave of Covid-19, defined as February, March, and April 2020. Those 3 months were characterized by a level of volatility unseen since the financial crisis, and constitute the only recession in our test set. It is well documented that predictability is more likely to be found during bad economic times (see <cit.> and the many references therein).
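For completeness, the position rule and the headline accuracy metric reduce to a few lines. The closed form before clipping, ω̂_t+1 = ŷ_t+1 / (γσ̂^2_t+1), is the first-order condition of Equation (<ref>); the clipping bounds and γ=5 follow the text, while the volatility estimate σ̂^2_t+1 and the 252-day annualization are left unspecified above and are therefore assumptions of this sketch.

import numpy as np

def position(y_hat, sigma2_hat, gamma=5.0, lower=-1.0, upper=2.0):
    # Mean-variance position on the synthetic asset, clipped for reasonable allocations.
    return np.clip(y_hat / (gamma * sigma2_hat), lower, upper)

def r2_oos(y, y_hat, y_bar_train):
    # Out-of-sample R^2 against the prevailing mean of the training sample.
    return 1.0 - np.mean((y - y_hat) ** 2) / np.mean((y - y_bar_train) ** 2)

def sharpe_annualized(returns, periods_per_year=252):
    # Annualized Sharpe Ratio, assuming 252 trading days per year.
    return np.sqrt(periods_per_year) * returns.mean() / returns.std()

With those metrics in hand, the question becomes whether the usual concentration of predictability in bad economic times also applies here.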
Thus, we wish to investigate whether (i) MACE follows that rule and (ii) if it can find predictability outside of the recessionary episode. Accordingly, R^2_CovidW1 is the R^2_OOS excluding those three months. In a similar spirit to R^2_CovidW1, we include R^2_2022 (the R^2_OOS from the first business day of 2022 until the end of our sample in December 2022) and r^A_2022, which is the corresponding annualized return for the same era. This allows to shed light on (i) whether there was any meaningful predictability to be found in the long-lasting bear market of 2022 and (ii) whether active daily trading with MACE or other RF-based strategies can avoid the sharp losses of the US stock market in 2022. We complement this extended set of metrics with <cit.>'s Omega Ratio (Ω), an increasingly popular measure of the risk-reward ratio that leverages all the moments of the distribution of returns (whereas SR only exploits the first two). This measure is particularly useful in situations where the distribution of returns is skewed, as only negative deviations from a certain threshold – e.g. the mean return of a benchmark, or an investor's desired average return – contribute to the risk component. First, we do find positive skewness in MACE-based returns. Second, and in a more striking fashion, RF-based returns, when predictability is non-negligible, are found to be much more leptokurtic (i.e., Laplacian-looking) than those of other strategies, even when excluding CovidW1. This is not unheard of for ML-based strategies <cit.>. Thus, to compare apples with apples without the inherent assumption of normality in SR and to avoid penalizing similarly both large positive and negative returns, we include Ω in Table <ref> as a supplementary risk-reward ratio.[The expected benchmark model return in Ω used as a cutoff is the mean return of the S&P 500 in the training set <cit.>. Doubling it does not alter rankings. ] §.§ Results We report relevant summary statistics for our daily exercise in Table <ref>. Log cumulative return plots for the small (N=20) and large (N=100) portfolios are shown in Figure <ref> and R^2 Comparison of MACE to Random Alternatives plots are available in Figure <ref>. To alleviate notation, MACE (N=100, PM) will be written as MACE_100 (PM) and other accordingly. Additionally, MACE using RF for prediction is simply denoted MACE. Statistical Results. For all three MACE specifications, we find strong evidence of predictability through time-series dependence at the daily frequency. Out-of-sample R^2's are abnormally high and distance most of the competitors for all subsamples but 2022. In the latter case, only MACE_100 achieves a positive R^2 that narrowly beats that of S&P 500 (RF). The bulk of predictability is indeed found during the first wave of the Covid-19 pandemic, with local R^2 for the three MACEs ranging from 3.88% to a stunning 12.20%. While MACE hits the two highest marks for the era, unusually high R^2's (taking <cit.>'s local predictability results as a reasonable yardstick) are not its exclusivity. S&P 500 (RF) also delivers a large R^2 during the era (7.4%), and EW_100 (RF) and MinVar_100 (RF) are getting 1.17% and 2.06%, respectively. What is more exclusive, however, is predictability outside of the turbulent spring of 2020. MACE_20 and MACE_100 achieve it both with 0.56% and 0.86%, which are sizable at the daily frequency, especially in good times <cit.>. 
All other RF-based models fail to deliver a positive R^2 outside of CovidW1, all situated in the vicinity of -1%, except for MinVar_100 (RF) at 0.25%, the closest competitor to MACE on this metric. Obviously, negative R^2's at such a frequency and with so little conditioning information are what one would expect.[In a recent evaluation from an out-of-sample period overlapping with ours, <cit.> get nearly all negative R^2 for 20 models using vast conditioning information for prediction at the weekly frequency.] Nonetheless, two MACEs out of three escape this predicament. And it holds up in 2022. We see that MACE_100 has a marginal outperformance during the bear market at 0.23%. For other models, negative R^2 are again the norm rather than the exception, with notable deviations by S&P 500 (RF) at 0.09% and EW_20 (RF) at 0.33%. However, in the latter case, it delivers the worst R^2 of any model for the other three subsamples. In fact, MACE_100 is the only model with four positive R^2 out of four.

It is interesting to get a sense of where MACE's out-of-sample R^2's stand with respect to random alternatives, like single stocks (in effect corner solutions of the MACE) and random portfolios. Especially in the latter case, it can be seen as an implicit out-of-sample statistical test for the procedure itself. It aims at answering: if we were to draw random stock combinations and predict them with RF rather than running MACE, how many of those would fare better out-of-sample? The location and shape of the distribution will also be informative about how much room MACE had to find an MMLP. We draw 150 such random portfolios with w≥ 0 imposed, and 150 without. Figure <ref> reports such results separately for the R^2 computed on CovidW1 and on the rest of the test set (two complementary samples). First, we observe how the prospects of predictability change from CovidW1 to the remainder of the sample, with the random portfolios' distributions showing negative means and clear negative skewness without CovidW1 data for both N=20 and N=100. Within CovidW1, the random portfolios' R^2's see a sharp decrease in negative skewness for both portfolio sizes. In fact, for N=100, skewness visibly becomes mildly positive. Additionally, we can see a shift in location, with the N=20 random portfolios' mean being approximately 0 and that of N=100 being mildly positive. A similar pattern is observed for single stocks, but it is inevitably rougher given N<300 and single stocks having higher variance than linear combinations of them.

Those distributions, in themselves, highlight some things we already know, like the immense difficulty of finding predictability outside of "bad" economic times: almost 95% of random portfolios have a negative R^2 outside of CovidW1, yet R^2's greater than 0 constitute approximately 50% of the CovidW1 ones. Obviously, MACE's objective is not only to beat those odds by systematically landing on the "good" side of the distribution, but also to strive for the rightmost part of it. And it is what we see in all four cases of Figure <ref>. Indeed, the red line is to the right of the dashed line, meaning we can reject the null (at the 5% confidence level, against a one-sided alternative) that MACE is a randomly drawn portfolio. Also, the red line is always to the right of the best in-sample single stock or random combination.

Economic Results. From Table <ref>, we see that MACE generates the highest return for all portfolio class sizes, with MACE_100 leading the march with a massive 41.36% annualized return over 2016-2022.
MACE_20 and MACE_50 deliver more “reasonable” returns of 23.1% and 20.42%, which are nonetheless meaningfully higher than those of alternatives. Closest contenders include MACE (PM) itself and sometimes MinVar (PM or RF, depending on N). Hence, only MACE appears to consistently deliver the highest return. Obviously, that could all be at the expense of significantly more risk. Indeed, we see that the variance of MACE's returns often appears to be higher than that of alternatives, since its SR is narrowly behind that of alternatives for N=20 (0.99 vs. 1.04) and N=50 (0.91 vs. 0.96). Note, however, that MACE is consistently among the top contenders whereas, e.g., MinVar (RF) delivers the leading 0.96 for N=50 and the second-to-last 0.42 for N=20. For N=100, MACE gets a dominating annualized SR of 1.59, suggesting most of the 41.36% return it gets is not due to unshackled variance. The closest alternative among all N's is SR=1.04 for MACE (PM, N=20). Figure <ref> is telling about how that came about. First, there are the eye-grabbing flash gains during the onset of the Pandemic. These are unpacked in their own section <ref>. Given that major crises only appear to occur at a semi-decadal frequency, it is natural to wonder whether MACE_100's SR would still be as startling without its miracle run in March 2020. Peeking at Figure <ref>, it seems so: translating the red line downward by 0.6 from 2020 onward still makes it land comfortably above competitors in terms of final cumulative returns. Moreover, those are increasing in a nearly linear fashion starting from 2018. The SR excluding the highly profitable month is 1.32, which confirms the visual observations. Looking at MACE_100 (PM)'s cumulative returns is also revealing for MACE_100's overall performance. The latter is magnified during the early stages of the Pandemic because the former (i.e., the raw portfolio itself) suffers minimal losses. However, we see that MACE_100 (PM) proves inferior to its RF-based counterpart by delivering approximately 0 returns in 2018, 2020 as well as 2021, and losses in 2022. The r^A_2022 column also helps in understanding overall returns. The variance among results for this metric is vast. Some strategies suffered important losses, yielding returns that are quite correlated with the overall bear market. Others turned in a massive profit. All three MACEs do fine, with MACE_100 delivering almost 24%, albeit with apparently (Figure <ref>) higher variance than in previous years. MACE_20 is lowest among the three, with 5.03%, facing a major setback midyear by losing 10% in a bit more than a week. We will see in section <ref> that such setbacks can be smoothed out, most notably, by “bagging strategies”. While MACE_20 and MACE_50 arguably get a head start for 2022, with raw portfolios (PM) delivering marginally positive returns, that of MACE_100 (PM) is a dramatic -16.81%. In all three cases, it is the effect of active trading using RF predictions that avoids the failure of many strategies in 2022. It is also true, to a lesser extent, for S&P 500 (RF), which turns in 2.80% while the PM version suffers major losses. In fact, there are quite a few RF-based strategies that are profitable in 2022; however, unlike MACEs, those are not accompanied by enviable returns in less turbulent years. The large positive jumps in returns that many RF-based methods experience may be seen unfavorably by SR. 
From our inspections, many RF-based strategies generate returns that are always more leptokurtic – almost Laplacian-looking – than strategies based on the prevailing mean. For instance, excluding CovidW1, MACE_100 has a kurtosis of 5.96 vs. 2.43 for MACE_100 (PM), and S&P 500 (RF) has 9.52 vs. 6.87 for its prevailing mean version. Again with the aforementioned exclusion zone, we also notice that MACE_100 returns have positive skewness (0.2), which is highly preferable, whereas all other strategies have negative skewness. The other two RF-based MACEs do not have positive skewness, but theirs is always the least negative within their portfolio-size group. Including CovidW1 only magnifies (substantially) the extent of such findings. Thus, a performance measure that appreciates such subtleties about the return distribution, going beyond mean and variance, may give a different assessment of portfolios predicted by RF. This is indeed what we find. MACE (non-PM) portfolios always deliver the highest Ω within N-wise groups. Notable changes in ranking with respect to SR are MACE_20 going from (narrowly) third to clearly first within its group, MACE_50 moving from second to first within the N=50 group, and S&P 500 (RF) beating S&P 500 (PM). Among other stylized facts, we observe that, in line with theoretical observations in section <ref> – equation <ref> in particular – the variance of the MACE portfolio return before trading is higher than that of all other portfolios. For instance, MACE_100's unconditional standard deviation is 1.54 out-of-sample excluding CovidW1 and 1.44 in-sample, whereas the MinVar portfolio (effectively MACE_100 at s=0) has 0.56 and 0.69. Thus, in its quest for higher predictability and returns, MACE creates a synthetic asset whose unconditional volatility is higher and which is tamed through more accurate predictions. We see that this higher volatility is no out-of-sample “surprise”, as training and test nearly coincide on this metric. This is not true for MinVar. Those observations highlight both opportunities and perils for the MACE. The opportunities have already been documented widely. The peril is that of overfitting: letting MACE overfit will lead to higher unconditional variance out-of-sample than what can be inferred from the in-sample performance. From a risk-reward perspective, this could be a lose-lose situation. Clearly, the MACEs studied here are doing fine in that regard, but one should always bear in mind the dual costs of overfitting in the MMLP problem. In Appendix <ref>, we report how MACE_20 and MACE_100 performances are affected by introducing various levels of transaction costs (TC). MACE_20 is mostly unaffected under reasonable TC levels, losing about 1% in returns for any 0.5-percentage-point increase in TCs. However, given the highly liquid nature of the considered stocks, a more realistic ballpark is 0.1%, as recommended by <cit.> for trading Volatility Lab's 177 sustainable funds. MACE_100 inevitably takes larger hits (at worst -10% returns with 1% TCs). Nonetheless, even in such strenuous conditions, it remains by far the dominant strategy as per all performance metrics. Smaller TCs keep MACE_100's annualized return still above 35% and its risk-reward in an enviable place. §.§ Understanding March 2020 [Figure: MACE_100 during CovidW1] It is quite a sight in Figure <ref> that MACEs generate massive gains during the short-lived Pandemic recession. 
These can obviously be linked back to the abnormally high R^2's observed for that era.[Increased predictability in the early Covid era has also been noted in in-sample analyses such as <cit.>, who found that the stock returns in certain industries are statistically significantly more predictable during 2020Q1 than a few months prior.] A similar result is found for S&P 500 (RF). However, S&P 500 (RF) is brought down by a dismal performance for the three years preceding 2020. A similar pattern holds for EW (RF) (unreported), where returns are dismal for the entirety of the test set. This highlights that RF-based models can most definitely fail to outperform basic strategies and that the MACE “treatment effect” is paramount in giving them consistently the upper hand. MACEs' and S&P 500 (RF)'s winning streak (both for N=20 and N=100) spans approximately 21 business days starting from early March 2020. Its intensity is greater for MACE_100 and S&P 500 (RF), where wealth is approximately tripled in one month. Indeed, zooming in on these precise days, MACE_100 hits a local R^2 of 20% and its sign prediction accuracy is 78%. Needless to say, losing money only once every five days, thus compounding returns most days of the week in a time of high volatility, is what generates the rocket increase in profits during March 2020. Predictability patterns are clearly visible in Figure <ref>, where the RF embedded in MACE predicts many of the bounce backs and the overall zigzagging nature of the market in times of crisis. To the naked eye, it looks like RF, in part, uses strong negative autocorrelation from one day to the next – thus, fast-paced mean reversion. Table <ref> verifies such intuition with a first-order approximation to nonlinear day-to-day dynamics (a small computational sketch of this check is given at the end of this subsection). We find that the AR(1) coefficients for both MACE_100 and the S&P 500 are strongly negative and highly statistically significant (t-stats above 4) during the first four months of the Pandemic. RF predictions provide significant economic value in this period because they precisely capture just that—those are strongly negatively correlated with yesterday's return. With Corr(r̂_t^RF, r_t-1) for the two targets in the vicinity of -0.6, it is natural to ask where RF may have learned that. It is equally natural to conjecture it did so during the last major financial crisis. The third panel of Table <ref> verifies that. We see, albeit in a marginally more subtle way, that (i) returns are significantly negatively autocorrelated on a daily basis and (ii) RF predictions leverage the phenomenon to a non-trivial extent. While some statistically significant mean-reversion remains in MACE_100 outside of the pandemic, it is smaller by orders of magnitude. For the S&P 500, it is completely gone, as one would expect. Thus, given the wide heterogeneity, a successful model must detect ex-ante whether we are in a state of strong daily mean reversion or one where there is little to none at all. It is an aspect in which the nonlinear nature of RF becomes key. This state-dependence is obviously only one of the many time series nonlinearities that tree ensembles can capture <cit.>. Finally, it is worth remembering that, even within those two hypothetical states, the linear approximation in Table <ref> only captures a fraction of RF's nonlinear predictive dynamics. This is particularly true outside of CovidW1, where RF's prediction correlates very little with the first lag. Even during CovidW1, it is worth noting that the first lag explains less than 40% of the variance of RF predictions. 
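A minimal sketch of this first-order check follows (in Python; the helper names are our own, and the inputs are assumed to be aligned daily arrays of realized returns and RF forecasts over the subsample of interest).

```python
import numpy as np

def ar1_slope_tstat(r):
    """OLS slope and t-statistic of r_t on r_{t-1} (with an intercept)."""
    y, x = r[1:], r[:-1]
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], beta[1] / se

def forecast_lag_correlation(r_hat, r):
    """Corr(r_hat_t, r_{t-1}): how much the forecast loads on yesterday's return."""
    return np.corrcoef(r_hat[1:], r[:-1])[0, 1]
```

Applied to a crisis subsample versus a calm one, these two quantities reproduce the kind of contrast discussed above: a strongly negative AR(1) slope together with a forecast that loads heavily on the previous day's return.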
§.§ Bagging Strategies and Other Algorithmic Refinements In a real-life implementation of a daily trading strategy, one may be more than willing to exchange transaction costs for computational costs. This is particularly true for MACE-based strategies, where all the computational burden is incurred while finding w, which remains fixed ever after, limiting daily computing costs to that of making one prediction with a Random Forest. In this subsection, we activate two algorithmic refinements that help in improving the N=20 results in terms of annualized returns and Sharpe Ratios. First, we introduce a modification to the Ridge Regression step that brings back part of the unconditional expected return constraint that is usually part of classical mean-variance portfolio optimization. In our setup, it takes on a second nature as unconditional return regularization. Second, we introduce “bagging of strategies” as a reasonably intuitive way to (i) tame the variance of returns and (ii) decrease the dependence of the solution on initialization values. This latter refinement, basically ensembling strategies, multiplies the computational cost by the size of the ensemble (B). The Return of the Minimum Return Constraint. Given that MACE has a conditioning set and builds a portfolio purposely for the sake of actively trading it, imposing an unconditional expected return constraint in the Ridge step does not appear nearly as natural as it would in a traditional mean-variance optimization setup. Notwithstanding, the close relationship between MACE and MinVar made explicit in section <ref> suggests that bringing back part of the constraint, in one form or another, could be beneficial. One such scenario occurs if MACE's predictive power is overstated in-sample, which leads it to overly rely on predictability to make an otherwise highly unprofitable portfolio profitable. Mere overfitting can lead the out-of-sample solution of MACE to be closer to MinVar (where the conditional mean is replaced by an unconditional mean). For those reasons, unconditional return regularization may prove a helpful failsafe. The implementation is simple: in MACE_μ≥μ, we turn off the intercept in the Ridge Regression step and add a positive value ξ=1 to f̂_s(𝐗_t). This has the effect of tilting the solution towards an allocation which has a higher unconditional expected return in-sample. Intuitively, it pushes the ridge coefficients not only to reward each individual stock's association with the predictions f̂_s(𝐗_t), but also to reward historically higher-growth stocks. Conversely, taking short positions on, e.g., Apple and Amazon, is discouraged unless completely hedged with other assets. Bagging Strategies. Interestingly, and, in fact, as a matter of necessity, we start by showing that the ensemble of strategies can be reduced to a single aggregate strategy, thereby avoiding having the bag of strategies multiply transaction costs by B. First, it is worth remembering that, unlike when predicting a fixed target, it is not possible here to merely average predictions. Those are attached to inevitably heterogeneous targets, and there is no guarantee that the average prediction will be appropriate for the average portfolio. An extreme case is an ensemble of two models in which estimation b is the mirror image of estimation b', so that w_b + w_b' = 0: then both the predictor and the predictand are 0 for all observations (even though each estimation separately had a positive R^2). 
Thus, the ensembling logic must be pushed further than merely averaging models, and rather bag whole strategies. The proposed bagging scheme is the following: we run MACE B times with different initializations, collect the B predictions, translate these into B positions through (<ref>), and then, finally, average the returns of a total of B strategies. However, stated as such, this would imply carrying at worst N × B trades a day instead of N. Fortunately, the bag of strategies can be rewritten so that it collapses to a single strategy. Precisely, we have that r_t^bag = 1/B ∑_b=1^B ω_t,b ∑_j=1^N w_j,b r_j,t = ∑_j=1^N r_j,t ( 1/B ∑_b=1^B ω_t,b w_j,b ) = ∑_j=1^N w_j,t^bag r_j,t, where w_j,t^bag = 1/B ∑_b=1^B ω_t,b w_j,b. In words, the bag of strategies is equivalent to a single strategy where the daily weight on stock j is w_j,t^bag, implying at most N transactions per day (a short computational sketch of this collapse is provided below). We introduce two sources of randomization to make the w_b's differ. First, the minimum variance solution used for initialization is randomized by estimating the covariance matrix on a random subsample of 70% of the training data. Second, we use decreasingly stochastic optimization steps via random observation weights whose variance decreases with iterations (κ_t,s ∼exp(s)) in the Ridge part (Step 5 in Algorithm <ref>). This mild source of randomness is completely shut down when it becomes negligible (κ_t =1  ∀ t if s> s_max/3). The inspiration behind this randomization device comes from some implementations of Boosting where trees are fitted on subsamples of the training data, or simply from stochastic gradient descent in neural networks. Beyond creating a diversified ensemble, it may help in avoiding early trivial overfitting solutions and in getting unstuck from local minima. The choice of the exponential distribution (vs. subsampling) allows us to keep all observations in at all times and is motivated by the Bayesian Bootstrap (see, e.g., the treatment in <cit.> or <cit.>).
Table: Refinements for MACE_20
                  r^A     SR     Ω
MACE              23.10   0.99   1.18
MACE_bag          20.60   1.07   1.20
MACE_loose bag    23.03   1.36   1.25
MACE_μ≥μ          29.76   1.17   1.19
Notes: Economic metrics are r^A := Annualized Returns, SR := Sharpe Ratio, Ω := Omega Ratio. All statistics but SR and Ω are in percentage points. Returns and risk-reward ratios are based on trading each portfolio using a simple mean-variance scheme with risk aversion parameter γ=5. Numbers in bold are the best statistic of the column.
[Figure: Cumulative Returns]
We use B=50. Note that, by construction, the annualized return of MACE_bag will be the average of the B annualized returns (by the linearity of means). However, its volatility may be lower than the sum of each run's volatility, resulting in improved Sharpe Ratios. Another refinement is MACE_loose bag, which is inspired by RF itself. The rule for λ is changed from R^2_s,train(λ)=0.01 to R^2_s,train(λ)=0.02 so as to decrease the “bias” of base learners at the cost of increased variance, finally letting the ensembling step take care of bringing down the overall variance. Results are reported in Table <ref>. All proposed extensions refine the original MACE_20 results. MACE_μ≥μ dramatically increases expected returns, which comes at the cost of a reasonable increase in volatility. It outperforms the simpler MACE specification for both risk-reward ratios, although the improvement is incremental for Ω. 
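Before turning to the bagging results, here is the promised sketch of the collapse above. It is a sketch only (array names are ours), assuming omega stacks the daily positions ω_t,b of each run and W the static weight vectors w_b.

```python
import numpy as np

def collapse_bag(omega, W):
    """
    omega : (T, B) daily positions omega_{t,b} for each of the B runs.
    W     : (B, N) static portfolio weights w_{j,b} from each run.
    Returns the (T, N) daily stock weights of the single equivalent strategy:
    w_bag[t, j] = (1/B) * sum_b omega[t, b] * W[b, j].
    """
    return omega @ W / W.shape[0]

# Daily bagged return given the (T, N) stock-return panel R:
# r_bag = np.einsum('tj,tj->t', collapse_bag(omega, W), R)
```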
The bagging portfolios have a markedly different behavior: they have a marginally decreased r^A but greatly decreased variance. As a result, MACE_loose bag and MACE_bag both improve over the reference specification, with MACE_loose bag being the superior refinement both in terms of SR and Ω. The reasons behind such remarkable improvements are apparent in Figure <ref>: both MACE_bag and MACE_loose bag follow an almost linear – in log-terms – trajectory without suffering from any outstanding drawdowns. In particular, the latter fares as well in 2022 as in any other year, and mid-March 2020 is only a momentary interruption of its otherwise steady exponential growth path. In Appendix <ref>, we report that MACE_loose bag's and MACE_20's outperformance is mostly unabated when accounting for 1% transaction costs, a very conservative estimate in this application to the largest capitalizations on the NASDAQ. Lower transaction costs deliver nearly identical returns to those reported above, and along with them nearly identical SR's and Ω's. § APPLICATION II: MONTHLY STOCK RETURNS PREDICTION Lastly, we test MACE on monthly data. We shy away from any eccentricity and consider building portfolios with individual stock returns from CRSP for firms listed on the NYSE, AMEX, and NASDAQ <cit.> using the 16 macroeconomic indicators of <cit.> as the basis for 𝐗_t. We conduct a pseudo-out-of-sample expanding window experiment with a training set starting in 1957m3. The test set originally starts in 2005m1 and ends in 2019m12. We re-estimate MACE and the suite of competing models every 3 months, and at each t, r_t+1 comprises all the stocks that have been continuously present in the dataset from 1957m3 until t. Accordingly, the number of stocks included in 2005 is 192 and shrinks to 113 in 2019.[Naturally, if the attrition were to become problematic, one could alleviate it by considering a rolling window instead.] We also report results when starting the test set in 1987 (as <cit.> do) but regard those starting in 2005 as more indicative of performance since MACE not only needs to learn a complex f (as in any ML-finance paper) but also g, and all that with time series of limited length. Also, predictability is known to be harder to find starting from the mid-2000s, with many studies reporting important gains from ML-based stock returns forecasting, but those nearly all take place before the start of the new millennium <cit.>. Regarding variable transformations, we subtract the risk-free rate from each stock return in r_t+1. Considering the exact composition of 𝐗_t, we first-difference the clearly non-stationary series in <cit.> and include 12 lags of each. The evaluation metrics are similar to those introduced in section <ref>. We complement those with the maximum drawdown (DD^MAX) as in <cit.>: DD^MAX = max_{0 ≤ t_1 ≤ t_2 ≤ T} (Y_t_1 - Y_t_2), where Y_t is the log cumulative return from t_0 through t. For computing MSE_PM^OOS in the denominator of the out-of-sample R^2, we again compute PM as the historical mean of the training set. Given that the available stocks change throughout the dataset, so does the composition of the EW and MACE portfolios. Moreover, even with a fixed basket of available stocks, MACE's portfolio weights can change slowly as new data points enter the training set. Accordingly, the R^2 (and the other metrics as well) are not one for a fixed target, but rather for a fixed “strategy”. 
In other words, when y_t'+1 and y_t''+1 enter MSE_MACE and belong to two different estimation windows, they are likely coming from two distinct time series. Since those share the same unconditional variance by construction (1, or some other standardization necessary for identification in (<ref>)), they can be aggregated without window t' errors driving results more than those of window t'' for mechanical reasons. When trading the MMLP, we again solve the prototypical mean-variance problem for a single return y_t+1 as stated in Equation (<ref>). As in <cit.>, the risk aversion parameter γ is set to 3 and ω̂_t+1 is constrained to lie between -1 and 2 for reasonable allocations. §.§ Results We report relevant summary statistics for our monthly exercise in Table <ref>. Similar to section <ref>, we show results for MACE and its corresponding modifications/refinements as well as EW and S&P 500. We report the relevant statistics for two distinct out-of-sample periods as outlined above. Log cumulative return plots are shown in Figure <ref> and R^2 comparison of MACE to Random Alternatives plots are available in Figure <ref>. Economic Results. Table <ref> shows that MACE clearly outperforms any other model along each evaluation metric. The annualized average return of r^A = 18.70% beats the closest competitor (EW (RF)) by a stunning six percentage points. Yet, this achievement does not seem to come at the cost of extreme volatility, with the Sharpe Ratio and the maximum drawdown both dominating the competitors by large margins. Even though the proposed refinements might achieve superior performance in a specific metric – MACE_bag gives the investor a smaller max drawdown, whereas MACE_μ≥μ can even boost the annualized return – neither modified MACE can keep up with the Sharpe Ratio of SR=1.05. This suggests that MACE strikes an appealing balance between risk and reward. In Appendix <ref>, we document that those findings are unchanged when accounting for various levels of transaction costs. This dominance, however, fades for the earlier period between 1987 and 2004. As outlined above, the shorter T dimension limits MACE's ability to learn a complex signal, if there is any, resulting in diminished out-of-sample performance. As discussed in section <ref>, in such a situation it may be worthwhile to tilt MACE away from pure predictability and towards a safeguard of higher unconditional return instead. This might lead to sacrificing some in-sample predictability, but prove beneficial out-of-sample. We achieve such a tilting with MACE_μ≥μ, which both generates the highest annualized average returns and scores the highest Sharpe Ratio (SR=0.61). Figure <ref> gives us deeper insights into the underlying return dynamics that are hidden behind the single summary figures in Table <ref>. It is interesting to see how well MACE navigates the Great Recession (GR). While all competitors either take a deep hit or drift sideways (EW (RF)), MACE takes off already during the first half of GR and even accelerates its growth rate at the outset. This behavior is different from MACE's behavior during earlier recessionary periods in the early 1990s and 2000s, where MACE takes visible hits. Yet, the results for the period 2005-2019 suggest that MACE has learned from earlier mistakes. With an R^2 of 1.88% during the 2005-2019 subperiod, EW (RF) also fares well, most notably by avoiding heavy losses during GR. 
However, and as is often the case with more basic approaches, predictability is heavily localized during tumultuous economic times, leading EW (RF) (and similarly S&P 500 (RF)) to overall underperform their prevailing mean counterparts both in terms of returns and volatility. Statistical Results. In Figure <ref> we see that the share of predictable Single Stocks is very similar across the two out-of-sample periods, whereas the distribution for Random Portfolios clearly shifts to the right during the 2005-2019 era. Thus, there is exploitable predictability, and MACE's job is to find promising w ex-ante. We see that it does so: it is superior to about 95% of randomly drawn portfolios. In the first subperiod, where more than 95% of such portfolios deliver negative R^2, MACE suffers a setback along with the crowd. With only ∼5% of portfolios found to be predictable ex-post, it is quite a daunting task to land in the promising region ex-ante. As was also observed in the daily results, MACE appears particularly apt at finding the MMLP when there is a reasonable number of possibly successful candidates to work with. In the opposite scenario, where no or very few RFs attain any predictability, MACE inevitably struggles. Note that the large mass to the right of 0 in Figure <ref> is not necessarily indicative of market inefficiency given that one still needs to find the relevant vectors ex-ante. It, however, highlights a non-trivial number of possibilities for it to occur. On the other hand, the absence of a mass on the right side of 0 (as in Figure <ref>) is suggestive of efficiency, conditional on choices for stocks, information set, and predictive function. §.§ Predictability through (Nonlinear) Time-Varying Risk Premia? Figure <ref> makes clear that MACE's stellar performance results predominantly from not tanking during the Great Recession and climbing steeply at its outset. This pattern is different from the other models. While EW (RF) did not tank during the crisis either, its engine sputtered post-crisis. In contrast, MACE (PM)'s and EW (PM)'s growth picked up relatively quickly in the aftermath of the Great Recession, however, only after having tanked deeply throughout the crisis period. Only MACE seems to get the market timing right for both the crisis period and the subsequent recovery. To shed light on which economic indicators are driving this success, we use Shapley Values, a well-established and ever more popular tool to quantify the contribution of predictors in opaque models <cit.>. We refer the reader to <cit.> for a generic textbook treatment and <cit.> for a focus on its applicability to financial and macroeconomic forecasting. Here, we dedicate our attention to the out-of-sample period 01/2008 - 12/2009. Relevant details regarding the construction of our variable importance metric from expanding windows are relegated to Appendix <ref>. Figure <ref> reports the five most important predictors for MACE and EW (RF) separately. The last panel combines the importance of all the lags of a given indicator. For MACE, the picture is dominated by a single strong predictor: the eight-month lagged stock market volatility (𝙻8_𝚜𝚟𝚊𝚛).[Of course, the predictor itself had high variance during this period. In Appendix <ref>, we address this concern and show that 𝙻8_𝚜𝚟𝚊𝚛 stands the test of adjusting the Shapley Values for the indicator's volatility.] Grouping all lags together in Figure <ref> reinforces the case for the overall importance of 𝚜𝚟𝚊𝚛 itself. 
This indicates that MACE may leverage a subtle form of time-varying risk premium, as originally formalized in the ARCH-M model of <cit.>, its extension with time-varying parameters (TVP ARCH-M, ), or the GARCH-in-mean of <cit.>. These models allow for an asset's volatility to directly feed into the conditional mean of the asset's return, allowing for time-varying risk premia. "Subtle" refers here to the pattern being less evident than what one would expect from these classic models. The reason for this is threefold: first, there is a significant delay. Second, the volatility metric undergoes a highly non-linear transformation. Third, it is not MACE's previous volatility that enters the conditional mean, but that of the overall market as proxied by the S&P 500. When it comes to EW (RF), a few differences stand out: the contributions are more evenly distributed, with 𝙴12 coming in first, followed by stock market volatility. The other features are mostly shared with MACE's prediction. Yet, EW (RF) predictions partly go awry post 2008, and MACE's peculiar use of 𝚜𝚟𝚊𝚛 is the most plausible explanation for it avoiding this predicament. This is quite visible in Figure <ref> and <ref>, where we show the predictions of MACE and EW (RF) compared to the corresponding realized return. MACE's portfolio has a slightly less volatile return starting from 2009 and the associated predictions lie confidently in positive territory. EW's realized return has higher highs and lower lows and predictions are much more timid, that is, they are much closer to their unconditional mean. In Figure <ref>, we plot again the prediction for MACE and in each month, we report the single most important feature to its left-hand side. On the right, it is the single most important group (of lags).[See Appendix <ref> for further details about the calculation. For the grouped version, we sum the absolute Shapley Values at a given point in time across all lags of a particular feature i. This plotting scheme is inspired by <cit.>. ] It is striking that the string of positive predictions at the outset of the Great Recession are all attributable to 𝚜𝚟𝚊𝚛, and in particular, its 8^th lag. Yet, the prior evidence on the relevance of 𝚜𝚟𝚊𝚛 is mixed. In applications with sample periods ending prior to the Great Recession, <cit.> gets positive results while <cit.> and <cit.> get negative ones. Using data through 2013, <cit.> find 𝚜𝚟𝚊𝚛 gaining forecasting power for S&P 500 excess returns post-1985. MACE differs by being nonlinear and not looking at a pre-specified index. However, nonlinearity by itself appears to be insufficient as per EW (RF) not leveraging 𝚜𝚟𝚊𝚛 in any meaningful way. Lastly, we investigate how 𝙻8_𝚜𝚟𝚊𝚛 appears to contribute. Figure <ref> shows a scatter plot of the Shapley Values for 𝙻8_𝚜𝚟𝚊𝚛 (the "local contributions") with log(𝙻8_𝚜𝚟𝚊𝚛× 100) on the x-axis.[The dots in blue represent the Shapley Values over the training-set of the expanding window with an OOS start date in 01/2007. The red dots are the OOS Shapley Values collected for the expanding window 01/2007-12/2010. See Appendix <ref> for a detailed description of the collection process.] The yellow line represents a suggestive fit as the mean of all point realizations of log(𝙻8_𝚜𝚟𝚊𝚛× 100) < -0.25 and the mean of all realizations of log(𝙻8_𝚜𝚟𝚊𝚛× 100) ≥ -0.25. While being inherently imperfect because of 𝙻8_𝚜𝚟𝚊𝚛's various interactions with other predictors in RF, this fit is nonetheless instructive. First, the sign is right: more risk commands a higher premium. 
At a level of around 𝙻8_𝚜𝚟𝚊𝚛≈ 0.0078, which translates into a measure of daily stock-market volatility of σ = √(𝙻8_𝚜𝚟𝚊𝚛)≈ 0.088 ≡ 8.8%, market uncertainty seemingly triggers a regime of higher expected rate of return on the MACE portfolio. This observation speaks to the findings in <cit.> that the volatility feedback channel emerges during times of elevated volatility. Here, the suggestive fit points to a simple two-regime relationship: a first one where there is basically no risk premium, and a second where there is a constantly higher premium, irrespective of the specific values of 𝙻8_𝚜𝚟𝚊𝚛 as long as it is above a certain threshold. Figure <ref> shows how this nonlinear relationship plays out over time. We plot the scaled versions of the realized stock variance (𝚜𝚟𝚊𝚛), the local Shapley Values for the grouped 𝚜𝚟𝚊𝚛, and the realized MACE returns. The positive relationship between 𝚜𝚟𝚊𝚛 and the realized MACE returns clearly emerges from the midst of the Great Recession in late 2008 onwards. The delay is visible from the red bump appearing much later than the original 𝚜𝚟𝚊𝚛 impulse. The nonlinearity is also discernible from the red line following a very different pattern than the orange one – perfect linearity would imply a mere rightward translation of the orange line. The red plateau is well timed with high (unconditional) MACE realized returns at the outset of the Great Recession. From that and other observations, we can conclude that part of MACE's success in that era is uncovering a portfolio with a well-dissimulated, yet stronger, reaction to changes in volatility regimes. § CONCLUDING REMARKS We introduce the MACE algorithm to construct maximally machine-learnable portfolios. It does so by directly optimizing the portfolio weights to make life easier for the prediction function. As we have discussed, this does not neglect variance, quite to the contrary, as the MMLP problem is intimately linked to traditional mean-variance optimization. Advantages with respect to the various strands of literature building linear mean-reverting portfolios are MACE's flexibility, through the use of Random Forest, and its scalability. Peeking into the future, those qualities are essential to discover increasingly complex patterns of predictability in an era where a flock of humans and machines are constantly on the lookout for those. With respect to key ML applications in empirical asset pricing, MACE provides a low-maintenance (data- and computation-wise) alternative which can deliver the goods leveraging only basic time series data, or lagged returns themselves. Our two applications, daily and monthly trading, illustrate that by scoring enviable returns and Sharpe Ratios in evaluation periods where gains from using ML methods have often been anticlimactic. There are quite a few directions for future research, beyond more or less straightforward applications to new assets and information sets, and changing ridge regularization for any other shrinkage one's heart desires. First, MACE could be extended to solely learn buy-sell signals where the cutoff point itself is trainable within the loop. In that way, we could potentially construct “episodic portfolios” where trading rarely occurs and typically does so when a rarer event is expected with moderate uncertainty. Second, some structured form of nonlinearities could be accommodated on the left-hand side of the equation. 
While from a statistical standpoint nothing is impossible, from a financial one the LHS must remain a tradeable combination of securities. Nonetheless, some nonlinear transformations of returns can be approximated by appropriately designed options, and MACE could learn a maximally predictable combination of financial instruments. Third and more ambitiously, MACE's alternating EM-style algorithm could potentially be replaced by a single hemisphere neural network (à la <cit.>) that directly minimizes the MMLP loss function, combined with bagging strategies to deal with the inevitability of overfitting and of finding a trivial solution. As discussed earlier, there are numerous headwinds to such modifications and bagging by itself may not be enough. But, keeping in mind deep learning's edge with large and non-traditional data, the additional efforts could perhaps bring MMLPs to new highs. § APPENDIX §.§ Transaction Costs In this section, we quantify the reduction in economic performance due to transaction costs (TC). To do so, we calculate TC_t = c×∑^N_n=1 | r_n,t(ω_t w_n,t - ω_t-1 w_n,t-1) |, where r_n,t is the return of stock n in period t, w_n,t is the portfolio weight of stock n at time t, ω_t is the trader's positioning at time t, and c is the share of the absolute return after trading that is to be allotted to transaction costs <cit.>. Daily Results. Given that our strategy utilizes highly liquid stocks listed on the NASDAQ, we expect transaction costs (TC) to be low. Thus, we set c∈{0.1%, 0.5%, 1%}. The lowest c is recommended by <cit.> for trading Volatility Lab's 177 sustainable funds that are very large and liquid.[See https://vlab.stern.nyu.edu/climate.] Table <ref> reports the corresponding summary statistics for various MACE portfolios after subtracting TC_t from the realized returns r_t. For the two portfolios with N=20 stocks, annualized returns before TC amounted to 23.16% for MACE_20 (Table <ref>) and to 23.10% for MACE_loose bag (Table <ref>). As Table <ref> shows, the fallout due to transaction costs is well contained for these portfolios. Evidently, the degradation has to be higher for MACE_100 since it implies trading five times more stocks. It peaks at a reduction of about 10% when assuming the most pessimistic c. The other reductions are smaller by construction, and in all cases, MACE_100 still dominates alternatives in terms of the three economic metrics. Note that for all three MACEs, TC-adjusted r^A, SR, and Ω are still unquestionably well above what is reported for competing strategies, including passive ones (with TCs ≈ 0) and more proactive ones. Monthly Results. Table <ref> shows the monthly returns for MACE and its refinements after accounting for several levels of transaction costs. We now use c∈{0.5%, 1%, 2%}, which is highly conservative and accommodates the fact that the out-of-sample period covers about 3 decades. It is also a more prudent choice given that, unlike in the daily applications, the considered stocks are certainly large-caps, but not necessarily the largest caps in an era of lessened TCs. Overall, we see that TCs only eat up a minor fraction of average monthly returns, such that the corresponding risk metrics, SR and Ω, also remain in the neighborhood of those reported in Table <ref>. §.§ Variable Importance Calculations In our monthly expanding window exercise, both periods are obviously not static but “evolving”. 
With e = 1,...,E expanding windows, T^ins_e denotes the end of the in-sample period for window e. Hence, we collect the corresponding Shapley Values for the OOS period as follows: for each variable i, we collect only those Shapley Values that fall into the interval starting with the month following the end of the current window's in-sample period (T^ins_e + 1) and ending with the end of the in-sample period of the next expanding window (T^ins_{e+1}). As we expand our in-sample period by another three months each quarter, the period between T^ins_e + 1 and T^ins_{e+1} amounts to three months. The corresponding OOS variable importance of variable i (VI^oos_i) is thus calculated as follows: VI^oos_i = ∑_e=1^E ∑_{t=T^ins_e + 1}^{T^ins_{e+1}} | ϕ_i,t |. Taking Figure <ref> as an example, where the OOS period runs from 01/2007 through 12/2010: we start in 12/2006 and collect the first three local Shapley Values of the OOS period (01/2007-03/2007). We then expand our training set until 03/2007. Hence, we collect the first three local Shapley Values of the new OOS period (04/2007-06/2007). We proceed until our training set ends in 09/2010. Summarizing indicator i's contribution across all its lags, we calculate the grouped VI for group g (VI^oos_g) as follows: VI^oos_g = ∑_e=1^E ∑_{t=T^ins_e + 1}^{T^ins_{e+1}} ∑_{i ∈ g} | ϕ_i,t |, where g includes all lags with which indicator i is represented in the feature set. §.§ Volatility-Adjusted VI-Plots In Figure <ref> we show volatility-adjusted VI plots. That is, we adjust VI^oos_i in Equation (<ref>) by indicator i's ratio of OOS to in-sample standard deviation: AdjVI^oos_z = VI^oos_z × ( σ^oos_z/σ^ins_z)^-1 for z = i, g, where σ^oos_i is the standard deviation over the OOS period (here: 01/2008 - 12/2009) and σ^ins_i the standard deviation over the in-sample period (here: 03/1957 - 12/2007), respectively. For the grouped case (AdjVI^oos_g), the standard deviation (σ^oos_g) is calculated as the standard deviation of the moving average of indicator i, where the length of the moving average corresponds to the number of lags (here 12) with which i enters the predictor matrix.
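A small sketch of these appendix computations follows; it assumes (with our own naming) that the local Shapley values of each expanding window's out-of-sample months have already been collected, e.g., with a tree explainer applied to the fitted Random Forest, and it only illustrates the aggregation and the volatility adjustment described above.

```python
import numpy as np

def vi_oos(phi_by_window, horizon=3):
    """
    phi_by_window : list over expanding windows e of (T_e_oos, p) arrays holding the
                    local Shapley values phi_{i,t} on window e's OOS months.
    Keeps only the first `horizon` OOS months of each window (the model is
    re-estimated every quarter) and sums |phi| over time for each feature.
    """
    kept = [np.abs(np.asarray(phi)[:horizon]) for phi in phi_by_window]
    return np.concatenate(kept, axis=0).sum(axis=0)   # vector of VI^oos_i

def grouped_vi(vi, groups):
    """Sum feature-level importances over all lags of the same indicator."""
    return {name: vi[idx].sum() for name, idx in groups.items()}

def vol_adjusted_vi(vi, sd_oos, sd_ins):
    """AdjVI^oos = VI^oos * (sd_oos / sd_ins)^(-1)."""
    return vi * (np.asarray(sd_ins) / np.asarray(sd_oos))
```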
http://arxiv.org/abs/2306.08755v1
20230614213352
Bifurcation and periodic solutions to population models with two dependent delays
[ "Adrian Gomez", "Jose Oyarce" ]
math.DS
[ "math.DS", "math.FA", "92D25, 34K13, 34K18, 34K20, 34C20" ]
Bifurcation and periodic solutions to population models with two dependent delays
Adrián Gómez^a ([email protected]), José Oyarce^a,* ([email protected])
^a Departamento de Matemática, Facultad de Ciencias, Universidad del Bío-Bío, Casilla 5-C, Concepción, VIII-Región, Chile
* Corresponding author
2010 Mathematics Subject Classification: 92D25, 34K13, 34K18, 34K20, 34C20
We investigate the scalar autonomous equation with two discrete delays ẋ(t)=f(x(t),x(t-r),x(t-σ)), where f:ℝ^3→ℝ is a continuously differentiable non-linear function such that f(0,0,0)=0. It is shown that if the difference between the delays is constant, then one of the delays becomes a Hopf-bifurcation parameter and, in addition, the absolute stability of the trivial solution can be established. Moreover, the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions are determined by using normal form theory. The main results are applied to guarantee the existence of positive periodic solutions to Nicholson's blowflies and Mackey-Glass models, both with a delayed harvesting term. The conclusions are illustrated by numerical simulations.
First version: 8^th November, 2022. This version: July 31, 2023.
§ INTRODUCTION To describe periodic oscillations in the experiments of Nicholson <cit.> with the Australian sheep blowfly Lucilia cuprina, Gurney et al. <cit.> proposed the following model ẋ(t)=-δ x(t)+Px(t-r)e^-x(t-r). Here x(t) represents the population, δ is the mortality rate, P is the maximum per capita daily egg production, and r is the time taken from birth to maturity. In <cit.>, the authors studied the model (<ref>) by taking the delay as a parameter and, consequently, a result dealing with the existence of periodic solutions to (<ref>) is presented in the paper (see <cit.>). One of the open problems formulated by Berezansky et al. <cit.> is to investigate the Nicholson's blowflies model with a delayed linear harvesting term ẋ(t) = -δ x(t)+Px(t-r)e^-x(t-r)-H x(t-σ), where the harvesting Hx(t-σ) is a function of the delayed estimate of the true population. The results obtained in this paper are applied to find the sufficient conditions for the existence of bifurcating periodic solutions to (<ref>) (see Section <ref>). Some general cases of (<ref>) can be found, e.g., in <cit.> and the references therein. To describe the dynamical behaviour of red blood cells production, Mackey and Glass <cit.> formulated the following single-delayed model u̇(t)=-δ u(t)+ βθ^n/(θ^n+u^n(t-r)), where u(t) represents the circulating density of red blood cells, δ is the loss rate of red cells, β is the maximal red blood cell production rate that the body can approach at low circulating red blood cell numbers, r is a maturation delay, n is a positive exponent, and θ is a shape parameter. Let u(t)=θ x(t), then (<ref>) becomes ẋ(t)=-δ x(t) + P/(1+x^n(t-r)), where P=β/θ. Similar to <cit.>, in <cit.> the authors investigated the existence of Hopf bifurcations at a positive equilibrium x=x_* of (<ref>) by using the delay as a parameter (see <cit.>). Some results about oscillation and global attractivity of solutions to (<ref>) can be found, e.g., in <cit.>. The results obtained in this paper are also applied to investigate (<ref>) with a delayed harvesting strategy (see Section <ref>). 
It is well-known that scalar differential equations with two delays are more realistic than single-delayed models and, in particular, they have significant importance in biological applications. We refer to <cit.> for population models with two delays; to <cit.> for neurological models with two delays; and to <cit.> for a compound optical resonator with two delays. Therefore, the purpose of this paper is to investigate the following scalar autonomous equation ẋ(t)=f(x(t),x(t-r),x(t-σ)), r≥ 0, σ≥ 0. Here, for f(u_1, u_2, u_3), we assume that f: ℝ^3→ℝ is a continuously differentiable non-linear function such that f(0,0,0)=0 and -a= ∂ f/∂ u_1(0,0,0), -b=∂ f/∂ u_2(0,0,0), -c= ∂ f/∂ u_3(0,0,0), where a, b and c are real numbers. We apply linear methods to study the local behaviour of the trivial solution to (<ref>), which leads us to investigate the linear equation ẏ(t)=- ay(t)-by(t-r)-cy(t-σ ). From a biological point of view, it is interesting to study positive periodic orbits that oscillate about positive equilibria of the delayed model. Obviously, this analysis can be done by shifting the positive equilibrium to zero and investigating a differential equation of the form (<ref>) and its linearization (<ref>). In the last decades the delayed equation (<ref>) has become of interest to many authors, e.g., Hale and Huang <cit.> investigated the stability of the zero equilibrium of (<ref>) and determined a geometrical description of the stable regions of (<ref>) in the (r, σ)–plane for some sets of a, b and c. In the case a=0, b>0, and c>0, Li et al. <cit.> studied the local stability of the zero equilibrium of (<ref>) and the existence of Hopf bifurcations considering one of the delays as a parameter, moreover the authors studied the direction of the bifurcation and the stability of the bifurcating periodic solutions by using the method of normal forms developed by Hassard et al. <cit.>. Piotrowska <cit.> presented some remarks and corrections about the results in <cit.> and, in addition, the cases of (<ref>) with a=0, b<0, and c<0 or a=0 and b c<0 were investigated. Note that in <cit.> the bifurcation analysis is treated for two independent delays. Nevertheless, as we will see throughout this paper, this is not our case since we will assume that the difference between the delays is constant. If bc=0, r σ=0 or r=σ, then (<ref>) is equal to an equation with a single delay, which corresponds to already known results (see, e.g., <cit.>) and, therefore, in this paper we do not consider this latter cases. The main purpose of this paper is to study the existence and stability of periodic solutions to (<ref>) when the parameters a, b and c belong to suitable sets and the difference between the delays is constant, namely τ=r-σ with τ∈ℝ fixed. Based on some of the ideas and results in <cit.>, we introduce a new method to analyse the distribution of the roots of the characteristic equation corresponding to (<ref>) and, consequently, we prove the existence of local Hopf bifurcations about the zero equilibrium of (<ref>). Furthermore, by using the method of normal forms developed by Faria and Magalhães <cit.>, we study the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions. The results obtained in this paper are applied to Nicholson's blowflies and Mackey-Glass equations. The paper is structured as follows. 
In Section <ref>, by choosing a delay as a parameter, we obtain the sufficient conditions to prove the existence of local Hopf bifurcations to (<ref>) when the difference between the delays is constant and, in addition, an absolute stability criterion of the zero equilibrium is stated. In Section <ref>, we study the direction of the bifurcation and the stability of the bifurcating periodic solutions by using normal forms. The results of Sections <ref> and <ref> are applied in Section <ref> to prove the existence of positive periodic solutions to the Nicholson's blowflies population model (<ref>), and the Mackey-Glass model (<ref>) with a delayed harvesting term. A numerical analysis of the equation (<ref>) is also presented in Section <ref> using the Matlab dde23 Package. § STABILITY OF THE ZERO EQUILIBRIUM AND LOCAL HOPF BIFURCATIONS In this section, we prove main results about the existence of absolute stability and local Hopf bifurcations for the trivial solution of (<ref>). We start this section by analysing the characteristic equation associated with (<ref>), it is h(λ) def=λ + a + b e^-λ r + c e^-λσ=0. Letting λ = μ + iω with ω≠ 0, and separating real and imaginary parts we have μ +a = - b e^-μ rcos (ω r ) -c e^-μσcos ( ωσ ), ω =b e^-μ rsin (ω r )+c e^- μσsin ( ωσ) . We are interested in the existence of purely imaginary roots of (<ref>), hence we do λ = iω in (<ref>) obtaining a =-bcos (ω r) - ccos (ωσ ), ω = b sin (ω r) + c sin (ωσ ) . If a+b+c=0, then ω=0 is a solution of (<ref>) for all delays (r,σ)∈ℝ_+^2 and, therefore, we do not consider that case. Let X_+ def={ (a,b,c)∈ℝ^3: a+b+c>0 }. If r=σ=0 and (a,b,c)∈ X_+, then the zero equilibrium of (<ref>) is asymptotically stable. Also, from <cit.>, if (a,b,c)∈ X_-={ (a,b,c)∈ℝ^3: a+b+c<0 }, then the zero equilibrium of (<ref>) is unstable for all delays r≥ 0, σ≥ 0. Thus, throughout this paper we assume (A) (a,b,c)∈ X_+, (B) bc≠ 0. In order to find and to discard imaginary roots of (<ref>), we have the following lemma. Let τdef= r-σ∈ℝ. (i) The system (<ref>) has a root ω^*>0 if and only if the equation cos(ωτ)=ω^22bc+a^2-b^2-c^22bc has the same root ω^*. (ii) A simple root ω^*>0 of (<ref>) is a simple root of (<ref>). (i) Adding up to square both sides of (<ref>) we arrive to (<ref>), hence it is clear that any root of the system (<ref>) is a root of equation (<ref>). Reciprocally, let ω>0 be a root of (<ref>). Rewriting (<ref>) as 2bccos(ωτ)+c^2+b^2=ω^2+a^2, we observe that Δdef=-(2bccos(ωτ)+c^2+b^2)<0. In consequence, the system ([ -(c+bcos(ωτ)) bsin(ωτ); bsin(ωτ) (c+bcos(ωτ)) ])([ u_1; u_2 ])=([ a; ω ]) has a unique solution such that ([ u_1; u_2 ]) = 1|Δ |([ -(c+bcos(ωτ)) bsin(ωτ); bsin(ωτ) (c+bcos(ωτ)) ])([ a; ω ]), = 1√(|Δ|)([ -(c+bcos(ωτ)) bsin(ωτ); bsin(ωτ) (c+bcos(ωτ)) ])([ a/√(|Δ|); ω/√(|Δ|) ]). Note that det(1√(|Δ|)([ -(c+bcos(ωτ)) bsin(ωτ); bsin(ωτ) (c+bcos(ωτ)) ]))=-1, and from (<ref>) it follows that ([ a/√(|Δ|); ω/√(|Δ|) ])= 1√(|Δ |)√(a^2+ω^2)=1, whence we obtain that u_1^2+u_2^2=1. Therefore, there exists σ∈ [0,2π/ω) such that u_1=cos(ωσ), u_2=sin(ωσ) and, consequently ([ -(c+bcos(ωτ)) bsin(ωτ); bsin(ωτ) (c+bcos(ωτ)) ])([ cos(ωσ); sin (ωσ) ])=([ a; ω ]). The last system is equivalent to (<ref>), thus we have shown that ω solves (<ref>). In order to prove (ii), we assume now that ω is a multiple root of (<ref>). Let f_1(ω) def= -bcos (ω r) - ccos (ωσ ) and f_2(ω) def= b sin (ω r) + c sin (ωσ ), then the system (<ref>) takes the form [ a=f_1(ω),; ω=f_2(ω), ] and (<ref>) is a^2+ω^2=f_1^2(ω)+f_2^2(ω). 
If ω is a multiple root of (<ref>) it also satisfies 0=f'_1(ω) and 1=f'_2(ω), hence ω=f_1(ω)f'_1(ω)+f_2(ω)f'_2(ω), which implies that ω is a multiple root of (<ref>) and, therefore, a multiple root of (<ref>), showing (ii). The last lemma allows to find simple roots of (<ref>) by studying (<ref>). In order to find these roots, we enunciate the following. For (<ref>) we have (i) If bc>0 and |b-c|≤ |a|<|b+c|, then for any τ∈ℝ there are smaller values of ω^*>0, k_τ∈ℕ _0, and σ̅∈ [0,2π/ω^*) such that for any k≥ k_τ, we have delays [ σ_k def=σ̅+2kπω^*> 0,; ; r_k def=τ+σ_k> 0, ] such that (r_k, σ_k,ω^*) solves (<ref>). Also, for each (r_k,σ_k), ω^* is a simple root. (ii) If bc>0 and |a|<|b-c|, then for all |τ|∈[0,√((b-c)^2-a^2)bc)∪⋃_n=0^∞[2nπ√((b-c)^2-a^2),(2n+1)π√((b-c)^2-a^2)], the conclusion in (i) is valid. (iii) If bc<0, |b+c|<|a|<|b-c|, there is a value τ^*> 0 such that for |τ|< τ^* the system (<ref>) has not any root ω>0. And if |τ|>τ^*, the conclusion in (i) is valid. (iv) If bc<0 and |a|<|b+c|, then for all |τ|∈[0,√((b+c)^2-a^2)|bc|)∪⋃_k=0^∞[(2k-1)π√((b+c)^2-a^2),2kπ√((b+c)^2-a^2)], the conclusion in (i) is valid. Let g(ω) def=ω^22bc+a^2-b^2-c^22bc, assuming bc>0 and |b-c|≤ |a|<|b+c| we have that g is an strictly increasing function for ω∈[0,∞) such that -1≤ g(0)<1 and thus, g(ω^*)=cos(ω^* τ) for a first positive ω^* and in this intersection the slopes have opposite signs, showing the simplicity of ω^*. Now, from Lemma <ref>, if ω^*>0 solves (<ref>), then Δ< 0 and we can rewrite (<ref>) as ([ cos(ω^*σ); sin(ω^*σ) ])=([ bω^*sin(ω^*τ)-a(c+bcos(ω^*τ))ω^* ^2+a^2; absin(ω^*τ)+ω^*(c+bcos(ω^*τ))ω^* ^2+a^2 ]). As both sides of the last equation are unitary vectors, there is a first σ̅∈ [0,2π/ω^*) such that (<ref>) holds. Hence, by the periodicity of the left side of (<ref>), any σ_k=σ̅+2kπ/ω^* with k∈ℕ_0, satisfies it. Therefore we can choose k_τ large enough such that r_kdef=τ+σ_k≥ 0 for all k≥ k_τ, and (i) holds. Assuming bc>0 and |a|<|b-c|, then g(0)<-1 and g(ω) takes all the values in [-1,1] when ω∈ [ω_1,ω_2] with g(ω_1,2)=∓ 1. Hence, by continuity, the equation (<ref>) has for each τ∈ℝ at least one root ω∈ [ω_1,ω_2]. Fixing τ and denoting the first root by ω^*, in order to have a simple first root ω^* for (<ref>), we need to avoid ω^*bc=-τsin(ω^* τ). Since ω^*≥ω_1, if (<ref>) is true, then ω_1bc = ω_1|bc|≤ω^*|bc| =|τ| |sin( ω^*τ)| ≤ |τ|, which is imposible if |τ|< ω_1bc=√((b-c)^2-a^2)bc. Also, if |τ|≠ 0 satisfies 2kπ|τ|≤ω_1<(2k+1)π|τ|, then ω^*∈[ω_1, (2k+1)π|τ|) and -τsin(ω^* τ)<0<g(ω^*). Consequently, from (<ref>) if |τ| ∈⋃_k=0^∞[2kπω_1,(2k+1)πω_1), then ω^* is a simple root and the conclusion in (i) is valid following the same proof. Assuming bc<0 and |b+c|<|a|<|b-c|, then the function g is strictly decreasing on [0,∞) and g(0)∈ (-1,1), hence the conclusion follows easily. Assuming bc<0 and |a|<|b+c|, we have that g is strictly decreasing on [0,∞), and g(0)>1. By continuity, clearly (<ref>) has at least one solution in [ω_1,ω_2] with g(ω_1,2)=± 1. Then, we consider ω^* to be the first root of (<ref>). In order to guarantee the simplicity of this first root, we need to avoid (<ref>) as above. In this case we take |τ|<ω_1|bc|=√((b+c)^2-a^2)|bc|. Then, |τsin(τω)|<ω_1|bc|≤ω|bc| for all ω∈ [ω_1,ω_2], and (<ref>) is impossible. 
Also, if ω_1∈[(2n-1)π|τ|,2nπ|τ|] for some n∈ℕ, then as in the first case, the slopes at the point ω^* for the left and the right sides of (<ref>) have opposite signs, hence ω^* is a simple root if |τ|∈[0,√((b+c)^2-a^2)|bc|)∪⋃_k=0^∞[(2k-1)π√((b+c)^2-a^2),2kπ√((b+c)^2-a^2)], and the proof follows as in the case (i). If the parameters a,b,c satisfy either (A), bc>0 and |a|> |b+c| or (A), bc<0 and |a|≥ |b-c|, due to the non-existence of non-zero imaginary roots in (<ref>), it can be easily verified the absolute local stability of the trivial solution to (<ref>). In Figure <ref>, we show the different possibilities given in Lemma <ref> in terms of the intersections between the curves y=g(ω) and y=cos(ωτ). If τ∈ℝ is large enough, there could be more than one purely imaginary root of equation (<ref>). For this reason, since we have to study the existence of a pair of simple characteristic roots of (<ref>) when (<ref>) has no other roots with zero real parts (see, e.g., <cit.>), we will slightly strengthen the assumptions of Lemma <ref> to restrict ourselves to the case where the uniqueness of the simple root is guaranteed for some pair of delays (r_0,σ_0). Let ω^* be the first solution of equation (<ref>) and define τ^* def=min{τ>0 : equations (<ref>) and (<ref>) are fulfilled for ω=ω^* }, to have the following lemma. Let τ∈ℝ and |τ|<τ^* be fulfilled, then there exists a pair of delays (r, σ)=(r_0,σ_0) on the curve σ=r-τ in the (r, σ)–plane within ℝ_+^2 such that the following assertions are valid: (i) If bc>0 and |b-c|≤ |a|<|b+c|, then there exists a unique pair of simple purely imaginary roots ± iω^* of (<ref>) at (r_0, σ_0) where ω^* ∈(0, √((b+c)^2-a^2)]. (ii) If bc>0 and |a|<|b-c|, then there exists a unique pair of simple purely imaginary roots ± iω^* of (<ref>) at (r_0, σ_0) where ω^*∈[√((b-c)^2-a^2), √((b+c)^2-a^2)]. (iii) If bc<0 and |a|<|b+c|, then there exists a unique pair of simple purely imaginary roots ± iω^* of (<ref>) at (r_0,σ_0) where ω^*∈[ √((b+c)^2-a^2), √((b-c)^2-a^2)]. Assuming bc>0 and |b-c|≤ |a|<|b+c|, then ω^* ∈ [0, ω_1] where g(ω_1)=1 and therefore g(ω^*) take the values in [g(0),1]. Put p(ω^*) def= g(ω^*)-cos(ω^* τ). Since p(0)<0≤ p(ω_1), then there exists a solution ω^*∈ (0, ω_1] to (<ref>). On the other hand suppose that there exist two solutions to (<ref>), then p'(ω^*)=0 for some ω^*∈ (0, ω_1 ] which contradicts the fact that ω^*/bc+τsin(ω^*τ)>0 for all ω^* ∈ (0,ω_1] and |τ|<τ^*, hence the uniqueness of the solution holds and the conclusion follows from Lemma <ref>. The proofs in the other cases are analogous, therefore, they are omitted. For τ∈ℝ fixed, now we prove the existence of a unique pair of solutions λ=± i ω^* to (<ref>) crossing transversally the imaginary axis. To check the Transversality Condition we have the following lemma. Assume that the parameters a,b,c and τ satisfy any of the conditions in Lemma <ref> and let (r_0,σ_0,ω^*) denote the values given in that Lemma for some fixed τ. Let λ(r)=μ(r)+i ω(r) be a root of (<ref>) such that μ(r_0)=0 and ω(r_0)=ω^*. Then μ'(r_0)>0. Let λ∈ℂ be a solution of (<ref>), then dh(λ )/dλ= 1-b r e^-λ r-c σ e^-λσ. For a fixed value of τ, we have dh(iω^*)/dλ= 1+a (r_0-τ) - bτcos(ω^* r_0) + i[ω^* (r_0-τ)+ bτsin(ω^* r_0)]. Therefore, d/dλ Re (h(i ω^*)) = 1+a(r_0-τ) - bτcos(ω^* r_0), d/dλ Im (h(iω ^*)) = ω^* (r_0-τ) + bτsin(ω^* r_0). In addition λ (r) satisfies λ(r)+a + e^-λ(r) r f(λ(r)) =0, where f(λ(r))=b+ce^λ(r) τ. 
Taking the derivative with respect to r it follows that λ '(r) (1- bτ e^-λ(r) r+(τ-r)f(λ(r))e^-λ(r) r) = λ(r) f(λ (r)) e^-λ (r)r. From (<ref>) we have that f(λ(r))e^-λ(r) r=-(λ(r)+a) and, therefore λ '(r)= - λ (r)(λ(r)+a)/1-bτ e^-λ(r)r+ (r-τ)(λ(r)+a). Since λ(r)=μ (r)+iω (r) is a root of (<ref>) such that μ(r_0)=0 and ω (r_0)= ω^*, then μ'(r_0) = ω^*(ω^*-bτ (ω^*cos(ω^*r_0)+ asin(ω^*r_0)))/(1+a (r_0-τ) - bτcos(ω^* r_0))^2+( ω^* (r_0-τ) + bτsin(ω^* r_0))^2. From (<ref>) it follows that μ'(r_0)= ω^*(ω^*+bc τsin( ω^* τ))/(1+a (r_0-τ) - bτcos(ω^* r_0))^2+( ω^* (r_0-τ)+ bτsin(ω^* r_0))^2. Under any of the assumptions of Lemma <ref> we have that ω^*+bc τsin( ω^* τ)>0, which leads us to conclude that μ'(r_0)>0. In order to study the Hopf bifurcation of (<ref>) by choosing one of the delays as the bifurcation parameter, specifically r, we need to avoid the threesomes (r_0, σ_0,ω^*) given in Lemma <ref> that satisfy r_0σ_0=0. To this end (see, e.g., <cit.>) we consider the following assumptions (C.1) a>c-b, (C.2) a>b-c. Note that if conditions (A), (B) and (C.1) or (C.2) hold, then the number of right half plane zeros of (<ref>) is zero for r=σ=0 and with the delays (r,σ) ∈ℝ^2_+ varying in the semi axis (r,0) or the semi axis (0,σ) on the (r, σ)–plane. Lemma <ref> introduces the existence of a unique pair of simple and purely imaginary roots to (<ref>) for some pair of delays (r_0,σ_0)∈ℝ^2_+. The natural question arises: How to calculate the values of the pair (r_0,σ_0)? To answer this question, note that from Lemma <ref> it follows that for ω^*>0 there exists a sequence { r_k }_k∈ℕ such that ω^* satisfies (<ref>), namely r_k= 1/ω^*[ arccos(-cω^*sin(ω^*τ)+a(b+ccos(ω^*τ))/ω^* ^2+a^2)+2kπ] (k=0, 1, 2, … ). Once we have determined ω^* for τ∈ℝ fixed, then formulae (<ref>) and (<ref>) allows us to calculate the pair of delays (r_0, σ_0) in the following manner. Without loss of generality assume that the delay r is the bifurcation parameter and define r_0 def=min{ r_k: k≥ k_τ and i ω^* is a simple and unique root of (<ref>)} , σ_0=σ, where r_k, σ and k_τ are given in the proof of Lemma <ref>, i.e., k_τ is such that r_k=τ+σ_k≥ 0 for all k≥ k_τ, σ̅∈ [0,2π/ω^*) is the first value such that (<ref>) holds, and r_k satisfy (<ref>). Assume that the parameters a, b, c and τ satisfy conditions (C.1) or (C.2) together with any of the hypotheses of Lemma <ref>. Then, we propose the following simple procedure to find a unique pair of simple characteristic roots μ(r)± iω(r) to (<ref>) crossing transversally the imaginary axis at a first point r=r_0. For τ≥ 0: (1) Form the curve σ=r-τ in the (r, σ)–plane within ℝ_+^2 and initiating from the point ( τ,0) find the first values (r_0,σ_0) given in (<ref>). (2)In Lemma <ref> take (r,σ)=(r_0, σ_0) which are such that (<ref>) has a first zero crossing transversally the imaginary axis and σ_0=r_0-τ . Analogously for τ<0. If the condition (C.1) is fulfilled we can apply the above procedure for τ>0, on the other hand if (C.2) is fulfilled we can apply the procedure for τ<0. Although the main importance of our results is that they are applicable in the case τ≠ 0, note that the assumptions of our main results do not exclude the case τ=0. Indeed, for τ=0 the equation (<ref>) is λ + a + (b+c)e^-λ r=0. If λ=iω^*, then (<ref>) becomes a =-(b+c)cos (ω^* r), ω^* = (b+c) sin (ω^* r), which implies that ω^*=±√((b+c)^2-a^2). If |a|<|b+c| holds, then (<ref>) has a pair of imaginary roots ± iω^* at the sequence r_k=1/√((b+c)^2-a^2) [ arccos( -a/b+c) +2kπ] (k=0, 1, 2, … ). 
If r=0 we have that λ=-(a+b+c), therefore, if conditions (A) and |a|<|b+c| hold, then the equation (<ref>) undergoes a Hopf bifurcation at the zero equilibrium when r=σ=r_0, i.e., if r=σ then the zero equilibrium of equation (<ref>) is asymptotically stable for 0<r<r_0 and unstable for r>r_0. The above analysis corresponds to already known results (see, e.g., <cit.>). According to Lemmas <ref>, <ref> and by using Rouche's Theorem <cit.>, now on account of (<ref>) and (<ref>), we formulate our main result dealing with the stability and local Hopf bifurcations for the zero equilibrium of the equation (<ref>). The following assertions are valid: (i) If the parameters a,b and c satisfy (A), bc>0 and |a|> |b+c|, then the zero equilibrium of equation (<ref>) is locally asymptotically stable for any delays r≥ 0 and σ≥ 0. (ii) If the parameters a,b and c satisfy (A), bc<0 and |a|≥ |b-c|, then the zero equilibrium of equation (<ref>) is locally asymptotically stable for any delays r≥ 0 and σ≥ 0. (iii) Assume that the parameters a,b and c satisfy (A), bc<0 and |b+c|<|a|<|b-c|. Let, in addition, τ∈ℝ be such that |τ|<τ^*. Then, the zero equilibrium of equation (<ref>) is locally asymptotically stable for any delays r≥ 0 and σ≥ 0 such that τ=r-σ. There exists a bifurcation parameter r_0>0 such that the following assertions are valid: (i) Assume that the parameters a,b and c satisfy (A), bc>0, |b-c|≤ |a| < |b+c|, and (C.1) or (C.2). Let, in addition, τ∈ℝ be such that |τ|<τ ^ *. Then, for r∈[0, r_0) the trivial solution to (<ref>) is asymptotically stable and the equation (<ref>) undergoes a Hopf bifurcation at the trivial solution when r=r_0. (ii) Assume that the parameters a,b and c satisfy (A), bc>0, |a|<|b-c|, and (C.1) or (C.2). Let, in addition, τ∈ℝ be such that |τ|<τ ^ *. Then, for r∈[0, r_0) the trivial solution to (<ref>) is asymptotically stable and the equation (<ref>) undergoes a Hopf bifurcation at the trivial solution when r=r_0. (iii) Assume that the parameters a,b and c satisfy (A), bc<0, |a|<|b+c|, and (C.1) or (C.2). Let, in addition, τ∈ℝ be such that |τ|< τ^ *.Then, for r∈[0, r_0) the trivial solution to (<ref>) is asymptotically stable and the equation (<ref>) undergoes a Hopf bifurcation at the trivial solution when r=r_0. § DIRECTION AND STABILITY OF THE HOPF BIFURCATION In this section, by applying normal form theory, we study the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions. We refer to <cit.> for the results and notations involved. Throughout this section, we will assume that the conditions and conclusions of Theorem <ref> are fulfilled, therefore, the delay σ in (<ref>) will no longer be interpreted as a free parameter in the sense that the difference between the delays is assumed constant. Now we introduce some preliminary notation and sets that we use throughout this section. Let κ>0 and define the phase space Cdef=C([-κ , 0], ℂ) equipped with the sup norm. Consider the following Banach space BC={ϕ:[-κ, 0] →ℝ: ϕ is continuous on [-κ, 0), ∃lim_θ→ 0^-ϕ(θ) ∈ℝ}. Without loss of generality, let max(r,r-τ)=r and define the linear functional L on BC as L(r)ϕ = ∫_-r^0 ϕ (θ) dη(θ)=-aϕ(0)-bϕ(-r)-cϕ(τ-r), where η (θ) = {[ 0 θ = -r,; -b -r < θ≤τ-r,; -(b+c) τ-r <θ≤ 0,; -(a+b+c) θ > 0. ]. Introducing the new parameter α=r-r_0, we will consider L(α)ϕ= -aϕ(0)-bϕ(-(r_0+α))-cϕ( τ -(r_0+α)). 
Due to the inclusion of ℝ^n into C, instead of (<ref>) we consider the equation u̇(t)=L(α)u_t+F(u_t, α), where α∈ V,  V a neighborhood of zero in ℝ and F(u_t, α)=f(u_t, α)-L(α)u_t with f(u_t, α)=f(u(t), u(t-(r_0+α)), u(t-(r_0+α-τ))). In order to calculate the normal forms up to the third order, we will assume that the functions L: V →ℒ(C; ℝ) and f: C × V →ℝ are sufficiently differentiable such that F: C× V →ℝ is a C^3 function. Let L_0=L(0), Λ = { iω^*,- i ω^* } and P be the center space of y'(t)=L_0y_t. Decomposing C by Λ as C=P⊕ Q, we choose bases Φ, Ψ for P and P^* respectively as P = spanΦ, Φ(θ)=(φ_1(θ), φ_2(θ))= (e^iω^* θ, e^-iω^*θ), - r ≤θ≤ 0, P^*=spanΨ, Ψ(s) = col (ψ_1(s), ψ_2(s))=col (ψ_1(0)e^-iω^*s, ψ_1(0)e^iω^* s), 0≤ s ≤ r, where ψ_1(0)def=(1-L_0(θ e^iω^*θ) )^-1=(1-b τ e^-iω^*r_0+(r_0-τ)(iω^*+a))^-1. Writing the Taylor expansions L(α)=L_0+L_1(α)+1/2L_2(α)+ h.o.t. , F(v, α)=1/2F_2(v, α)+ 1/3! F_3(v, α)+h.o.t., where h.o.t means higher order terms and L_j, F_j are the jth Fréchet derivative of L and F in the variables α and (v,α), respectively, we will consider the Taylor formulas [ F_2( ϕ, 0) = a_11ϕ^2(0)+a_22ϕ ^2(-r_0) +a_33ϕ^2(-(r_0-τ )); +2a_12ϕ(0)ϕ(-r_0)+ 2a_13ϕ(0)ϕ(-(r_0-τ )); + 2a_23ϕ(-r_0)ϕ(-(r_0-τ )),; ; F_3( ϕ, 0) = b_111ϕ^3(0)+b_222ϕ^3(-r_0)+b_333ϕ^3(-(r_0-τ )); + 3b_112ϕ^2(0)ϕ(-r_0)+3b_113ϕ^2(0)ϕ(-(r_0-τ )); +3b_122ϕ(0)ϕ^2(-r_0)+3b_133ϕ(0)ϕ^2(-(r_0-τ )); + 6b_123ϕ(0)ϕ(-r_0)ϕ(-(r_0-τ )); + 3b_223ϕ^2(-r_0)ϕ(-(r_0-τ )); + 3b_233ϕ(-r_0)ϕ^2(-(r_0-τ )). ] Let E_1 = 3 ψ_1(0) ·( b_111+b_222e^-iω^*r_0 +b_333e^iω^*(τ-r_0)+b_112(e^iω^*r_0+2e^-iω^*r_0) + b_113(e^-iω^*(τ-r_0)+2e^-iω^*(r_0-τ)) +b_122(2+e^-2iω^*r_0) +b_133 (2+e^-2iω^*(r_0-τ)) +b_123(2e^-iω^*τ+2e^iω^*τ+2e^iω^*(τ-2r_0)) + b_223(e^-iω^*(r_0+τ)+2e^iω^*(τ-r_0))+b_233(2e^-iω^*r_0+2e^iω^*(2τ-r_0)), E_2 = (a_11+a_22+a_33+2a_12cos(ω^*r_0)+2a_13cos(ω^*(r_0-τ)) +2a_23cos(ω^*τ))·(a+b+c)^-1, E_3 = ψ_1(0) ·( a_11+a_12+a_13+(a_12+a_22+a_23)e^-iω^*r_0 +(a_13+a_23+a_33)e^iω^*(τ-r_0)) , E_4 = ψ_1(0)·( a_11+a_22e^-2iω^*r_0 +a_33e^2iω^*(τ-r_0)+ 2a_12e^-iω^*r_0 +2a_13e^iω^*(τ-r_0)+2a_23e^iω^*(τ-2r_0))·( a_11e^2iω^*r_0 +a_22e^iω^*r_0 +a_33 e^iω^*(r_0+τ)+a_12(1+e^3iω^*r_0)+a_13 (e^iω^*(3r_0-τ)+e^2iω^*τ) +a_23 (e^iω^*(r_0-τ)+e^iω^*(r_0+2τ))) ·( (a+2iω^*)e^2iω^*r_0+b+ce^2iω^*τ) ^-1. According to <cit.>, we can calculate the normal forms on the center manifold to (<ref>) as follows. A normal form of (<ref>) on the center manifold of the origin is given by ẋ= B x+ ( [ B_1x_1α; B_1x_2α; ]) +( [ B_2x_1^2x_2; B_2x_1x_2^2; ])+ O(|x|α ^2+|x|^4), where B_1, B_2∈ℂ and B= (iω^*, -iω^*). The change of variables w, where x_1=w_1-iw_2, x_2=w_1+iw_2, and the use of polar coordinates w_1=ρcos(β), w_2=ρsin(β) transforms (<ref>) into {[ ρ̇ = K_1 αρ + K_2ρ ^3+O(α ^2ρ + |(ρ,α )|^4),; β̇ = - ω^* + O(|(ρ, α )|), ]. where [ K_1def= Re( ψ_1(0) · (ω^*(ω^*-ia) ))=μ'(r_0)>0 ,; K_2def=16Re(E_1)+ E_2 · Re(E_3)+ 12 Re(E_4). ] By applying the classical Hopf-bifurcation theory <cit.>, now we formulate our main result dealing with the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions. Suppose that any of the assumptions of Theorem <ref> are fulfilled, then the dynamics of (<ref>) near the origin is described by (<ref>). Since K_1>0, then a supercritical Hopf bifurcation occurs at r=r_0. Moreover if K_2< 0 (resp. K_2>0) the bifurcating periodic solutions are asymptotically stable (resp. unstable) on the center manifold. Further, the period of the bifurcating periodic solution is determined by T(ε) = 2π / ω^*(ε) for some |ε|<ε_0. 
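The delay computations above reduce to a short numerical routine: find the first positive crossing frequency ω^* from g(ω)=cos(ωτ), then recover σ̄ (and hence r_0 = τ + σ̄ + 2kπ/ω^*) from the expressions for cos(ω^*σ) and sin(ω^*σ). The following Python sketch is purely illustrative (it is not part of the original work, it assumes NumPy and SciPy are available, and all function and variable names are ours); the simplicity and uniqueness checks required by the lemmas are not repeated here. Applied to the linearized Nicholson example below (a = δ = 2, b = (x^*-1)(δ+H) = 4.5, c = H = 1, τ = 0.3782), it reproduces, up to rounding, the values ω^* ≈ 4.1533 and r_0 ≈ 0.5389 reported there.

```python
import numpy as np
from scipy.optimize import brentq

def critical_delay(a, b, c, tau, omega_max=50.0, ngrid=20000):
    """First crossing frequency omega* and critical delay r_0 for
    lambda + a + b*exp(-lambda*r) + c*exp(-lambda*sigma) = 0 with sigma = r - tau."""
    g = lambda w: (w**2 + a**2 - b**2 - c**2) / (2.0 * b * c)
    f = lambda w: g(w) - np.cos(w * tau)
    # locate the first sign change of f on a fine grid, then refine with Brent's method
    grid = np.linspace(1e-8, omega_max, ngrid)
    vals = f(grid)
    idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    if len(idx) == 0:
        raise ValueError("no crossing frequency found on the scanned interval")
    w_star = brentq(f, grid[idx[0]], grid[idx[0] + 1])
    # recover sigma_bar in [0, 2*pi/omega*) from cos(omega* sigma) and sin(omega* sigma)
    denom = w_star**2 + a**2
    cos_ws = (b * w_star * np.sin(w_star * tau) - a * (c + b * np.cos(w_star * tau))) / denom
    sin_ws = (a * b * np.sin(w_star * tau) + w_star * (c + b * np.cos(w_star * tau))) / denom
    sigma_bar = np.arctan2(sin_ws, cos_ws) % (2.0 * np.pi) / w_star
    # smallest nonnegative r_k = tau + sigma_bar + 2*k*pi/omega*
    # (the simplicity/uniqueness conditions of the lemmas are not re-checked here)
    k = 0
    while tau + sigma_bar + 2.0 * np.pi * k / w_star < 0.0:
        k += 1
    return w_star, tau + sigma_bar + 2.0 * np.pi * k / w_star

# linearized Nicholson example: a = delta, b = (x*-1)(delta+H), c = H
print(critical_delay(a=2.0, b=4.5, c=1.0, tau=0.3782))  # approx (4.1533, 0.5389)
```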
§ APPLICATIONS TO POPULATION MODELS In this section, we apply the results obtained in Sections <ref> and <ref> to prove the existence of bifurcating periodic solutions to (<ref>), and the equation (<ref>) with a delayed harvesting term. The results are illustrated by numerical simulations. §.§ Nicholson's model We consider the Nicholson's blowflies model (<ref>) described in Section <ref>. Here δ >0, H>0, P>0, r≥ 0, σ≥ 0, and we consider (<ref>) subject to the following non-negative initial condition and positive initial value x(t)=φ(t), φ(t)≥ 0, -max{ r,σ}≤ t ≤ 0, x(0)=x_0>0. It is well-known that if σ=0, then every solution x to the initial value problem (<ref>), (<ref>) is positive for t≥ 0. However, if σ>0 there exists a non-negative initial condition such that the solution to the initial value problem (<ref>), (<ref>) becomes negative at some t_0>0 (see, e.g., <cit.>). Therefore, since our context is biological, below we choose the appropriate initial conditions to present a numerical simulation of two positive solutions to (<ref>) (see Figure <ref> below). According to Theorem <ref>, now we formulate a stability criterion for the zero equilibrium of the model (<ref>). The following assertions are valid: (i) If 0<H<δ and P≤δ-H, then the zero equilibrium of equation (<ref>) is locally asymptotically stable for any delays r≥ 0 and σ≥ 0. (ii) Assume that 0<H≤δ and δ-H<P<δ+H, or 0<δ<H and H-δ<P<δ+H. Let, in addition, τ∈ℝ be such that |τ|<τ^*. Then, the zero equilibrium of (<ref>) is locally asymptotically stable for any delays r≥ 0 and σ≥ 0 such that τ=r-σ. If P≤δ +H, then (<ref>) has only the zero equilibrium. If P>δ + H, then the zero equilibrium is unstable and there exists the following positive equilibrium of (<ref>) x^*=ln( P/δ + H). Therefore, the condition P>δ +H will be assumed. Furthermore, in order to apply the results of the paper, we will assume that the delays in (<ref>) are such that σ=r-τ with τ∈ℝ. Let x(t)=x^*+u(t), then (<ref>) becomes u̇(t)=-δ u(t)-Hu(t-(r-τ))+(δ + H)[u(t-r)e^-u(t-r)+x^*(e^-u(t-r)-1)]. The linearization of (<ref>) around u=0 is ẏ(t)=-δ y(t)-(x^*-1)(δ + H)y(t-r)-Hy(t-(r-τ)). According to Theorems <ref>, <ref> and <ref>, now we formulate our main result dealing with the stability of the positive equilibrium of (<ref>), and the existence of local Hopf bifurcations at x=x^*. The following assertions are valid: (i) If 0<H< δ and 1< x^* < 2δ / (δ +H), then the positive equilibrium x^*>0 of equation (<ref>) is locally asymptotically stable for any delays r≥ 0 and σ≥ 0. (ii) If 0<H<δ and 2H/(δ+H) ≤ x^*<1, then the positive equilibrium x^*>0 of equation (<ref>) is locally asymptotically stable for any delays r≥ 0 and σ≥ 0. (iii) Assume that 0<H≤δ and x^*< 2H/(δ+H), or 0<δ<H and x^*< 2δ/(δ +H). Let, in addition, τ∈ℝ be such that |τ|<τ^*. Then, the positive equilibrium x^*>0 of equation (<ref>) is locally asymptotically stable for any delays r≥ 0 and σ≥ 0 such that τ=r-σ. There exists a bifurcation parameter r_0>0 such that the following assertions are valid: (i) Assume that 0<H≤δ and 2δ/(δ+H) <x^*≤ 2, or 0<δ<H and 2H/(δ+H)≤ x^* ≤ 2. Let, in addition, τ∈ℝ be such that |τ|<τ^*. Then, the equation (<ref>) undergoes a supercritical Hopf bifurcation at x=x^* when r = r_0. (ii) Assume that x^*>2, or 0<δ<H and 1<x^*< 2H/(δ+H). Let, in addition, τ∈ℝ be such that |τ|<τ^*. Then, the equation (<ref>) undergoes a supercritical Hopf bifurcation at x=x^* when r=r_0. (iii) Assume that 0<δ<H and 2δ/(δ+H)<x^*<1. Let, in addition, τ∈ℝ be such that |τ|<τ^*. 
Then, the equation (<ref>) undergoes a supercritical Hopf bifurcation at x=x^* when r=r_0. Furthermore, in any of the above cases, if K_2<0 (resp. K_2>0), then the bifurcating periodic solutions are asymptotically stable (resp. unstable) on the center manifold. Now we consider a particular case of (<ref>), namely the equation ẋ(t)=-2 x(t)-x(t-(r-τ))+3e^2.5x(t-r)e^-x(t-r). Here the positive equilibrium is x^*=2.5. As usual, by a solution to (<ref>) we understand a continuously differentiable function x that satisfies the problem (<ref>), (<ref>) for t≥ 0. According to Theorem <ref> there exists a parameter r_0>0 such that all solutions to (<ref>) tend asymptotically to the positive equilibrium for every r<r_0, and for r= r_0 a Hopf bifurcation occurs at x^*=2.5. On the other hand, the stability of the bifurcating periodic solutions is determined by the sign of K_2. In particular, for τ=0.3782, we obtain ω^*=4.1533, r_0=0.5389 and K_2=-0.3573, i.e., the bifurcating periodic solution to (<ref>) is asymptotically stable on the center manifold. In Figure <ref> is showed that for r=r_0 a supercritical Hopf bifurcation occurs at x^*=2.5. §.§ Mackey-Glass model Based on the Mackey-Glass model (<ref>) described in Section <ref>, we propose the following Mackey-Glass model with a delayed harvesting term ẋ(t)=-δ x(t)+ P/1+x^n(t-r)-Hx(t-σ), where δ>0, H>0, P>0, n>0, r≥ 0, σ≥ 0. It is easy to check that (<ref>) has a unique positive equilibrium, namely x=x_* which satisfies x_*^n+1+x_*=P/δ+H. In (<ref>) we assume that σ=r-τ with τ∈ℝ. Let x(t)=x_*+u(t), then (<ref>) becomes u̇(t)=-δ u(t)-Hu(t-(r-τ))+P/1+(u(t-r)+x_*)^n-(δ+H)x_*. The linearization of (<ref>) around u=0 is ẏ(t)=-δ y(t)-Pnx_*^n-1/(x_*^n+1)^2y(t-r)-Hy(t-(r-τ)). According to Theorems <ref>, <ref> and <ref>, we conclude the following. There exists a bifurcation parameter r_0>0 such that the following assertions are valid: (i) If Pnx_*^n-1/(x_*^n+1)^2 + H <δ, then the positive equilibrium x_*>0 of equation (<ref>) is locally asymptotically stable for any delays r≥ 0 and σ≥ 0. (ii) Assume that Pnx_*^n-1/(x_*^n+1)^2+H>δ. Let, in addition, τ∈ℝ be such that |τ|<τ^*. Then, the equation (<ref>) undergoes a supercritical Hopf bifurcation at x=x_* when r=r_0. Furthermore, if K_2<0 (resp. K_2>0), then the bifurcating periodic solutions are asymptotically stable (resp. unstable) on the center manifold. According to the notation of Sections <ref> and <ref>, we finish this section presenting in Table <ref> the coefficients of the Taylor expansions around the positive equilibrium of the models (<ref>) and (<ref>). These coefficients are useful to determinate the sign of K_2 given in Theorems <ref> and <ref>. § DISCUSSION In this paper, we study a general class of autonomous scalar differential equations with two discrete delays. We have shown that the difference between the delays plays an important role in determining the stability of the system. At first, we introduce a new method to analyse the distribution of the roots of the corresponding characteristic equation and, by choosing a delay as the bifurcation parameter, we prove the existence of local Hopf bifurcations about the zero equilibrium. For some critical parameter sets, the absolute local stability of the zero equilibrium is also guaranteed. The normal forms are calculated to determine the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions. 
By applying the theoretical results obtained, we prove the existence of bifurcating periodic solutions for a Nicholson's blowflies model and a Mackey-Glass model, both with a delayed harvesting term. We conclude that when the birth rate, death rate and capture rate are regulated, the maturation delay leads to Hopf-bifurcations as long as the difference between the maturation delay and the capture delay remains constant. The numerical simulations are presented illustrating the results. It is worth mentioning here that Theorems <ref>, <ref> and <ref> can be applied to a wide class of biological models with two delays as long as the assumptions of these theorems are satisfied. § ACKNOWLEDGEMENTS J. Oyarce acknowledges support from Chilean National Agency for Research and Development (PhD. 2018–21180824) and A. Gómez the support from the research project 2120134 IF/R (University of Bío-Bío). The authors thank L. M. Villada for the advice in the numerical implementations of this work. § DECLARATIONS OF COMPETING INTEREST The authors declare they have no competing financial or personal interests that could influence this paper. § FUNDING This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profits sectors. § ORCID Adrián Gómez https://orcid.org/0000-0002-2978-4465 José Oyarce https://orcid.org/0000-0002-0974-3463 jose Amster P. Amster, A. Déboli, Existence of T–periodic solutions of a generalized Nicholson's blowflies model with a nonlinear harvesting term, App. Math. Lett. 25(9) (2012) 1203-1207. Qi Q. An, E. Beretta, Y. Kuang, C. Wang, H. Wang, Geometric stability switch criteria in delay differential equations with two delays and delay dependent parameters, J. Diff. Eqs.266(11) (2019) 7073-7100. Belair J. Bélair, S.A. Campbell, Stability and bifurcations of equilibria in a multiple-delayed differential equation, SIAM J. Appl. Math. 54(5) (1994) 1402-1424. berbrav2 L. Berezansky, E. Braverman, A note on stability of Mackey-Glass equations with two delays, J. Math. Anal. Appl. 450(2) (2017) 1208-1228. braverman L. Berezansky, E. Braverman, L. Idels, Nicholson's blowflies differential equations revisited: Main results and open problems, Appl. Math. Model. 34 (2010) 1405-1417. bravd E. Braverman, D. Kinzebulatov, Nicholson's blowflies equation with a distributed delay, Can. Appl. Math. Q. 14(2) (2006) 107-128. Braddock R.D. Braddock, P. van den Driessche, On a two lag differential delay equation, J. Austral. Math. Soc. Ser. B. 24(3) (1983), 292-317. ChowHale S.N. Chow, J.K. Hale, Methods of bifurcation theory, Springer, New York, 1982. Dieudonne J. Dieudonné, Foundations of Modern Analysis, Academic Press, New York, 1960. Faria T. Faria T, L.T. Magalhães, Normal forms for retarded functional differential equations with parameters and applications to Hopf bifurcation, J. Diff. Eqs. 122(2) (1995) 181-200. Gopalsamy K. Gopalsamy, Global stability in the delay-logistic equation with discrete delays, Houston J. Math. 16 (1990) 347-356. Gopalsamy3 K. Gopalsamy, Stability and oscillations in delay differential equations of population dynamics, Kluwer Academic Publishers, Dordrecht, 1992. Gopalsamy2 K. Gopalsamy, M.R.S Kulenović, G. Ladas, Oscillations and global attractivity in models of hematopoiesis, J. Dyn. Diff. Eqs. 2(2) (1990) 117-132. Gu K. Gu, S.L. Niculescu, J. Chen, On stability crossing curves for general systems with two delays, J. Math. Anal. Appl. 311(11) (2005) 231-252. Gurney W.S.C Gurney, S.P. Blythe, R.M. 
Nisbet, Nicholson's blowflies revisited, Nature. 287 (1980) 17-21. Gyori I. Györi, G. Ladas, Oscillation theory of delay differential equations with applications, Oxford University Press, New York, 1991. halegeometric J.K. Hale, W. Huang, Global geometry of the stable regions for two delay differential equations, J. Math. Anal. Appl. 178 (1993) 344-362. Hale2 J.K. Hale, L.T. Magalhães, W.M. Oliva, Dynamics in infinite dimensions, Second edition, Springer, New York, 2002. Hale1 J.K. Hale, S.M. Verduyn Lunel, Introduction to functional differential equations, Springer, New York, 1993. Hassard B.D. Hassard, N.D. Kazarinoff, Y.H. Wan, Theory and applications of Hopf bifurcation, Cambridge University Press, Cambridge, 1981. Huang2019 C. Huang, X. Yang, J. Cao, Stability analysis of Nicholson's blowflies equation with two different delays, Math. Comput. Simul. 171(9) (2020) 201-206. Lainscsek1 C. Lainscsek, L. Schettino, P. Rowat, E. van Erp, D. Song, H. Poizner, Nonlinear DDE analysis of repetitive hand movements in Parkinson's disease, pp. 421-427 in book Applications of Nonlinear Dynamics, Understanding Complex Systems (V. In, P. Longhini and A. Palacios eds.), Springer-Verlag, Berlin Heidelberg, 2009. Lainscsek2 C. Lainscsek, A.L. Sampson, R. Kim, M.L. Thomas, K. Man, X. Lainscsek, et. al, Nonlinear dynamics underlying sensory processing dysfunction in schizophrenia, Proc. Natl. Acad. Sci.116 (2019) 3847-3852. RuanWeiLi X. Li, S. Ruan, J. Wei, Stability and bifurcation in delay-differential equations with two delays, J. Math. Anal. Appl. 236(2) (1999) 254-280. Wei2021 Y. Liu, J. Wei, Bifurcation analysis in delayed Nicholson blowflies equation with delayed harvest, Nonlinear Dyn. 105 (2021) 1805-1819. Mackey1 M. Mackey, L. Glass, Oscillation and chaos in physiological control systems, Science 197 (1977) 287-289. Nicholson A.J. Nicholson, An outline of the dynamics of animal populations, Aus. J. Zool. 2(1) (1954) 9-65. Piotrowska M.J. Piotrowska, A remark on the ODE with two discrete delays, J. Math. Anal. App. 329(1) (2007) 664-676. ruan S. Ruan, J. Wei, On the zeros of transcendental functions with applications to stability of delay differential equations with two delays, Dynam. Cont. Discr. Impul. Sys. Series A. 10(6) (2003) 863-874. Hsmith H. Smith, An introduction to delay differential equations with applications to the life sciences, Springer, New York, 2011. Wei2005 J. Wei, M.Y. Li, Hopf bifurcation analysis in a delayed Nicholson blowflies equation, Nonlinear Anal. 60(7) (2005), 1351-1367. Wei2007 J. Wei, D. Fan, Hopf bifurcation analysis in a Mackey-Glass system, Internat. J. Bifur. Chaos 17 (2007) 2149-2157.
http://arxiv.org/abs/2306.03592v2
20230606112855
A sketch-and-select Arnoldi process
[ "Stefan Güttel", "Igor Simunec" ]
math.NA
[ "math.NA", "cs.NA", "65F10, 65F50" ]
Statistical inference for sketching algorithms Ryan P. BrowneDepartment of Statistics & Actuarial Science, University of Waterloo, Waterloo, Ontario, N1E 2V1, Canada. Email: [email protected] and Jeffrey L. AndrewsDepartment of Statistics, University of British Columbia, Okanagan Campus, Kelowna, BC, V1V 1V7, Canada. Email: [email protected] July 31, 2023 ==================================================================================================================================================================================================================================================================================================================== A sketch-and-select Arnoldi process to generate a well-conditioned basis of a Krylov space at low cost is proposed. At each iteration the procedure utilizes randomized sketching to select a limited number of previously computed basis vectors to project out of the current basis vector. The computational cost grows linearly with the dimension of the Krylov space. The subset selection problem for the projection step is approximately solved with a number of heuristic algorithms and greedy methods used in statistical learning and compressive sensing. Krylov method, Arnoldi process, randomized sketching 65F10, 65F50 TODO. List of things to consider adding/modifying for journal version: * add Greedy Algorithm of Natarajan (or check if it is equivalent to one of the other methods like OP; it doesn't seem to be numerically equivalent); Natarajan also refers to old Stewart and Golub paper on pivoted QR * what about Newton-type basis polynomials? Can we use sketching to construct p_j(z)=α_j (z-σ_j) p_j-1(z), v_j+1 = p_j(A)b by short recurrence so some near-optimality holds? (Note: Arnoldi polynomials minimize p_j(A)b.) * methods seem to work much better for dense random than for sparse (e.g., unit vectors). I found that perturbing sparse by a tiny bit often improves behavior significantly. § INTRODUCTION The Arnoldi process <cit.> is a key component of many Krylov subspace methods for large-scale numerical linear algebra computations, including solving linear systems of equations and eigenvalue problems with nonsymmetric matrices A∈ℝ^N× N; see, e.g., <cit.>. The Arnoldi process is also used for solving least squares problems, approximating matrix functions or matrix equations, and in model order reduction, to name just a few other applications. Given a starting vector ∈ℝ^N and an integer m≪ N, the Arnoldi process iteratively constructs an orthonormal basis {_1,_2,…,_m} of the Krylov space 𝒦_m(A,) := {, A, …, A^m-1}. More precisely, given j orthonormal basis vectors _1 := /,_2,…,_j, the next basis vector is obtained by orthogonalizing _j := A_j against all previous vectors, _j := _j - ∑_i=1^j h_i,j_i, h_i,j := _i^T _j, and then setting _j+1:=_j/h_j+1,j with h_j+1,j= _j. Collecting the basis vectors into V_m = [_1,_2,…,_m]∈ℝ^N× m and the orthogonalization coefficients into H_m = [h_i,j]∈ℝ^m× m, the Arnoldi process generates an Arnoldi decomposition A V_m = V_m H_m + h_m+1,m_m+1_m^T, where _m∈ℝ^m denotes the m-th canonical unit vector. By construction, H_m is an upper-Hessenberg matrix. In terms of arithmetic cost, the Arnoldi process requires m matrix-vector products A _j, as well as m(m+1)/2 inner products and vector operations (“” in BLAS-1 naming), for a total of O(m·(A) + N m^2) arithmetic operations. For sufficiently sparse A, this cost will be dominated by the N m^2-term for the orthogonalization. 
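For reference, the orthogonalization loop just described can be written in a few lines. The following NumPy sketch is an illustration of ours rather than the authors' code (their implementations are in MATLAB); it uses the modified Gram–Schmidt form of the projection, a standard numerically preferable variant, and makes the cost structure explicit: one matrix–vector product and O(Nj) orthogonalization work at iteration j.

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi process with modified Gram-Schmidt; returns V (N x (m+1)) and H ((m+1) x m)
    such that A @ V[:, :m] = V @ H up to rounding errors."""
    N = b.shape[0]
    V = np.zeros((N, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]                       # matrix-vector product
        for i in range(j + 1):                # orthogonalize against all previous basis vectors
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:                # lucky breakdown: the Krylov space is invariant
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```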
There are at least two possible ways to reduce this cost. The first one is to restart the Arnoldi process after m iterations, using =_m+1 as the starting vector for the next cycle. Such a restarting approach is particularly natural in the context of solving linear systems of equations (as there exists a linear error equation A= where is the residual), but it can also be used for eigenvalue problems <cit.> or matrix function computations <cit.>. Of course, the combined Krylov basis computed after ℓ>1 restarts is no longer orthonormal and this usually leads to a delayed convergence in restarted Krylov methods. The second, more recently proposed approach to reduce the arithmetic cost of the Arnoldi process is to employ randomized sketching; see, e.g., <cit.>. The key tool of sketching is an embedding matrix S∈ℝ^s× N with m < s≪ N that distorts the Euclidean norm · of vectors in a controlled manner <cit.>. More precisely, given a positive integer m and some ε∈ [0,1), we assume that S is such that for all vectors  in the Krylov space 𝒦_m+1(A,), (1-ε) ^2 ≤ S ^2 ≤ (1+ε) ^2. The matrix S is called an ε-subspace embedding for 𝒦_m+1(A,). Condition <ref> can equivalently be stated with the Euclidean inner product <cit.>: for all ,∈𝒦_m+1(A,), ⟨, ⟩ - ε·≤⟨ S, S⟩≤⟨, ⟩ + ε·. In practice, such a matrix S is not explicitly available and we hence have to draw it at random to achieve <ref> with high probability. There are several ways to construct a random matrix S with this property, see for instance the discussions in <cit.> or <cit.>. There are two main ways sketching can be employed within the Arnoldi process. The first one, proposed and applied in <cit.>, is to replace the inner products computed in <ref> by inner products on sketched vectors _j := _j - ∑_i=1^j h_i,j_i, h_i,j := (S_i)^T (S_j). Effectively, the process then computes an orthonormal sketched basis S V_m+1. Using an efficient subspace embedding such as the subsampled random cosine or Fourier transform <cit.> requiring O(Nlog s) operations when applied to a single vector, the overall complexity is now O(m·(A) + mNlog s+ N m^2). Even though the cost of computing all (m+1)m/2 inner products is reduced from N (m+1)m/2 to s (m+1)m/2, there is still a quadratic dependency on m. On the other hand, it follows from <cit.> that (1-ε/1+ε)^1/2(SV_m+1) ≤(V_m+1) ≤(1+ε/1-ε)^1/2(SV_m+1), so the computed Krylov basis V_m+1 will be close to orthonormal provided that ε is sufficiently small (i.e., s is sufficiently large which, in practice, means choosing s between 2m and 4m). Alternatively, one may give up completely on the aim of computing a (near) orthonormal Krylov basis and modify the target algorithm to deal with the non-orthogonality gracefully. This has been proposed in <cit.> for sketched GMRES and Rayleigh–Ritz extraction of eigenvalues, and in <cit.> for matrix function computations. One of the most straightforward approaches to generate the Krylov basis with a reduced number of projection steps[Mathematically, the term “orthogonalization” is no longer adequate when the basis is non-orthogonal, so we use the term projection step to refer to a vector operation in truncated Arnoldi.] is the truncated Arnoldi procedure. Let a truncation parameter k be given, then in place of <ref> we use the iteration _j := _j - ∑_i=max{ 1, j+1-k}^j h_i,j_i, h_i,j := _i^T _j. Alternatively, this same truncated Arnoldi procedure can be combined with sketching by replacing the coefficients h_i,j by their sketched counterparts h_i,j := (S_i)^T (S_j). 
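The interplay of the two ideas can be made concrete with a small sketch. The code below is purely illustrative and not the authors' implementation: it runs the truncated iteration above and, when an embedding dimension s is supplied, computes the projection coefficients from sketched vectors. For simplicity the embedding is a dense Gaussian matrix, which serves the same purpose for illustration but is more expensive to apply than the subsampled trigonometric transforms mentioned above; all names are ours.

```python
import numpy as np

def truncated_arnoldi(A, b, m, k, s=None, rng=np.random.default_rng(0)):
    """Truncated Arnoldi with window k; if s is given, the projection coefficients
    are computed from sketched vectors (dense Gaussian embedding for simplicity)."""
    N = b.shape[0]
    S = None if s is None else rng.standard_normal((s, N)) / np.sqrt(s)
    sk = (lambda v: v.copy()) if S is None else (lambda v: S @ v)
    V = np.zeros((N, m + 1))
    H = np.zeros((m + 1, m))
    SV = []
    V[:, 0] = b / np.linalg.norm(b)
    SV.append(sk(V[:, 0]))
    for j in range(m):
        w = A @ V[:, j]
        sw = sk(w)                                   # (sketch of the) unprojected vector A v_j
        for i in range(max(0, j - k + 1), j + 1):    # project out only the k latest basis vectors
            H[i, j] = SV[i] @ sw
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)              # simple Euclidean normalization
        V[:, j + 1] = w / H[j + 1, j]
        SV.append(sk(V[:, j + 1]))
    return V, H
```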
A key benefit of truncation is that the O(Nm^2) cost of the orthogonalization is reduced to O(Nmk), i.e., it grows linearly with the Krylov basis dimension m. The truncated Arnoldi procedure is likely inspired by the Lanczos process for a symmetric matrix A, which is mathematically equivalent to truncated Arnoldi with k=2. The Faber–Manteuffel theorem <cit.> gives a complete characterization of the matrices A for which there is a short-term recursion that generates an orthogonal set of Krylov basis vectors. The truncated Arnoldi procedure first appeared in the context of eigenvalue problems <cit.> and linear systems <cit.>. For computations involving matrix functions, truncated Arnoldi has been used for the matrix exponential in <cit.> and for more general matrix functions in  <cit.>. We also refer the reader to <cit.> and <cit.>. Unfortunately, the Krylov bases generated by truncated Arnoldi (with and without sketching) can become severely ill-conditioned even for moderate m. In this paper we propose and test an alternative approach which we call the sketch-and-select Arnoldi process. The key idea is simple: instead of projecting each new Krylov basis vector against the k previous basis vectors, we use the sketched version of the Krylov basis to identify k candidates for the projection. We show that this problem is related to a sparse least squares approximation problem that has been studied in statistical learning and compressive sensing. We then demonstrate with performance profiles that the sketch-and-select Arnoldi process with a simple select strategy to determine the candidate vectors often leads to much better conditioned Krylov bases, outperforming truncated Arnoldi and many other tested methods. § THE SKETCH-AND-SELECT ARNOLDI PROCESS At iteration j of the sketch-and-select Arnoldi process we compute _j := A_j as in the standard Arnoldi process, but then aim to determine coefficients h_i,j (i=1,…,j) of which at most k are nonzero. We then compute _j := _j - ∑_h_i,j≠ 0 h_i,j_i and set _j+1:=_j/h_j+1,j with a suitable scaling factor h_j+1,j. To obtain the nonzero coefficients h_i,j we use the sketched Krylov basis SV_j and the sketched vector S_j. For any iteration j>k, we approximately solve the following sparse least squares problem: select an index set I ⊆{ 1,2,…,j } with |I| = k as Imin∈ℝ^|I|min S_j - SV_j( : ,I) . Here we have used MATLAB notation to denote column selection. Given the index set I, the components of the best are then used as the projection coefficients h_i,j in <ref>. Finally, the scaling factor h_j+1,j is chosen so that S _j+1 = 1. The determination of an optimal index set I is also known as best subset selection problem, a classic topic of model selection in statistical learning <cit.>. There are two main variants of this problem; (i) the problem considered above where the sparsity level k is prescribed, and (ii) for a given tolerance ϵ >0, the problem of selecting an index set I with fewest possible elements so that min_∈ℝ^|I| S_j - SV_j( : ,I) ≤ϵ. It is known that the determination of a global minimizer for such problems is NP-hard; see <cit.>. Nevertheless, a vast amount of literature has been devoted to developing efficient optimization algorithms for this task. A review of these methods is beyond the scope of this paper, so we refer to the excellent overview in <cit.>. A simple approach to selecting the index set I is to retain the k components of := (SV_j)^† (S_j) which are largest in modulus, ignoring the remaining j-k components. 
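In code, this selection rule amounts to one small sketched least squares solve per iteration followed by a thresholding of the coefficient vector. The following Python sketch is our own rough analogue (the implementation accompanying the paper is in MATLAB); the initial normalization in the sketched norm is one of several reasonable choices, and all names are ours.

```python
import numpy as np

def sketch_select_arnoldi(A, b, m, k, S):
    """Sketch-and-select Arnoldi with the 'keep the k largest least squares
    coefficients' selection rule; S is an s x N embedding matrix."""
    N = b.shape[0]
    V = np.zeros((N, m + 1))
    SV = np.zeros((S.shape[0], m + 1))
    H = np.zeros((m + 1, m))
    v = b / np.linalg.norm(S @ b)                 # normalize in the sketched norm
    V[:, 0], SV[:, 0] = v, S @ v
    for j in range(m):
        w = A @ V[:, j]
        sw = S @ w
        # least squares solution h = (S V_j)^+ (S w_j) of the small sketched problem
        h = np.linalg.lstsq(SV[:, : j + 1], sw, rcond=None)[0]
        if j + 1 > k:                             # keep only the k largest-modulus coefficients
            I = np.argsort(np.abs(h))[-k:]
            hk = np.zeros_like(h)
            hk[I] = h[I]
            h = hk
        H[: j + 1, j] = h
        w = w - V[:, : j + 1] @ h                 # project out only the selected basis vectors
        sw = S @ w
        H[j + 1, j] = np.linalg.norm(sw)          # scale so that the sketched vector has unit norm
        V[:, j + 1] = w / H[j + 1, j]
        SV[:, j + 1] = sw / H[j + 1, j]
    return V, H
```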
We found this to perform very well in the experiments reported in <ref>, and we present a basic MATLAB implementation in <ref>. We refer to this variant as (for “pseudoinverse”). One could also try to justify a weighted version of by noting that (SV_j)^T [ S_j - SV_j( : ,I) (I) - SV_j( : ,I) (I) ] = 0, where I denotes the set of indices that have not been selected and =(SV_j)^† (S_j) as before. Therefore (SV_j)^T [ S_j - SV_j( : ,I) (I) _= h_j+1,j S _j+1 ] = (SV_j)^T [SV_j( : ,I) (I)]. Our analysis in <ref> shows that ([S V_j, S_j+1]) crucially depends on the norm (SV_j)^T (S_j+1), which should be kept as small as possible. Upon applying norms we obtain |h_j+1,j | ·(SV_j)^T (S_j+1)≤∑_i ∈I | h_i,j | · (SV_j)^T (S_i) , which suggests to assign to I the indices of the k coefficients h_i,j for which |h_i,j| · (SV_j)^T (S_i) is largest (i=1,…,j). Unfortunately, the non-orthogonality of S V_j and the complicated dependence of h_j+1,j on the index set I means that minimizing the right-hand side of this inequality does not necessarily lead to a significant reduction of (SV_j)^T (S_j+1), which is why we no longer consider this weighted variant of here. Another approach is to select I as above, but then to recompute the corresponding coefficients as = SV_j( : ,I)^† (S_j). This ensures that the projected S_j is orthogonal to (SV_j( : ,I)). We refer to this variant as (as two pseudoinverses are computed). In the variant (for “correlation”) we select I as the components of (SV_j)^T (S_j) which are largest in modulus, using the k inner products as the projection coefficients h_i,j.[The variant has been hinted at in <cit.>: “Indeed, (𝐒𝐪_i)^* (S𝐀𝐪_j) ≈𝐪_i^* 𝐀𝐪_j for all i ≤ j, so we can choose to orthogonalize 𝐀𝐪_j only against the basis vectors 𝐪_i where the inner product is nonnegligible.”] Alternatively, we can recompute the projection coefficients as = SV_j( : ,I)^† (S_j), referred to as variant . Finally, we also test three popular methods for sparse approximation, namely orthogonal matching pursuit (OMP) <cit.>, subspace pursuit (SP) <cit.>, and the “Algorithm Greedy” from <cit.>. We have chosen these methods due to their popularity but also because they naturally allow for a fixed sparsity level k, as opposed to, e.g., LASSO <cit.>. §.§ Growth of the basis condition number By <ref> it is sufficient to only control the condition number growth of the sketched Krylov basis [SV_j, S _j+1]. For notational convenience, we will write this as [V,]. Our aim is to bound the growth of ([V , ]) in terms of the current (V). Note that the Gram matrix [V, ]^T [V, ] has unit diagonal in the sketch-and-select Arnoldi process as we always normalize each sketched Krylov basis vector, and so (V)^2 = ( [ V^T V 0; 0^T 1 ]). Also, ([V,])^2 = ( [ V^T V V^T; ^T V 1 ]). We can apply standard relative perturbation bounds known for symmetric positive definite matrices. To this end, write [ V^T V V^T; ^T V 1 ] = [ V^T V 0; 0^T 1 ] + [ O V^T; ^T V 0 ] =: G + Δ G. Note that Δ G=V^T. Then (see, e.g., <cit.> or <cit.>) ([V,])^2 = (G + Δ G) = λ_max(G + Δ G)/λ_min(G + Δ G) ≤ (1+η) λ_max(G)/(1-η)λ_min(G) = 1+η/1-η(V)^2, where η = G^-1/2 (Δ G) G^-1/2 = (V^T V)^-1/2 V^T ≤σ_min(V)^-1 V^T . Clearly this bound is only useful as long as 1-η>0, which is guaranteed if V^T < σ_min(V), and generally it cannot be expected to be sharp. However, it shows that it is a good idea to try to keep V^T as small as possible. 
Going back to the notation used for the sketch-and-select Arnoldi process, this means that we should aim to keep (SV_j)^T (S _j+1) small. An alternative approach to quantify the condition number growth is by bounding the smallest and largest eigenvalue of the Gram matrix, taking into account the special structure of that matrix. The following theorem provides such a result, giving a more explicit bound on ([V, ]). Let V be a matrix with m linearly independent columns of unit norm. Denote by σ_min and σ_max the smallest and largest singular value of V, respectively. Further, let be a unit norm vector such that V^T < σ_min. Then ([V,])^2 ≤1 + σ_max^2 + √( (σ_max^2-1)^2 + 4V^T ^2)/1 + σ_min^2 - √((σ_min^2-1)^2 + 4V^T ^2). We begin by remarking that, as the columns of [V,] are normalized, we have σ_max([V,])≤√(m+1) and hence any ill-conditioning of [V,] is mainly attributable to a small value of σ_min([V,]) = λ_min([V,]^T [V,])^1/2. Define the Rayleigh quotient R(,) = [ ^T, √(1-^2)] [ V^T V V^T; ^T V 1 ][ ; √(1-^2) ], ≤ 1. Let us denote β = and fix V^T :=α. Then R(,) = ^T (V^T V) + 2 ^T (V^T ) √(1-β^2) + 1 - β^2. The first term in this Rayleigh quotient is minimized by choosing as an eigenvector of V^T V corresponding to λ_min = λ_min(V^T V). For any choice of , =β, the second term is minimal when is such that V^T = -α/β. Hence, we can minimize the overall Rayleigh quotient directly, leading to R_min(β) = β^2 λ_min - 2 αβ√(1-β^2) + 1 - β^2. To find the optimal β∈ [0,1], we set γ:= 1-β^2, upon which R_min(β) = (1-γ)λ_min - 2α√(γ-γ^2) +γ which is easy to differentiate for γ. The optimal value β_* that minimizes R_min is β_* = √(1-γ_*), where γ_* = C_*^2 - C_*√(C_*^2+4) + 4/2(C_*^2+4), C_* = 1-λ_min/α. We can now derive a rather simple expression for R_min(β_*) in terms of C_*. We have γ_* = C_*^2 - C_*√(C_*^2+4) + 4/2(C_*^2+4) = 1/2 - 1/2C_*/√(C_*^2 + 4), and from this expression it easily follows that √(γ_* - γ_*^2) = 1/√(C_*^2 + 4). By plugging this expression and γ_* into the expression of R_min(β_*), we have R_min(β_*) = λ_min + (1 - λ_min)γ_* - 2 α√(γ_* - γ_*^2) = λ_min + α C_* ( 1/2 - 1/2C_*/√(C_*^2 + 4)) - 2 α/√(C_*^2 + 4) = λ_min + α/2 √(C_*^2 + 4)( C_* √(C_*^2 + 4) - C_*^2 - 4 ) = λ_min + α/2( C_* - √(C_*^2 + 4)) = 1/2 + 1/2λ_min - α/2√(C_*^2 + 4) = 1 + λ_min - √((1-λ_min)^2 + 4α^2)/2. For this quantity to be positive, we require α^2 < λ_min or equivalently V^T v < σ_min(V). Similarly, the first term in the Rayleigh quotient <ref> is maximized by choosing  as an eigenvector of V^T V corresponding to λ_max = λ_max(V^T V). For any choice of , =β, the second term in <ref> is maximal when is such that V^T = α/β. Hence, we can maximize the overall Rayleigh quotient directly, leading to R_max(β) = β^2 λ_max + 2 αβ√(1-β^2) + 1 - β^2. To find the optimal β∈ [0,1], we set γ:= 1-β^2, upon which R_max(β) = (1-γ)λ_max + 2α√(γ-γ^2) +γ. The optimal value β^* that maximizes R_max is β^* = √(1-γ^*), where γ^* = C^2 - C √(C^2+4) + 4/2(C^2+4), C = λ_max-1/α. (Note that the only difference compared to the above is in C versus C_*.) Evaluating R_max(β^*) yields R_max(β^*) = 1 + λ_max + √((λ_max-1)^2 + 4α^2)/2. Combining the expressions for the (worst-case) Rayleigh quotients, we obtain ([V,])^2 ≤R_max(β^*)/R_min(β_*) = 1 + λ_max + √((λ_max-1)^2 + 4α^2)/1 + λ_min - √((λ_min-1)^2 + 4α^2). The result follows since λ_max = σ_max(V)^2, λ_min = σ_min(V)^2, and α = V^T. 
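The bound of the theorem is easy to probe numerically. The short check below is purely illustrative (the dimensions, the random seed and the construction of the vector are arbitrary choices of ours): it draws a matrix V with unit-norm columns and a unit vector w with V^T w < σ_min(V), and verifies that κ([V, w])² does not exceed the right-hand side of the bound.

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 200, 10
V = rng.standard_normal((N, m))
V /= np.linalg.norm(V, axis=0)                         # unit-norm columns
w = rng.standard_normal(N)
w -= V @ np.linalg.lstsq(V, w, rcond=None)[0] * 0.99   # keep ||V^T w|| small but nonzero
w /= np.linalg.norm(w)

smin, smax = np.linalg.svd(V, compute_uv=False)[[-1, 0]]
alpha = np.linalg.norm(V.T @ w)
assert alpha < smin                                    # hypothesis of the theorem

lhs = np.linalg.cond(np.column_stack([V, w])) ** 2
rhs = (1 + smax**2 + np.sqrt((smax**2 - 1) ** 2 + 4 * alpha**2)) / \
      (1 + smin**2 - np.sqrt((smin**2 - 1) ** 2 + 4 * alpha**2))
print(lhs, rhs, lhs <= rhs)                            # the bound should hold (True)
```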
Observe that it follows from the proof of <ref> that if the vector minimizes the Rayleigh quotient in <ref> (i.e., it satisfies V^T = -α /β_* where is the eigenvector of V^TV corresponding to λ_min), we have σ_min ([V, ])^2 = 1 + σ_min^2 - √((1-σ_min^2)^2 + 4V^T ^2)/2. Since for any vector of unit norm we have σ_max([V, ]) ≥σ_max≥ 1, this implies that there is a choice of for which ([V,])^2 ≥2/1 + σ_min^2 - √((σ_min^2-1)^2 + 4V^T ^2). Recalling that σ_max([V, ]) ≤√(m+1), we find that the right-hand side of <ref> is smaller than the right-hand side of <ref> at most by a factor m+1. In principle, it is possible to select a vector that realizes <ref> at every iteration, even though this is unlikely to happen in practice. Going back to the notation used for the sketch-and-select Arnoldi process, let us now consider the behavior of σ_min(SV_m) as m increases, assuming that at each iteration S _m+1 is selected as a vector that satisfies <ref>. For convenience, define x_m := σ_min(S V_m) and α_m := S _m+1 < x_m. Because of <ref>, these quantities satisfy the recurrence relation x_m+1^2 = 1/2(1 + x_m^2 - √((1 - x_m^2)^2 + 4 α_m^2)), m ≥ 1, with x_1 = σ_min(S_1) = 1. Using the fact that √(1 + z)≥ 1 + 1/2z - 1/8z^2 for all z ≥ 0, we can show that x_m+1^2 ≤ x_m^2 - α_m^2/1 - x_m^2(1 - α_m^2/(1 - x_m^2)^2). If for instance we take x_m ≤ 1/√(2) and 1/2 x_m ≤α_m < x_m, we have 1 - α_m^2/(1 - x_m^2)^2≤1/2, and so we obtain x_m+1^2 ≤ x_m^2 - 1/2α_m^2 ≤7/8 x_m^2, m ≥ 1. This implies that σ_min(S V_m) ≤(7/8)^m/2, m ≥ 1, showing that a geometric convergence to 0 of the smallest singular value of SV_m is possible. As a consequence, the condition number of SV_m may diverge geometrically (and hence also (V_m), because of <ref>), even if we impose the condition (SV_m)^T S_m+1 = 1/2σ_min(SV_m), which is quite stringent. § NUMERICAL EXPERIMENTS We now test variants of the proposed sketch-and-select Arnoldi process on a range of matrices from the SuiteSparse Matrix Collection (formerly the University of Florida Sparse Matrix Collection <cit.>), a widely used set of sparse matrix benchmarks collected from a wide range of applications. We include 80 matrices A in our test, which correspond to all square numerically nonsymmetric matrices in the collection (as of June 2023) with sizes between N=10^4 and 10^6. The starting vector  is chosen at random with unit normally distributed entries and kept constant for all tests with the same matrix dimension. The MATLAB scripts to reproduce the experiments in this section are available at <https://github.com/simunec/sketch-select-arnoldi>. In the tests below we compare the seven variants of the sketch-and-select Arnoldi process introduced in <ref>, each using a different method for the sparse least squares problem, as well as the truncated Arnoldi process with and without sketching. §.§ Illustration with a single matrix For our first experiment we plot in <ref> the condition number (V_m) as a function of m for the SuiteSparse problem , a matrix of size N=259,156 corresponding to a finite difference electro-physiological 3D model of a torso. We use a truncation parameter of k=2 and k=5 and perform m=100 Arnoldi iterations. The sketching operator is the subsampled random Hadamard transform (SRHT) <cit.> with an embedding dimension of s=200. We see from <ref> that most sketch-and-select Arnoldi variants exhibit a much smaller condition number growth than truncated Arnoldi. The main exception is which performs very badly in both cases, and which is only acceptable for k=5. 
Surprisingly, the variant breaks down after about m=80 Arnoldi iterations in the case k=2, but it works very well for k=5. The four most reliable variants are , , , and , all leading to a rather slow growth of the condition number. In terms of computational cost, all sketch-and-select variants only perform operations on small sketched matrices and vectors to determine the projection coefficients. Hence these computations are comparably cheap, but is the cheapest method as it only requires the solution of a single s× j least squares problem in the j-th Arnoldi iteration. OMP is a greedy method that, for each Arnoldi iteration j, requires k iterations, with the i-th inner iteration involving a matrix-vector product with the s× j matrix S V_j and the solution of an s × i least squares problem with the currently selected columns of S V_j (i=1,…,j). SP is also an iterative method but we have fixed the number of iterations to 1. As a result, the operations performed by SP in the j-th Arnoldi iteration are two matrix-vector products with the s × j matrix S V_j, a matrix-vector product with an s × k matrix, and the solution of two s × k least squares problems, as well as a least squares problem with a matrix of size at most s × 2k formed with selected columns of S V_j. The “Algorithm Greedy” from <cit.> is an iterative method that requires k iterations for each Arnoldi iteration j, and each inner iteration involves the computation of 2j inner products between sketched vectors, for a total cost of 2jk sketched inner products and the solution of one s × k least squares problem in the j-th Arnoldi iteration. §.§ Performance profiles Our next tests involve all 80 matrices of the SuiteSparse Matrix Collection and we use performance profiles <cit.> to visualize the results. In a performance profile, each algorithm is represented by a non-decreasing curve in a θ–y graph. The θ-axis represents a tolerance θ≥ 1 and the y-axis corresponds to a fraction y∈ [0, 1]. If a curve passes through a point (θ,y) this means that the corresponding algorithm performed within a factor θ of the best observed performance on 100 y% of the test problems. For θ = 1 one can read off on what fraction of all test problems each algorithm was the best performer, while as θ→∞ all curves approach the value y = 1, unless an algorithm has failed on a fraction of the test problems. For each test problem, we run each algorithm until the condition number of the constructed basis becomes larger than 10^12, up to a maximum basis dimension. The performance ratio is computed as the inverse of the basis dimension that is reached, so that, e.g., θ=2 would correspond to an algorithm that generates a Krylov basis with half the size of the basis generated by the best algorithm. The top panel in <ref> shows the performance profiles for a target basis condition number of 10^12 and a maximum basis dimension of m = 100, with a truncation parameter of k=2. The embedding dimension is s = 200. The four most reliable variants are , , , and , and they are almost indistinguishable in performance. Given that is the most straightforward to implement and the most computationally efficient, it emerges as the method of choice from these experiments. The bottom panel in <ref> displays the dimensions of the Krylov bases constructed for each test matrix by the four best performing algorithms and by truncated Arnoldi. The matrices are sorted so that the dimensions of the bases generated by truncated Arnoldi are in non-decreasing order. 
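As a minimal illustration of how such profile curves are assembled (a sketch of ours, not the authors' scripts; the toy data are made up), one can compute, for each tolerance θ, the fraction of test problems on which an algorithm reached a basis dimension within a factor θ of the best algorithm on that problem:

```python
import numpy as np

def performance_profile(perf, thetas):
    """perf: (n_problems, n_algorithms) array where larger is better (here, the reached
    basis dimension); returns an array of shape (n_thetas, n_algorithms)."""
    ratios = perf.max(axis=1, keepdims=True) / perf      # >= 1, and 1 means best on that problem
    return np.array([(ratios <= t).mean(axis=0) for t in thetas])

# toy usage: basis dimensions reached by 3 algorithms on 4 test problems
dims = np.array([[100,  80,  60],
                 [ 90,  90,  50],
                 [ 40,  80,  80],
                 [100, 100, 100]])
curves = performance_profile(dims, thetas=np.linspace(1, 3, 9))
```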
A similar picture emerges in <ref>, where we have increased the truncation parameter to k=5 and k=10, respectively; the maximum basis dimension is increased to m = 150 and m = 200, respectively, and the embedding dimension is chosen as s = 2m. With these parameters, the difference in performance between truncated Arnoldi and the best variants becomes more significant, and the variant can be seen to perform slightly better than , and . §.§ The effect of the starting vector In the previous experiments, we have used a starting vector b⃗ with random unit normally distributed entries. To investigate the influence of the starting vector on the performance of the algorithms, we repeat the experiment of <ref> using as vector the first canonical unit vector _1, instead of a random vector. The results are reported in <ref>. Surprisingly, the sketch-and-select Arnoldi variants perform significantly worse with this starting vector while, on the other hand, the overall performance of truncated Arnoldi is almost the same as before. In <ref> we repeat the same experiment by slightly perturbing the starting vector, i.e., we take = _1 + 10^-15, where denotes the vector of all ones. With this change, the performance of the sketch-and-select Arnoldi variants improves significantly relative to truncated Arnoldi, though not to the same level of what was observed in the experiment in <ref> with a random starting vector. A very similar improvement is obtained with a small random perturbation of the vector . While it currently appears to be impossible to make any general statements about the dependence of relative performance of sketch-and-select Arnoldi on the starting vector , we observed that truncated Arnoldi can produce artificially well conditioned bases for certain starting vectors. For example, it may happen that sparse basis vectors constructed by truncated Arnoldi have disjoint supports, and so they are all orthogonal to each other. (One example is the matrix : when =_1, the first 452 Krylov basis vectors produced by truncated Arnoldi with truncation parameter k≥ 2 are given by ±_2j-1.) The sketching-based methods do not “see” the sparsity of the basis vectors and hence cannot produce this exact orthogonality, losing performance relative to truncated Arnoldi. Adding a small (random) perturbation to the starting vector  removes the sparsity and hence reduces the appearance of such artificial cases. § FURTHER REMARKS ON THE SUBSET SELECTION PROBLEM Selecting k columns that give the smallest condition number of [V,] in the sketch-and-select Arnoldi process is a combinatorial problem, and even selecting a near-best index set I is nontrivial. We now give examples demonstrating that neither the largest coefficients in V^† nor those in V^T do necessarily indicate the best vectors to select for a minimal condition number growth. Note that most greedy algorithms, including the “Algorithm Greedy” in <cit.>, OMP <cit.>, and SP <cit.> use the entries of either V^† or V^T to select vectors, and so can be misled on examples like the ones below. Consider V = 1/√(5)[ 1 0 0; 2 2 0; 0 1 1; 0 0 2 ], w⃗ = [ 8; 8; 9; 7 ]. Note that all columns of V have unit norm. Say k=1, then which of the three columns of V should we project out of to get the best possible conditioned basis of four vectors? Let us compute V^†w⃗ = [ 9.39; 1.68; 9.95 ], V^T = [ 10.7; 11.2; 10.3 ]. According to these coefficients, we might be tempted to project either against v⃗_3 or v⃗_2, respectively. 
However, among the vectors v⃗_i := (I - v⃗_iv⃗_i^†) , i=1,2,3, it is the choice i=1 that strictly minimizes both ([V, v⃗_i]) and ([V, v⃗_i/v⃗_i]). Instead of condition number growth, perhaps a better measure to look at is the growth of loss of orthogonality, e.g., by any of the metrics I - [V,v⃗]^T[V,v⃗] , I - [V,v⃗/v⃗]^T[V,v⃗/v⃗] , [V,v⃗]^T[V,v⃗] , [V,v⃗/v⃗]^T[V,v⃗/v⃗] in the Euclidean or Frobenius norm. Can we get some guarantees for that? It turns out that this is also not the case. The above matrix V and the vector =[9,9,10,10]^T give examples where, in all cases, the third component of V^†w⃗ and V^T w⃗ is the largest in modulus, but the smallest growth in loss of orthogonality with k=1 projection steps is obtained by projecting against _2. We conclude that there must be some condition on V (and possibly ) to guarantee that the “correct” (optimal) support of k coefficients is selected. This is a well-known fact in compressive sensing <cit.>, where conditions like the restricted isometry property (RIP, <cit.>) are needed to guarantee exact or approximate recovery in sparse least squares approximation (see, e.g., <cit.> or <cit.>). On the other hand, the sensing problem A = studied in this field is usually underdetermined and one often has the freedom to choose the columns of A (the dictionary), whereas in our case we would like to select k columns from a basis which is otherwise unstructured. Moreover, in compressive sensing the vector that one wants to recover is usually sparse, or at least well approximated by a sparse vector, while in our case the coefficient vectors are generally dense. Nevertheless, we believe that results developed in compressive sensing may be used to gain more insight into the sketch-and-select Arnoldi process. § CONCLUSIONS AND FUTURE WORK We have introduced a sketch-and-select Arnoldi process and demonstrated its potential to generate Krylov bases that are significantly better conditioned than those computed with the truncated Arnoldi process, at a computational cost that grows only linearly with the dimension of the Krylov space. We have identified that the problem of generating a well-conditioned Krylov basis in that way is related to the best subset selection problem in statistical learning and the sparse approximation problem encountered in compressive sensing. While in principle any of the many methods that have been proposed for these problems can be employed, we have been surprised to find that the most basic variant of the sketch-and-select Arnoldi process shown in <ref> is among the best. In this approach we simply retain the k largest modulus coefficients of the least squares solution, setting the remaining coefficients to zero. Our implementation of the sketch-and-select Arnoldi process in <ref> is not optimized for performance, with several straightforward improvements possible including the use of QR updating strategies for the solution of the least squares problem or performing some of the operations in reduced precision. Also, for a practical implementation to be used in production, the sketch-and-select Arnoldi process could be modified to adapt the parameter k dynamically based on the measured growth of the condition number. Bounds like the ones derived in <ref> might be useful to control the condition number growth efficiently. Further possible extensions include a sketch-and-select block Arnoldi process or restarting strategies. § ACKNOWLEDGMENTS I. S. wishes to thank S. G. 
for the kind hospitality received during a research visit to the University of Manchester, where this work was initiated.
http://arxiv.org/abs/2306.04312v1
20230607101814
Flat band-engineered spin-density wave and the emergent multi-$k$ magnetic state in the topological kagome metal Mn$_{3}$Sn
[ "Xiao Wang", "Fengfeng Zhu", "Xiuxian Yang", "Martin Meven", "Xinrun Mi", "Changjiang Yi", "Junda Song", "Thomas Mueller", "Wolfgang Schmidt", "Karin Schmalzl", "Eric Ressouche", "Jianhui Xu", "Mingquan He", "Youguo Shi", "Wanxiang Feng", "Yuriy Mokrousov", "Stefan Blügel", "Georg Roth", "Yixi Su" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci", "cond-mat.supr-con" ]
These authors contributed equally to this work Jülich Centre for Neutron Science (JCNS) at Heinz Maier-Leibnitz Zentrum (MLZ), Forschungszentrum Jülich, Lichtenbergstrasse 1, D-85747 Garching, Germany These authors contributed equally to this work Jülich Centre for Neutron Science (JCNS) at Heinz Maier-Leibnitz Zentrum (MLZ), Forschungszentrum Jülich, Lichtenbergstrasse 1, D-85747 Garching, Germany State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, 200050 Shanghai, China These authors contributed equally to this work Key Laboratory of Advanced Optoelectronic Quantum Architecture and Measurement (Ministry of Education), Beijing Key Laboratory of Nanophotonics and Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China Jülich Centre for Neutron Science (JCNS) at Heinz Maier-Leibnitz Zentrum (MLZ), Forschungszentrum Jülich, Lichtenbergstrasse 1, D-85747 Garching, Germany Institut für Kristallographie, RWTH Aachen University, D-52056 Aachen, Germany Low Temperature Physics Lab, College of Physics & Center of Quantum Materials and Devices, Chongqing University, Chongqing 401331, China Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China Jülich Centre for Neutron Science (JCNS) at Heinz Maier-Leibnitz Zentrum (MLZ), Forschungszentrum Jülich, Lichtenbergstrasse 1, D-85747 Garching, Germany Jülich Centre for Neutron Science (JCNS) at Heinz Maier-Leibnitz Zentrum (MLZ), Forschungszentrum Jülich, Lichtenbergstrasse 1, D-85747 Garching, Germany Jülich Centre for Neutron Science (JCNS) at ILL, Forschungszentrum Jülich, F-38000 Grenoble, France Jülich Centre for Neutron Science (JCNS) at ILL, Forschungszentrum Jülich, F-38000 Grenoble, France Université Grenoble Alpes, CEA, IRIG, MEM, MDN, F-38000 Grenoble, France Helmholtz-Zentrum Berlin für Materialien und Energie GmbH, Hahn-Meitner-Platz 1, D-14109 Berlin, Germany Low Temperature Physics Lab, College of Physics & Center of Quantum Materials and Devices, Chongqing University, Chongqing 401331, China Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China E-mail: [email protected] Key Laboratory of Advanced Optoelectronic Quantum Architecture and Measurement (Ministry of Education), Beijing Key Laboratory of Nanophotonics and Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China Peter Grünberg Institut (PGI) and Institute for Advanced Simulation (IAS), Forschungszentrum Jülich and JARA, D-52425 Jülich, Germany Institute of Physics, Johannes Gutenberg University Mainz, D-55099, Mainz, Germany Peter Grünberg Institut (PGI) and Institute for Advanced Simulation (IAS), Forschungszentrum Jülich and JARA, D-52425 Jülich, Germany Institut für Kristallographie, RWTH Aachen University, D-52056 Aachen, Germany E-mail: [email protected] Jülich Centre for Neutron Science (JCNS) at Heinz Maier-Leibnitz Zentrum (MLZ), Forschungszentrum Jülich, Lichtenbergstrasse 1, D-85747 Garching, Germany Magnetic kagome metals, in which topologically non-trivial band structures and electronic correlation are intertwined, have recently emerged as an exciting platform to explore exotic correlated topological phases, that are usually not found in weakly interacting materials described within the semi-classical 
picture of electrons. Here, via a comprehensive single-crystal neutron diffraction and first-principles density functional theory study of the archetypical topological kagome metal Mn_3Sn, which is also a magnetic Weyl fermion material and a promising chiral magnet for antiferromagnetic spintronics, we report the realisation of an emergent spin-density wave (SDW) order, a hallmark correlated many-body phenomenon, that is engineered by the Fermi surface nesting of topological flat bands. We further reveal that the phase transition, from the well-known high-temperature coplanar and non-collinear k = 0 inverse triangular antiferromagnetic order to a double-k non-coplanar modulated incommensurate magnetic structure below T_1 = 280 K, is primarily driven by the SDW instability. The double-k nature of this complex low-temperature magnetic order, which can be regarded as an intriguing superposition of a longitudinal SDW with a modulation wavevector k_L and a transverse incommensurate helical magnetic order with a modulation wavevector k_T, is unambiguously confirmed by our observation of the inter-modulation high-order harmonics of the type of 2k_L+k_T. This discovery not only solves a long-standing puzzle concerning the nature of the phase transition at T_1, but also provides an extraordinary example on the intrinsic engineering of correlated many-body phenomena in topological matter. Due to its proximity to the room-temperature k = 0 chiral antiferromagnetic order, the identified multi-k magnetic state can be further exploited for the engineering of the new modes of magnetization and chirality switching for potential applications in topological and antiferromagnetic spintronics. Flat band-engineered spin-density wave and the emergent multi-k magnetic state in the topological kagome metal Mn_3Sn Yixi Su July 31, 2023 ===================================================================================================================== § INTRODUCTION Non-trivial topology of single-electron band structures has been established as a new paradigm for quantum materials possessing large spin-orbit coupling (SOC) like that found e.g. in topological insulators and Dirac semimetals <cit.>. These topological matters, characterized by topological invariants, for instance, Chern number in k-space or the skyrmion winding number in real space, until recently, are found largely in weakly interacting materials in which electronic correlation (i.e., the Coulomb repulsive interaction among the conduction electrons) plays only a minor role. On the other side, strongly correlated electron materials have long been a major source in condensed matter for the realisation of novel quantum phenomena and exotic states of matter <cit.>, such as high-temperature superconductivity, colossal magnetoresistance, charge and orbital ordering, and quantum spin liquid. It is thus fascinating to look into the recently emerging correlated topological materials <cit.>, in which both band-structure topology and strong electron correlation are intertwined, for even more exotic electronic and magnetic phenomena and novel functionalities. 
Topological kagome metals <cit.>, consisting of a network of corner-sharing triangles of transition-metal ions, have emerged recently as an ideal platform to explore a variety of exotic topological phases, ranging from the magnetic Weyl fermions in Co_3Sn_2S_2 <cit.>, the massive Dirac fermions in the ferromagnetic Fe_3Sn_2 <cit.>, the quantum-limit magnetic Chern phase in TbMn_6Sn_6 <cit.>, to the kagome superconductors AV_3Sb_5 (A = K, Rb and Cs) in which unconventional superconductivity coexists with chiral charge-density wave (CDW) <cit.>, and the emergent CDW order in the antiferromagnetic FeGe <cit.>. Due to its peculiar geometry of a kagome lattice, both the linearly dispersing and topologically protected band-crossing Dirac points, the van Hove singularities and the dispersionless flat bands could in principle be present in the electronic band structures of a kagome metal, as shown schematically in Fig. <ref>(a). The van Hove singularity driven electronic instability <cit.> has indeed been suggested to be responsible for the emergence of CDW in both the kagome superconductors AV_3Sb_5 and the kagome magnet FeGe. Meanwhile, the presence of topological flat bands would imply that the kinetic energy of electrons can be vanishingly small, and their effective mass can be exceedingly large, such that the relevant electronic states would essentially be classified as spatially localised or strongly correlated. Therefore, flat bands can be a natural fertile ground for electronic correlation effects. Despite the recent advances in the experimental identification of flat bands in a number of magnetic kagome metals <cit.>, the quest for the flat band-driven correlated many-body phenomena remains a significant challenge. This is largely because the observed flat bands in these kagome metals are often located far away from the Fermi level E_F, while correlation effects would become prominent only if they are close to E_F. Binary intermetallic compounds Mn_3A (A = Sn, Ge) crystallize in a hexagonal structure with space group P6_3/mmc, in which Mn atoms form a slightly distorted breathing-type kagome lattice in each of the z = 1/4 and z = 3/4 layer as shown in Fig. <ref>(b). Below T_N = 420 K, Mn_3Sn exhibits a 120^∘ magnetic structure with a modulation wavevector k = 0 and a unique negative vector spin chirality <cit.>, also referred to as an inverse triangular antiferromagnetic order, in which the ordered Mn moments are aligned in the crystallographic ab plane, as shown in Fig.<ref>(c). This unusual non-collinear antiferromagnetic structure is stabilised by the presence of a significant Dzyaloshinskii–Moriya interaction (DMI) perpendicular to the kagome lattice plane <cit.>. The magnetic space group of this structure is proposed to be either Cm'cm' or Cmc'm' <cit.>. As an archetypical kagome metal, Mn_3Sn has attracted tremendous interests largely owing to the recent observations of a strikingly large anomalous Hall effect (AHE) <cit.> and a range of the related anomalous transport properties <cit.> at room temperature, as well as the realisation of exotic Weyl fermions <cit.> in this material. While as suggested in various theories <cit.> these anomalous transport properties are intimately linked to Berry curvature in k-space that is induced by this coplanar and non-collinear antiferromagnetic order in a kagome lattice, they are considered also as an experimental fingerprint of the topologically protected Weyl nodes residing near E_F. 
The interplay between non-collinear antiferromagnetism, Berry curvature of the electronic structures and Weyl fermions in Mn_3Sn is believed to play a crucial role in these anomalous transport properties <cit.> as well as in their remarkable room-temperature tunability, for instance, by electric currents <cit.>, spin-orbit torque <cit.>, and uniaxial strain <cit.>. In this regard, Mn_3Sn is also hailed as a very promising material for topological and antiferromagnetic spintronics applications <cit.>. Furthermore, the recent observations of the many-body Fano resonance <cit.> and the Kondo effect <cit.> have indeed suggested that Mn_3Sn is a strong candidate material to explore topology-driven correlated many-body phenomena. The hexagonal structure of the bulk material Mn_3Sn is stable only with a small amount of excess Mn that would intrinsically substitute at the Sn sites <cit.>, therefore, depending on the exact chemical composition, the physical properties of Mn_3Sn may exhibit small sample dependence among samples prepared via different crystal growth methods. Since the electronic states at E_F are occupied solely by the Mn 3d electrons, a small amount of Mn doping would cause a sizeable shift of the chemical potential relative to the E_F comparing to the stoichiometric case. For instance, a 1% Mn-doping in Mn_3Sn may shift up the chemical potential by about 6 meV from E_F <cit.>. Therefore, this intrinsic albeit small doping effect due to excess Mn can have a substantial impact on the Fermi level of the electronic band structures, if acted in a controllable way, it may even be used for an intrinsic tuning of E_F in this material. While the antiferromagnetic phase transition at T_N appears a very robust feature across different sources of samples, a second phase transition at around T_1 = 280 K, that occurs within the inverse triangular antiferromagnetic ordered parent phase, to a possible low-temperature spiral magnetic phase, has also been hinted in the scattered literatures over the past decades <cit.>. Intriguingly, an accompanied complete suppression of the large AHE below T_1 was also observed in the samples grown by the self-flux method recently <cit.>, suggesting a significant and simultaneous change in both magnetic structure and transport properties. This also offers a new dimension for the switching of these remarkable anomalous transport properties in Mn_3Sn via a small change in temperature. A recent study on sample dependence further suggests that the phase transition at T_1 would emerge only when the chemical composition of a sample is close to the ideal stoichiometry <cit.>. However, the underlying mechanism for the occurrence of this highly unusual phase transition as well as the exact magnetic structure below T_1 have yet to be established. In this article, we report a comprehensive study of the complex magnetic order and electronic band structures of the archetypical topological kagome metal Mn_3Sn via both single-crystal neutron diffraction experiments and first-principles density functional theory (DFT) calculations. A key finding of this study is the observation of an emergent spin-density wave (SDW) order and the accompanied exotic multi-k magnetic state at low temperatures in this compound. 
Based on neutron polarisation analysis and magnetic structure refinement, we found that Mn_3Sn transforms, from a high-temperature coplanar and non-collinear k = 0 inverse triangular antiferromagnetic order, to a double-k non-coplanar modulated incommensurate magnetic structure below T_1, which can be regarded as an intriguing superposition of a longitudinal SDW with a modulation wavevector k_L, and a transverse incommensurate helical magnetic order with a modulation wavevector k_T. The nature of this complex low-temperature magnetic order is unambiguously demonstrated by the observation of the inter-modulation high-order harmonics of the type of 2k_L+k_T. Furthermore, a flat band along the high-symmetry K-M-K direction near E_F is revealed by our DFT band-structure calculations. We found that the parallel sections of this flat band on the Fermi surface form a perfect nesting condition that matches to the modulation wavevector of the observed incommensurate magnetic structures. We thus argue that the magnetic phase transition at T_1 is primarily driven by a SDW instability that is associated to the Fermi-surface nesting of flat bands, also a hallmark correlated electron phenomenon. Our DFT calculations also quantitatively demonstrate that the small intrinsic doping effect due to excess Mn is indeed responsible for shifting E_F near to this flat band, and thus paves the way for the emergence of rich Fermi surface-mediated many-body correlation effects. The discovery of a topological flat band-engineered SDW in Mn_3Sn not only solves a long-standing puzzle concerning the nature of the phase transition around T_1, but also provides an extraordinary and also a very rare example on the intrinsic engineering of correlated many-body phenomena in topological kagome metals. Given that the formation of SDW would considerably alter the electronic structure near E_F, for instance, by opening a small gap, therefore, potential impact on the Weyl fermions from SDW as well as a possible interplay between them would become a fascinating topic to explore in the future. Furthermore, the identified multi-k magnetic states in such a prototypical example of magnetic topological materials could offer new possibilities for the engineering of the new modes of magnetization and chirality switching in Mn_3Sn and the related kagome chiral antiferromagnets for potential applications in topological and antiferromagnetic spintronics. § MATERIALS AND METHODS High-quality Mn_3Sn single crystals were grown from the molten self-flux methods <cit.>. The starting materials consisting of Mn pieces (99.95% purity, Alfa Aesar) and Sn granules (99.99% purity, Chempur) were loaded in an Al_2O_3 crucible with some quartz wools on its top as a filter, and the crucible was then placed in a quartz ampoule that was sealed in vacuum afterwards. The quartz ampoule was heated to 1100 ℃ in 10 hours in a tube furnace, subsequently dwelt at 1100 ℃ for 20 hours, and then cooled down to 900 ℃ at a rate of 1 ℃/hour. When the temperature decreased to 900 ℃ the ampoule was quickly transferred to a centrifuge to separate crystals from the Sn flux. In some batches, some crystals are not fully separable from the flux, and the excess Sn flux could then be removed mechanically. The chemical composition of the as-grown single crystals was checked via energy dispersive X-ray analysis (EDX). The crystalline quality and orientation of the selected crystals were carefully examined by X-ray Laue. 
The magnetic properties of a number of Mn_3Sn crystals grown from different batches were measured between 1.8 K and 300 K on a SQUID magnetometer system (from Quantum Design). Both the longitudinal electric resistivity and Hall effect measurements were carried out from 2 K and 300 K on a PPMS system (from Quantum Design). A range of comprehensive single-crystal neutron diffraction experiments were carried out on our Mn_3Sn samples. The non-polarised single-crystal neutron diffraction experiment was performed at the hot-neutron 4-circle diffractometer HEiDi <cit.> at the Heinz Maier-Leibnitz Zentrum (MLZ), with an incident wavelength of 0.87 Å. For the precise determination of nuclear structure and chemical compositions of the studied sample, in total 707 nuclear reflections were measured in the incommensurate magnetic phase at 210 K, in which nuclear and magnetic reflections are well separated. Furthermore, 612 reflections, including 142 nuclear and 470 magnetic ones, were measured for the refinement of the magnetic structure in the lock-in phase at 225 K. The determination of the coplanar and non-collinear k = 0 antiferromagnetic structure was based on the 1343 reflections collected in the inverse triangle antiferromagnetic phase at 300 K. A further non-polarised single-crystal neutron diffraction experiment was undertaken for the detailed measurements of both temperature and magnetic field dependence at the lifting-counter thermal-neutron diffractometer D23 with a 6 T vertical-field magnet and an incident wavelength λ_i = 2.36 Å. For this experiment, the (h,0,l) reciprocal plane of the studied crystal is aligned in the horizontal scattering plane, so that the applied magnetic field is in parallel to the [1,2,0] direction of this hexagonal lattice system. The polarized neutron diffraction experiments were carried out at the cold-neutron polarised spectrometer DNS <cit.> (with λ_i = 4.74 Å) at MLZ, and at the cold-neutron triple-axis spectrometer IN12 at the Institut Laue–Langevin (ILL) (with k_i = k_f = 2 Å^-1). A neutron velocity selector is employed for the filtering out of higher-harmonics at both DNS and IN12. A Helmholtz XYZ-coil system and a zero-field CRYOPAD system were used for the xyz polarisation analysis at DNS and IN12, respectively. The flipping ratio of the typical nuclear reflections obtained at both instruments on the studied Mn_3Sn samples is in the range of 20-25. For the experiments at DNS, the thoroughness of the magnetic phase transition at T_1 was carefully checked in a number of single crystals grown from different batches, including both (h,0,l) and (h,h,l) oriented in the horizontal scattering plane. The crystals that show a complete transition at T_1 were chosen for the polarisation analysis study. For the IN12 experiment, only the crystal oriented in (h,0,l) was measured. No noticeable neutron depolarisation could be seen in the studied temperature range of the neutron polarisation analysis measurements. The electronic structure of Mn_3Sn was calculated based on the first-principles density functional theory with the generalized-gradient approximation (GGA) in the form of Perdew-Burke-Ernzerhof <cit.>. The accurate frozen-core full-potential projector augmented wave method <cit.>, as implemented in the Vienna ab initio simulation package (VASP) <cit.>, was used. The fully relativistic projector augmented potentials are adopted in order to include the spin-orbit coupling. 
The plane-wave energy cut-off of 300 eV and a Monkhorst-Pack k-point mesh of 9×9×9 were used for the self-consistent field calculations. After having obtained the converged charge density, the maximally localized Wannier functions were constructed by projecting onto the s, p, and d orbitals of the Mn atoms as well as onto the s and p orbitals of the Sn atoms, using the WANNIER90 package <cit.>. The Fermi surface is then calculated on a denser k-point mesh of 51×51×51 by means of Wannier interpolation. § RESULTS §.§ Phase transition at T_1 The phase transition at T_1 in our flux-method grown single crystals has been carefully examined via the measurements of magnetic and transport properties and polarised neutron diffraction. As shown in Fig.<ref>(d), the large AHE in the magnetic parent phase vanishes completely below T_1 in our samples and does not recover at lower temperatures. The magnetic susceptibility measured along both the c axis and crystallographic ab plane undergoes a drastic drop at around T_1 (see Fig.<ref>(e)). This drastic change can also be confirmed in the normalised longitudinal resistivity ρ_xx(T)/ρ_xx(300 K) measured with electric current applied along the [0,0,1] and [1,2,0] directions, respectively, as shown in Fig.<ref>(f), in which a notable jump along the [0,0,1] direction could be seen clearly at T_1. These observations thus indicate that the phase transition at T_1 is clearly associated with a substantial and also collective change in electronic, magnetic and topological properties. In addition, given that the inverse triangular antiferromagnetic order at room temperature is k = 0 type, the nuclear reflections coexist with the magnetic ones at the same q position. With the help of polarised neutron scattering, the magnetic scattering cross-section can be separated from the contribution of the nuclear coherent scattering cross-section, i.e., the x-spin flip channel (xsf) uniquely for magnetic reflections, and the x-non-spin flip channel (xnsf) only for nuclear reflections. As shown in Fig.<ref>(g) and (h), the temperature dependence of the polarised neutron diffraction measurements shows the complete vanishing of magnetic intensities of the (1,0,0) and (1,1,0) reflections in the xsf channel, demonstrating that the transition at T_1 in the studied samples is indeed thorough and complete. Further discussions about this issue can be found in the Appendix. §.§ Magnetic structure at 300 K To confirm the magnetic structure of Mn_3Sn above T_1, we carried out a detailed single-crystal neutron diffraction experiment at 300 K at HEiDi. In total, 1343 reflections were collected for the structural refinement with Jana2006 <cit.>. It is known that the room-temperature magnetic structure of Mn_3Sn is a type of 𝐤 = (0,0,0) antiferromagnetic order, in which the magnetic reflections appear at the same position as the nuclear ones. We used the atomic occupancies already obtained at 210 K (see Tab.<ref> in the Appendix A) for the refinements. Based on the previous studies <cit.>, there are two possible magnetic space groups proposed for the room-temperature magnetic structure, namely, Cmc'm' or Cm'cm'. The corresponding magnetic structure models and the respective refinement results are shown in Fig.<ref>(a-d) and Tab.<ref>. The calculated structure factors are quite consistent with the measured values for both models.
The ordered magnetic moment at 300 K is determined to be 2.54(8) μ_B and 2.53(10) μ_B respectively for each model, which are close to 3.00(1) μ_B obtained in an earlier study <cit.>. To distinguish between these two possible models with non-polarised single-crystal neutron diffraction is challenging because of the presence of multiple magnetic domains at zero field. This challenge could not even be completely overcome in an earlier study via spherical neutron polarisation analysis on a magnetized sample performed by P.J. Brown et al. <cit.>. Note that in a recent spherical neutron polarisation analysis study on a closely related compound Mn_3Ge, it has been shown that the magnetic space group Cm'cm' is favoured for the description of the ground state magnetic structure <cit.>. §.§ Unravelling complex magnetic order below T_1 via polarised neutron diffraction To reveal the magnetic structures below T_1, a series of polarised and non-polarised neutron diffraction experiments were carried out on our Mn_3Sn single crystals. Neutron polarisation-resolved reciprocal space intensity maps were measured at various temperatures in the (h,0,l) scattering plane as shown in Fig.<ref>(a). For the longitudinal neutron polarisation analysis employed in our measurements, we define three orthogonal directions in a Cartesian coordinate system: x is parallel to the scattering vector 𝐐, z is perpendicular to the scattering plane, and y is determined by the right-hand rule and also lies in the scattering plane. At 300 K, i.e., in the inverse triangular antiferromagnetic parent phase, a sharp magnetic peak (1,0,0) is observed in the xsf and znsf channels, but is absent in the zsf channel. When the sample is cooled down to 260 K, i.e., below T_1, (1,0,0) disappears in the xsf channel, and a set of new satellite peaks emerge at around (1,0,±0.1) in the xsf, zsf and znsf channels. The one-dimensional profile cuts from the reciprocal space intensity maps at 260, 210 and 140 K are plotted in Fig.<ref>(b), in which two types of incommensurate magnetic satellite peaks are evident in the xsf channel below T_1, and surprisingly, they are well separable between the zsf and znsf channels, even at 210 K where there is only a single satellite peak at (1,0,±0.1). This observation is verified by further high-resolution polarised neutron diffraction measurements at 260 and 178 K as shown respectively in Fig.<ref> (d) and (f). Among the six measured polarisation cross-section channels at each temperature, these two different types of magnetic satellite peaks that both appear in the xsf channel are indeed well separated among the ynsf, zsf and ysf, znsf channels, respectively. This finding suggests that neutron polarisation analysis can provide a unique way for labelling multiple magnetic satellite peaks, thus paving the way for a systematic study of the complex magnetic structures below T_1 in Mn_3Sn. Based on the experimental scattering geometry (see Fig.<ref>(c)), if both the background and incoherent scattering contributions are negligible, the polarised neutron scattering cross-sections can be expressed by the following equations with respect to the magnetic interaction vector 𝐌_⊥: σ_z^sf = σ_x^sf - σ_y^sf = 𝐌_⊥ y^2 = |𝐌^[1,0,0]|^2 sin^2θ + |𝐌^[0,0,1]|^2 cos^2θ, and σ_y^sf = σ_z^nsf - σ_x^nsf = 𝐌_⊥ z^2 = |𝐌^[1,2,0]|^2, where σ_i^j (i = x,y,z, j = sf, nsf) represents the spin flip (sf) or non-spin flip (nsf) cross-section in polarisation i, and θ is the angle between the scattering vector 𝐐 and the [1,0,0] direction.
𝐌_⊥ y and 𝐌_⊥ z are the components of the magnetic interaction vector 𝐌_⊥ = 𝐪×𝐌×𝐪 (with 𝐪 the unit vector along the scattering vector, i.e., the component of 𝐌 perpendicular to 𝐐) along the polarisation directions y and z, respectively. At 300 K, for the reflection (1,0,0) the scattering vector 𝐐_1 is in the (h,k,0) plane and parallel to the [1,0,0] direction, so that θ_1 equals zero. Given that the (1,0,0) reflection is observed only in the xsf and znsf channels, it can thus be concluded that the ordered magnetic moments lie only in the ab plane, in agreement with the previous reports <cit.>. For the incommensurate magnetic satellite reflections in the modulated magnetic phase below T_1, the scattering vector 𝐐_2 is not in the (h,k,0) plane; for instance, θ_2 = tan^-1(((2π/c)×0.1)/(2π/(√(3)a/2))) = 6.22^∘ for the satellite reflections (1,0,±0.1), and we then have sin^2θ_2 ≈ 0.01, which means that only about 1% of the magnetic component along the [1,0,0] direction would contribute to σ_z^sf, and that its contribution is essentially negligible in our analysis since it is within the error bar of our measurement. As a result of this analysis, as shown in Fig.<ref> (e) and (g), the magnetic satellite reflections around (1,0,±0.1) could be well decomposed into two orthogonal contributions: 𝐌_⊥ y^2 = |𝐌^[0,0,1]|^2 and 𝐌_⊥ z^2 = |𝐌^[1,2,0]|^2. This would indicate that the modulated magnetic phase below T_1 is not a spiral order as previously suggested <cit.>, but possesses an intriguing non-coplanar magnetic structure. From now on, we denote the magnetic modulation that is associated with the observed magnetic satellite reflections in the znsf (zsf) channel as k_T (k_L), since it is transverse (longitudinal) to the magnetic modulation wavevector. §.§ Temperature dependence of the magnetic order below T_1 Having established the way to resolve the different magnetic satellite reflections via neutron polarisation analysis, we turn to the evolution of the magnetic structures as a function of temperature. As shown in the [1,0,L] Q-scans over a wide temperature range in Fig.<ref>(a), initially, two pairs of magnetic satellite reflections centred around (1,0,0) emerge below T_1 = 280 K; then, in a fascinating way, they merge or lock in to a single one at (1,0,±0.1) from 240 to 200 K, and eventually split into two again below 200 K. Surprisingly, additional satellite peaks are evident at around (1,0,±0.3), which are almost 1-2 orders of magnitude weaker than the main satellite peaks and seemingly resemble the third-order harmonics of magnetic modulations k_3. By fitting the observed satellite peaks at each temperature with a Gaussian function (as shown in Fig.<ref>(b)), the modulation wavevectors of k_L, k_T, and k_3 can be precisely determined. Rather than as 3𝐤_L, 3𝐤_T, or 𝐤_L+2𝐤_T, the 𝐤_3-type satellite peaks can only be indexed as 2𝐤_L+𝐤_T, as shown in Fig.<ref>(c) (see also Fig.14 in the Appendix F), for the whole temperature range below T_1. The presence of such inter-modulation high-order harmonics is usually seen as strong evidence for the emergence of a multi-𝐤 magnetic structure. We thus conclude that Mn_3Sn possesses an intriguing double-k non-coplanar magnetic structure below T_1, which can be regarded as a superposition of a longitudinal spin-density wave k_L and a coplanar helical magnetic order k_T, both propagating along the c axis.
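To make the indexing argument concrete, the short numerical sketch below compares the candidate linear combinations of the two modulation wavevectors against an observed high-order satellite position. The values of k_L, k_T and the observed k_3 used here are purely hypothetical placeholders rather than the fitted values from the D23 data; the snippet only illustrates how a 2k_L+k_T assignment is discriminated from 3k_L, 3k_T or k_L+2k_T.

```python
import numpy as np

# Hypothetical c*-axis modulation components (in r.l.u.); the actual,
# temperature-dependent values come from Gaussian fits to the satellite peaks.
k_L = 0.085          # longitudinal SDW modulation (assumed for illustration)
k_T = 0.115          # transverse helical modulation (assumed for illustration)
k3_observed = 0.285  # assumed position of the weak high-order satellite

# Candidate indexings of the high-order harmonic
candidates = {
    "3 k_L":       3 * k_L,
    "3 k_T":       3 * k_T,
    "k_L + 2 k_T": k_L + 2 * k_T,
    "2 k_L + k_T": 2 * k_L + k_T,
}

# Rank the candidates by how closely they reproduce the observed position.
for label, value in sorted(candidates.items(), key=lambda kv: abs(kv[1] - k3_observed)):
    print(f"{label:12s} -> {value:.3f} r.l.u.   |diff| = {abs(value - k3_observed):.3f}")
```

With these placeholder numbers only the 2k_L+k_T combination reproduces the assumed satellite position; in practice the same comparison is made at every temperature using the fitted wavevectors.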
Such a type of double-k incommensurate magnetic order, and the observation of the inter-modulation harmonics of the type of 2k_L+k_T, are extremely rare, since k_T and k_L are neither from the symmetric arms of the same modulation wavevector group nor do they occur on different magnetic elements. As shown in Fig.<ref>(c), four phase regimes could be identified: k = 0 in the inverse triangular antiferromagnetic parent phase; k_T ≠ k_L from 280 to 240 K; k_T = k_L in the lock-in phase from 240 to 200 K; and k_T ≠ k_L below 200 K. Furthermore, clear "kinks" in the temperature dependence of the magnetic modulation wavevectors can be seen at 240 and 200 K, i.e., at the phase-boundary temperatures, reflecting a complex evolution of an apparent strong interaction between these two magnetic order parameters. §.§ Magnetic structure in the lock-in phase at 225 K While a complete determination of the double-k non-coplanar incommensurate magnetic structures at the different phase regimes via a neutron crystallographic approach can be a formidable task, the existence of this commensurate lock-in phase between 240 and 200 K with a single modulation wavevector q_lock-in = (0,0,0.1) provides a feasible way to achieve this. We have thus investigated the magnetic structure at 225 K via single-crystal neutron diffraction at HEiDi. The magnetic structure refinement, based on in total 612 reflections (including 142 nuclear Bragg reflections and 470 magnetic satellites) measured at HEiDi, was performed based on the magnetic superspace approach via Jana2006 <cit.>. Jana2006 also performs the representation analysis, and automatically generates all possible magnetic superspace symmetries for any given paramagnetic space group with only very basic input information, such as the crystal structure model and the propagation wavevector <cit.>. By trial and error, we have found that the best-fitted magnetic superspace model is P6322.1'(00g)-h00s as shown in Fig.<ref>(a-b), with the refined R(obs) = 5.89% and R_w(obs) = 11.71%. In the magnetic superspace approach, the magnetic moment of the atom v can be expressed by a Fourier series of the type 𝐌_v(𝐤·𝐫_v) = 𝐌_v, 0 + ∑_m[𝐌_v, m s sin(2π m 𝐤·𝐫_v) + 𝐌_v, m c cos(2π m 𝐤·𝐫_v)], where 𝐌_v, 0, 𝐌_v, m s and 𝐌_v, m c are the absolute value, the amplitude of the sine term and the amplitude of the cosine term, respectively. The details of the determined magnetic structure in the lock-in phase at 225 K are given in Tab.<ref>. This magnetic structure model is indeed a non-coplanar magnetic order that can be decomposed into a longitudinal SDW along the c direction (see Fig.<ref>(d)) and an elliptical helix in the ab plane (as shown in Fig.<ref>(e)). In each kagome layer, the in-plane projection of magnetic moments retains the same inverse triangular magnetic configuration, but with a phase shift between layers. Possible topological Hall effects may arise in such a non-coplanar magnetic structure due to the presence of a finite scalar spin chirality, which is defined as κ_ijk = s_i·(s_j×s_k), where s_i, s_j, s_k are the non-coplanar spins within a triangle. However, as illustrated in Fig.<ref>(c), for the modulated magnetic structure in the lock-in phase there is a global time-reversal symmetry connecting the layers l_ci and l_(c+5)i, thus leading to the vanishing of the total scalar spin chirality.
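As a minimal numerical illustration of the scalar spin chirality defined above, the sketch below evaluates κ_ijk = s_i·(s_j×s_k) for a slightly canted inverse-triangular spin triangle and for its time-reversed partner. The canting angle and spin directions are arbitrary toy values, not the refined moments; the point is only that κ changes sign under time reversal, so a structure that contains both partners in equal measure carries no net scalar spin chirality, as argued for the lock-in phase.

```python
import numpy as np

def scalar_spin_chirality(s_i, s_j, s_k):
    """kappa_ijk = s_i . (s_j x s_k) for three unit spins."""
    return float(np.dot(s_i, np.cross(s_j, s_k)))

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Toy non-coplanar triangle: a 120-degree in-plane arrangement canted out of
# the kagome plane by a small angle (values chosen only for illustration).
cant = np.deg2rad(10.0)
phis_deg = [90.0, 210.0, 330.0]
spins = [unit([np.cos(np.deg2rad(p)) * np.cos(cant),
               np.sin(np.deg2rad(p)) * np.cos(cant),
               np.sin(cant)]) for p in phis_deg]

kappa = scalar_spin_chirality(*spins)
kappa_rev = scalar_spin_chirality(*[-s for s in spins])  # time-reversed triangle

print(f"kappa            = {kappa:+.4f}")
print(f"kappa (reversed) = {kappa_rev:+.4f}")
print(f"sum              = {kappa + kappa_rev:+.4f}")  # cancellation -> no net chirality
```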
§.§ Electronic band structures, flat band and Fermi-surface nesting While the emergence of this longitudinal SDW from a magnetically ordered parent phase is rather peculiar, the clear jump in electric resistivity at T_1 (see Fig.<ref>(f)) would serve as a strong indication for the opening of an energy gap near E_F in electronic band structures due to the formation of SDW, as often observed in other known SDW and CDW materials <cit.>. We now turn to first-principles DFT band structure calculations for further insights. Our DFT calculations are based on the crystal structure and the room-temperature inverse triangular antiferromagnetic structure that are experimentally determined via single-crystal neutron diffraction on the studied single crystal (see the Appendix A). While the calculated band structures are largely in resemblance to those reported in the previous work <cit.>, a particularly interesting feature that is revealed in our calculations is the flat band (i.e. green-coloured) along the high-symmetry K-M-K direction, as shown in Fig.<ref>(b). Given that as-grown Mn_3Sn is stable only with slightly excess Mn at the Sn sites, which would in turn act effectively as electron doping, we also analysed the chemical shift of E_F as a function of charge doping, as shown in Fig.<ref>(c). The actual chemical composition of our studied crystal as determined from neutron diffraction structural refinement is Mn_3.012Sn_0.988 (see the Appendix A), equal to an effective electron doping of 0.672 e, so this doping level would shift E_F up by about 81 meV. As shown in Fig.<ref>(b), due to this intrinsic chemical doping and the resulted shift of E_F, the K-M-K flat band would reside exactly in the Fermi surface in the studied Mn_3Sn sample. This would create a favourable condition for the emergence of possible correlated many-body phenomena. Further analysis of the geometry of the Fermi surface for three representative cases of the doping level, namely, in the ideal stoichiometry (Fig.<ref>(d)), with optimal electron doping (Fig.<ref>(e)), and in the slightly over-doped case (Fig.<ref>(f)), has been undertaken. We found that, only with optimal electron doping in the case of Mn_3.012Sn_0.988, the parallel sections of this flat band in the Fermi surface could form a perfect nesting condition that matches to the modulation wavevectors of the observed incommensurate magnetic structures. We thus argue that the magnetic phase transition at T_1 is primarily driven by a SDW instability that is associated to the Fermi-surface nesting of flat bands, also a hallmark correlated electron phenomenon. §.§ Magnetic-field dependence of the multi-k magnetic state below T_1 The magnetic-field dependent measurements of this emergent multi-k magnetic state were carried out at D23. For this experiment, the (h,0,l) reciprocal plane of the studied crystal is aligned in the horizontal scattering plane, so that the applied magnetic field is in parallel to the [1,2,0] direction of this hexagonal lattice system. As shown in Fig.<ref>(a-b), no changes could be seen from the field dependence of the Q-scan along the [1,0,L] direction in the range of 0 to 6 T at 1.5 K. This indicates that the low-temperature non-coplanar double-k incommensurate phase is robust against magnetic fields applied along the [1,0,L] direction. Furthermore, comprehensive mappings in the (h,0,l) reciprocal-space plane as a function of applied magnetic fields were measured, as shown in Fig.<ref>(c-d). 
No evidence for possible new field-induced long-range or short-range magnetic modulations, such as an emergent magnetic skyrmion lattice suggested in a recent powder based study <cit.>, could be seen from these reciprocal-space mappings in our studied single-crystal sample. Therefore, the observed topological Hall effect (THE) in the mentioned powder sample <cit.> is not necessarily related to a well formed non-coplanar spin textures such as magnetic skyrmions, but possibly due to extrinsic causes, such as the chemical composition inhomogeneity, off-stoichiometry and multiple magnetic domains etc. in powders. It would be interesting to carry out further field-dependent investigations under applied fields in parallel to the [0,0,1] direction on the flux-method grown crystals. § DISCUSSION AND SUMMARY At first thought, the emergence of SDW from an antiferromagnetically ordered parent phase may seem very strange. But, based on our DFT band-structure calculations, the coplanar and non-collinear magnetic order in the parent phase is actually needed as one necessary condition for the formation of the identified topological flat bands and the resultant Fermi-surface nesting between them (also see the Appendix G). Another two necessary conditions are the chemical potential shift due to excess Mn, which can tune the Fermi level close to the flat band, and the electronic correlation effects, which would further sharpen the density of states (DOS) of the flat bands and thus favour emergent many-body phenomena. Such a correlated electronic band structure in Mn_3Sn is indeed captured in a recent DFT+DMFT study <cit.>. The formation of SDW would in turn open an energy gap at E_F and thus considerably alter the electronic structure near E_F. For instance, given that the same inverse triangular spin configuration is largely retained in the individual kagome layer of Mn below T_1, the complete suppression of AHE must be related to a dramatic modification of Berry curvature of the relevant electronic bands due to this significant band renormalisation. Furthermore, a potential impact on the Weyl fermions from SDW as well as a possible interplay between them are expected, which could become a fascinating topic to explore in the future. Unlike a usual helical order with a constant magnetic moment and phase between spins related by a lattice translation along the propagation vector, the helix in the realised double-k non-coplanar magnetic phase is incommensurately modulated with the periodicity of an amplitude-modulated SDW. The ground-state magnetic structure in Mn_3Sn thus decomposes into a set of magnetic order parameters with distinct propagation vectors. The very presence of the inter-modulation high-order harmonics, as well as the intriguing temperature dependence of these distinct propagation vectors (e.g., clear "kinks" in the phase boundary as shown in Fig.<ref>(c)), clearly indicate that this exotic magnetic phase is a result of the strong coupling between the emergent SDW and the high-temperature magnetic parent phase with an inverse triangular spin configuration. We believe that this coupling is achieved mainly via a very subtle effect of the amplitude-modulated SDW on the inter-unit cell exchange interaction term J_6 <cit.> (see Fig.<ref>(a)). In fact, in a similar way, a coupling between a multi-k helical order and orbital-density wave was also realised in a different helimagnet <cit.>. 
Furthermore, as shown in Fig.<ref>, no changes could be found from the magnetic-field dependent measurements in the range of 0-6 T at 1.5 K, suggesting that the low-temperature double-k non-coplanar incommensurate phase is robust against magnetic fields. This also strongly supports our interpretation that the phase transition at T_1 is primarily driven by emergent many-body electronic phenomena, such as SDW, since they act on larger energy scales than the antisymmetric DMI interaction. In this regard, this highly unusual double-k non-coplanar incommensurate magnetic state is simply a consequence of the competition and coexistence between the electronic correlation-driven SDW and the DMI-driven non-collinear chiral magnetism. Our discovery also represents an extraordinary example of intrinsic flat-band engineering of electronic and magnetic properties in topological metals. While the shifting of the Fermi level is achieved via intrinsic doping from excess Mn in the studied samples, it can be expected that this may also be realised via controlled extrinsic chemical substitutions, applied pressure or uniaxial strain in a similar manner. The effect of applied uniaxial strain on the SDW phase transition in Mn_3Sn has indeed been found recently <cit.>. Furthermore, the tuning of the Fermi level via systematic chemical substitutions, as well as its impact on the interplay between CDW and superconductivity, were also demonstrated in the kagome superconductor CsV_3Sb_5 <cit.>. The proximity of a flat band-engineered SDW instability to a room-temperature non-collinear antiferromagnetic order can lead to even more exotic properties and functionalities that are manipulable in this interesting topological quantum material. In summary, we have carried out a comprehensive investigation of the magnetic structures in the topological kagome metal Mn_3Sn via combined polarised and non-polarised single-crystal neutron diffraction. Consistent with the previous reports, a coplanar and non-collinear k=0 inverse triangular antiferromagnetic structure can be confirmed below T_N = 420 K in our studied single-crystal samples. An additional phase transition that is accompanied by a substantial and simultaneous change in electric transport, AHE and magnetic susceptibility is identified at T_1 = 280 K. Based on neutron polarisation analysis and the magnetic structure refinement, it is revealed that Mn_3Sn transforms to an intriguing double-k non-coplanar incommensurate magnetic structure below T_1. The double-k nature of this complex low-temperature magnetic order is unambiguously confirmed by the observation of the inter-modulation high-order harmonics of the type of 2k_L+k_T. Furthermore, our DFT band-structure calculations quantitatively demonstrate that the small electron doping due to the intrinsic Mn excess is responsible for the shift of E_F, so that this K-M-K flat band can be intrinsically engineered to be exactly at the Fermi surface for the studied samples, thus paving the way for the emergence of rich Fermi surface-mediated electron correlation effects. We further argue that this intriguing magnetic phase transition at T_1 is primarily driven by a longitudinal SDW instability that is associated with the Fermi-surface nesting between the identified flat bands.
The discovery of flat band-engineered SDW in Mn_3Sn not only solves a long-standing puzzle concerning the nature of the phase transition at T_1, but also provides an extraordinary example on the interplay and possible intrinsic manipulation of topology and electron correlation in topological matter. The identified multi-k magnetic states in such by now prototypical example of magnetic topological materials can have repercussions for exploring novel topological phases in magnetic materials, and may offer new possibilities for the engineering of the novel modes of magnetization and chirality switching in Mn_3Sn and the related materials for potential applications in topological and antiferromagnetic spintronics. § ACKNOWLEDGMENTS X.W. acknowledges V. Petříček for the fruitful discussions on the single-crystal structure refinement with Jana2006, and J. M. Perez-Mato for the help on the use of the Bilbao Crystallographic Server. X.W. acknowledges the financial support from the China Scholarship Council. F.Z. and J.S. acknowledge the funding supports from the HGF–OCPC Postdoctoral Program. We acknowledge S. Mayr for assistance with the crystal orientation, and T. Herrmannsdoerfer and S. Chattopadhyay for their help in the preliminary characterisations of our single-crystal samples. Y.S. acknowledges T. Brückel, J. Kübler and D. Khalyavin for valuable discussions. This work is based on the neutron diffraction experiments performed at DNS (MLZ, Garching), HEiDi (MLZ, Garching), IN12 (ILL, Grenoble). D23 (ILL, Grenoble) and FLEXX (HZB, Berlin) neutron instruments. W.F. acknowledges the National Key R&D Program of China (Grant Nos. 2022YFA1403800 and 2022YFA1402600) and the National Natural Science Foundation of China (Grant Nos. 12274027 and 11874085). Y.M. and W.F. acknowledge the funding under the Joint Sino-German Research Projects (Chinese Grant No. 12061131002 and German Grant No. 1731/10-1) and the Sino-German Mobility Programme (Grant No. M-0142). Y.M. gratefully acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - TRR 288 - 422213477 (project B06). M.H. acknowledges the National Natural Science Foundation of China (Grant No. 11904040) and the Chongqing Research Program of Basic Research and Frontier Technology, China (Grant No. cstc2020jcyj-msxmX0263). The authors declare that they have no competing interests. Y.S. conceived and supervised the project. W.F. supervised the work on DFT calculations. X.W. carried out the single-crystal growth and characterisation of physical properties with the supports from C.Y. and Yo.S. X.M. and M.H. carried out the electric transport measurements. X.W., F.Z. and Y.S. carried out the neutron diffraction experiments with the supports from M.M., J.S., T.M., W.S., K.S., E.R. and J.X. X.Y. and W.F. carried out the DFT calculations with the supports from Y.M. and S.B. X.W. and Y.S. analysed and interpreted the results with the supports from Z.F., W.F., G.R. X.W. and Y.S. wrote the manuscript with the contributions from all authors. Correspondence and requests for materials should be addressed to Y.S. or W.F. § APPENDIX A: DETAILS ABOUT THE CHEMICAL COMPOSITION ANALYSIS OF THE AS-GROWN MN_3SN SINGLE CRYSTALS The chemical composition of the as-grown Mn_3Sn single crystals was determined by a combined study of energy dispersive X-ray analysis (EDX) and single-crystal neutron diffraction. 
The EDX results show that the average chemical composition of our as-grown crystals is Mn_3Sn_1.033, which is close to the stoichiometric composition. The slight excess of Sn is due to the residual Sn flux on the sample surface. Below T_1 = 280 K, as demonstrated in our polarised neutron diffraction experiments, the room-temperature non-collinear 𝐤 = 0 antiferromagnetic order has completely disappeared in our studied samples. In addition, we have shown that, in the lock-in phase in the temperature range of 200-240 K, the magnetic satellite reflections are merged together and, meanwhile, are well separated from the nuclear Bragg reflections. Therefore, we have collected 707 nuclear Bragg reflections at 210 K at HEiDi <cit.> for the determination of the crystalline structure and chemical composition of the as-grown Mn_3Sn single crystals. The structure refinement was performed with the crystallographic program Jana2006, and the refinement result is listed in Tab.<ref>. The refined weighted R_w, at 2.64%, is very good. The refined chemical composition is Mn_3.012Sn_0.988, which is close to the stoichiometric composition and also clearly confirms the slight excess of Mn. Since no structural phase transition at low temperatures is found for this compound, we also use the nuclear structure obtained at 210 K as the input for the structural refinement at other temperatures. The calculated squared structure factors from the Jana2006 refinement versus the measured ones are shown in Fig.<ref>. It is well known that Mn and Sn have neutron scattering lengths of opposite sign, which gives a large contrast and makes the precise determination of the site-specific composition by neutron diffraction possible. § APPENDIX B: DETAILS ABOUT THE FLIPPING-RATIO CORRECTION OF THE POLARISED NEUTRON DIFFRACTION DATA The flipping-ratio correction was made for the polarised neutron diffraction data taken at IN12. For both room-temperature magnetic structure models, Cm'cm' or Cmc'm', the (0,0,2) reflection is a pure nuclear reflection that has no magnetic contributions. Therefore, it is possible to check the flipping ratio by measuring the intensities of the xnsf and xsf channels of this reflection, which reflect the polarisation rate of the incident polarised neutrons and the efficiency of the polarisation analysers in the employed experimental condition. The measured flipping ratio R = (I_nsf - I_nsf_bg)/(I_sf - I_sf_bg) for (0,0,2) at 305 K is about 20, as can be derived from Fig.<ref>, which is comparable to the flipping ratio measured on a reference sample of graphite. This indicates that there is no neutron depolarisation effect in the studied sample at room temperature. Note that I_nsf_bg and I_sf_bg are the respective background intensities in the nsf and sf channels. By applying the flipping ratio of the (0,0,2) reflection, we could correct the polarised neutron diffraction data over a wide temperature range. The flipping-ratio correction can be performed via the following equations for each polarisation channel: I_i^corr,nsf = I_i^nsf + (I_i^nsf - I_i^sf)/(R_i - 1) and I_i^corr,sf = I_i^sf - (I_i^nsf - I_i^sf)/(R_i - 1), where i = x, y, z and R_i is the measured flipping ratio in the corresponding polarisation channel. § APPENDIX C: TEMPERATURE DEPENDENCE OF THE (1,1,0) REFLECTION The polarised neutron diffraction was also carried out on the single crystals that are (h,h,l) oriented in the horizontal scattering plane.
As shown in Fig.<ref>, a magnetic phase transition was observed around T_1 from the temperature dependence of the (1,1,0) reflection in the xsf and xnsf channels, consistent with the previous results. There is a small hysteresis upon heating and cooling, which is likely due to a combined effect from the different temperature changing rates and a slight composition inhomogeneity in the measured sample, but not an indication of a possible structural transition, as our neutron diffraction study has shown that the structure of Mn_3Sn at 210 K is the same as that at room temperature. § APPENDIX D: SAMPLE DEPENDENCE: COMPARISON OF THE PHASE TRANSITION AT T_1 BETWEEN OUR SAMPLES AND A BRIDGMAN-METHOD GROWN CRYSTAL There exists an apparent sample dependence for the phase transition at T_1 among single crystals grown by different methods <cit.>. A detailed comparison of the same Q-scans along the [1,0,L] direction between our flux-method grown samples and a Bridgman-method grown crystal <cit.> is illustrated in Fig.<ref>. It can be clearly noticed that at 260 K, the intensity of the (1,0,0) reflection in our studied sample is about 51000 counts/10s, and that of the magnetic satellite reflections in the incommensurate phase is almost 14000 counts/10s, leading to an intensity ratio of about 3.64. In comparison, for the Bridgman-method grown crystal, the intensities for the (1,0,0) reflection and the corresponding magnetic satellites are 71000 and 700 counts/minute, respectively, thus giving rise to a ratio of 101.43 at 260 K. The extreme weakness of these magnetic satellite reflections is clear evidence that the phase transition at T_1 in this Bridgman-method grown crystal only occurs in a very small volume fraction and that a significantly large portion of the crystal still remains in the high-temperature 𝐤 = 0 inverse triangular antiferromagnetic order phase. In contrast, in our careful polarised neutron diffraction experiments, the magnetic components of the (1,0,0) and (1,1,0) reflections, which are characteristic of the high-temperature 𝐤 = 0 phase, are seen to completely disappear below T_1, thus unambiguously demonstrating that the phase transition at T_1 in our samples is thorough and complete. Furthermore, in the lock-in phase at 220 K in our sample, the two magnetic satellites are merged together into a single sharp reflection that has a comparable intensity to that of (1,0,0). However, in the Bridgman-method grown crystal the two magnetic satellites do not completely coincide with each other at 220 K, which is likely due to inhomogeneity in its chemical composition. The sample dependence of physical properties among Mn_3Sn crystals grown with different preparation methods can be explained based on the Mn-Sn binary phase diagram as shown in Fig.<ref> <cit.>. It can be seen that the phase of Mn_3Sn is stabilised only when excess Mn is present. For a crystal grown from the Sn-rich self-flux, its chemical composition should be very close to the stoichiometric one, so that the intrinsic Mn excess is minimised in the sample. Therefore, the phase diagram can ensure that the samples grown from the Sn flux method, even by different groups, still have similar chemical compositions, and would more likely show the phase transition at T_1 <cit.>. However, for the crystals grown from the Bridgman method, their actual chemical compositions are highly dependent on the compositions of the starting materials, the growth speed and the cooling rate.
A disadvantage of the flux method for the growth of Mn_3Sn is the very narrow growth window of 884-984^∘C, with the risk of the precipitation of the ferromagnetic Mn_2Sn. Strictly speaking, Mn_3Sn does not melt congruently, so the Bridgman method may not be the method of choice if the exact stoichiometry is important. § APPENDIX E: NEUTRON POLARISATION ANALYSIS RESULTS OF THE MAGNETIC SATELLITE REFLECTIONS (1,0,1-Δ) AT 180 K Neutron polarisation analysis is also carried out for the magnetic satellite reflections of the (1,0,1-δ) type in the xsf, zsf and znsf channels in the double-k modulated incommensurate phase at 180 K, as shown in Fig.<ref>. It is anticipated that for these magnetic satellite reflections double peaks would appear in the zsf channel and a single peak in the znsf channel. This is because the angle θ_3 between the scattering vector Q_3 = (1,0,∼0.9) and the [1,0,0] direction is about 45^∘, and we will have the following equations: σ_z^sf = 𝐌_⊥ y^2 = |𝐌^[1,0,0]|^2 sin^2θ_3 + |𝐌^[0,0,1]|^2 cos^2θ_3 = 1/2(|𝐌^[1,0,0]|^2 + |𝐌^[0,0,1]|^2) and σ_z^nsf = 𝐌_⊥ z^2 = |𝐌^[1,2,0]|^2. In this case, both the magnetic components in the ab plane and along the c direction contribute to the zsf channel, and in the znsf channel there will always be a single peak since the z direction is perpendicular to the scattering plane, which is parallel to the [1,2,0] direction in our scattering geometry. § APPENDIX F: INDEXING OF THE INTER-MODULATION HIGH-ORDER HARMONIC The modulation wavevector of the high-order harmonic 𝐤_3 is obtained over a wide temperature range from the high-resolution Q-scans measured at D23. In the previous reports <cit.>, 𝐤_3 was indexed as the third order of 𝐤_T (i.e., 3𝐤_T). Our high-resolution measurements clearly indicate that this is incorrect. As shown in Fig.<ref>, among various possibilities, the high-order harmonic 𝐤_3 could only be indexed as the inter-modulation high-order harmonic of the type of 2𝐤_L+𝐤_T. Such an inter-modulation high-order harmonic is a characteristic feature of a multiple-𝐤 magnetic structure. § APPENDIX G: DETAILED ELECTRONIC BAND STRUCTURES OF MN_3SN OBTAINED FROM THE DFT CALCULATIONS The electronic band structures of Mn_3Sn were comprehensively studied based on the first-principles DFT calculations. The calculated electronic band structures and the Fermi level E_F of the stoichiometric Mn_3Sn, in which both spin-orbit coupling (SOC) and the room-temperature 𝐤 = 0 magnetic order were included in the calculations, are shown in Fig.<ref>. The room-temperature 𝐤 = 0 magnetic structure is based on our own single-crystal neutron diffraction refinements in Tab.<ref>. For comparison, the calculated electronic band structures, in which only SOC was included in the calculations but not the room-temperature 𝐤 = 0 magnetic order, are shown in Fig.<ref>. A significant impact of magnetism on the dispersion and degeneracy of the band structure near E_F can be clearly seen in this comparison. Furthermore, as shown in Fig.<ref>, the Fermi surfaces in the (h,0,l), (h,k,0) and (h,h,l) reciprocal-space planes for the weakly doped case as in our studied sample, in which E_F is shifted up by 81 meV, are also plotted for comparison with Fig.5(d-f) in the main text. No Fermi surface nesting conditions could be realised in these reciprocal-space planes.
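The Fermi-surface nesting argument used above can also be illustrated with a generic numerical sketch. The toy quasi-one-dimensional dispersion below is not the Mn_3Sn band structure; it only demonstrates how a nesting function ξ(q) = Σ_k δ(ε_k)δ(ε_{k+q}), evaluated with Gaussian-broadened delta functions on a k-grid, develops a peak at the wavevector connecting nearly parallel Fermi sheets. This is the kind of diagnostic used when matching a candidate nesting vector to a magnetic modulation wavevector.

```python
import numpy as np

# Toy quasi-1D band on a 2D Brillouin-zone grid (illustrative only):
# nearly flat, parallel Fermi sheets give a strong nesting peak at q_z = 2 k_F.
nk = 256
kx, kz = np.meshgrid(np.linspace(-np.pi, np.pi, nk, endpoint=False),
                     np.linspace(-np.pi, np.pi, nk, endpoint=False), indexing="ij")
t, t_perp, mu = 1.0, 0.02, -0.3
eps = -t * np.cos(kz) - t_perp * np.cos(kx) - mu   # band energy measured from E_F

# Gaussian-broadened delta function confines the weight to the Fermi surface.
sigma = 0.05
fs = np.exp(-(eps / sigma) ** 2)

# Nesting function xi(q) = sum_k delta(eps_k) delta(eps_{k+q}),
# evaluated for all q at once as an FFT-based autocorrelation over the BZ.
xi = np.real(np.fft.ifft2(np.abs(np.fft.fft2(fs)) ** 2))

# Cut along q = (0, q_z) and ignore the trivial q -> 0 peak.
qz_axis = 2 * np.pi * np.arange(nk) / nk
cut = xi[0, :]
mask = (qz_axis > 0.5) & (qz_axis < np.pi)
qz_star = qz_axis[mask][np.argmax(cut[mask])]
print(f"nesting vector q_z* ~ {qz_star:.2f} rad; expected 2 k_F ~ {2 * np.arccos(-mu / t):.2f} rad")
```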
The calculated density of states (DOS) of the relevant electronic bands near E_F is plotted in Fig.<ref>, in which a sharp local maximum in the DOS due to the presence of the K-M-K flat band can be clearly seen at the shifted Fermi level E_F+81 meV.
http://arxiv.org/abs/2306.02153v1
20230603164421
Acoustic Word Embeddings for Untranscribed Target Languages with Continued Pretraining and Learned Pooling
[ "Ramon Sanabria", "Ondrej Klejch", "Hao Tang", "Sharon Goldwater" ]
cs.CL
[ "cs.CL", "cs.LG", "cs.SD", "eess.AS" ]
Acoustic Word Embeddings for Untranscribed Target Languages with Continued Pretraining and Learned Pooling Ramon Sanabria, Ondrej Klejch, Hao Tang, Sharon Goldwater July 31, 2023 ================================================================================================================================================================================== Acoustic word embeddings are typically created by training a pooling function using pairs of word-like units. For unsupervised systems, these are mined using k-nearest neighbor (KNN) search, which is slow. Recently, mean-pooled representations from a pre-trained self-supervised English model were suggested as a promising alternative, but their performance on target languages was not fully competitive. Here, we explore improvements to both approaches: we use continued pre-training to adapt the self-supervised model to the target language, and we use a multilingual phone recognizer (MPR) to mine phone n-gram pairs for training the pooling function. Evaluating on four languages, we show that both methods outperform a recent approach on word discrimination. Moreover, the MPR method is orders of magnitude faster than KNN, and is highly data efficient. We also show a small improvement from performing learned pooling on top of the continued pre-trained representations. Index Terms: acoustic word embeddings, semi-supervised learning, continued pre-training, low-resource languages, unwritten languages § INTRODUCTION Acoustic Word Embeddings (AWE) are vector representations of variable length speech segments (i.e., words) <cit.>. Ideally, AWEs abstract away from non-linguistic information such as speaker gender and voice quality, so that instances of the same word cluster together in the embedding space. AWEs can be applied to a wide variety of search-intensive tasks such as query-by-example <cit.> or semantic speech retrieval <cit.>, as well as in unsupervised word segmentation and clustering systems <cit.>—one step toward creating speech technology for unwritten languages. Since our eventual goal targets this task, this paper focuses on models that do not rely on transcribed speech in the target language—the unsupervised setting—and we also assume limited unlabeled target language data (up to 50 hours). Most previous work on constructing unsupervised AWEs has approached the problem using learned pooling, where positive training pairs of similar speech segments (assumed to be the same word or n-gram) are used to learn a pooling function, based on a reconstruction <cit.> or contrastive <cit.> objective. Despite good AWE quality, these methods rely on identifying positive training pairs from a corpus using k-nearest-neighbors methods <cit.>. Even with approximate search, such methods are computationally and memory intensive.
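To make the cost of this pair-mining step concrete, the sketch below is a deliberately naive, exact nearest-neighbour search over pooled segment embeddings. It is not the procedure of the cited systems, which typically rely on approximate search libraries, but it shows why materialising the full similarity matrix scales quadratically with the number of segments; all array sizes and names are illustrative.

```python
import numpy as np

def mine_positive_pairs(embeddings, k=1):
    """Exact k-nearest-neighbour mining over L2-normalised segment embeddings.

    Returns (i, j) index pairs where j is among the k most cosine-similar
    segments to i. Building the full (N, N) similarity matrix makes this
    quadratic in time and memory, which is what makes KNN-based pair mining
    expensive at corpus scale.
    """
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T                      # (N, N) cosine similarities
    np.fill_diagonal(sim, -np.inf)     # a segment is not its own neighbour
    neighbours = np.argsort(-sim, axis=1)[:, :k]
    return [(i, int(j)) for i in range(len(x)) for j in neighbours[i]]

# Toy usage with random vectors standing in for segment embeddings.
rng = np.random.default_rng(0)
segments = rng.normal(size=(1000, 256))
pairs = mine_positive_pairs(segments, k=2)
print(len(pairs), "candidate positive pairs")
```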
Recently, an alternative method for constructing AWEs was proposed <cit.>, which does not rely on positive samples nor complex pooling mechanisms, but simply mean-pooling the frame-level representations from a pre-trained self-supervised model. The authors show that HuBERT <cit.> representations (pre-trained on English) work well for English AWEs, but less well on other languages, presumably because the model is not adapted to those target languages. In this work, we propose solutions to the aforementioned limitations and perform experiments to directly compare the two methods individually and in combination. Specifically, for learned pooling, we show that high-quality positive pairs can be found efficiently by transcribing the target language data using a multilingual phone recognizer (MPR) trained on high-resource languages, then selecting matching phone n-grams. For average pooling of a pre-trained model, we show how to adapt English HuBERT to the target language using continued pretraining <cit.>. This entails an extra step of reconstructing HuBERT's k-means clusters, not needed for continued pretraining of wav2vec 2.0 <cit.>, but is worth doing since HuBERT has been shown to be more effective at word discrimination <cit.>. We evaluate our AWE representations using the same-different word discrimination task (same-diff) <cit.> on four languages: French (which we use as a development language to select hyper-parameters and run analyses), Mandarin, German, and Xitsonga (with the latter three acting as unseen test languages from a variety of language families). We experiment with continued pretraining, learned pooling using a contrastive objective, and their combination. Our experiments show that: * Using continued pretraining with 50 hours of target language data improves the performance of average-pooled HuBERT representations considerably, and most of the benefit is achieved with only 20 hours of data; * For the contrastive-learning model, using MPR to identify positive pairs yields a large number of high-quality pairs, resulting in better word discrimination scores than a previous approach <cit.> while being orders of magnitude faster; * With 50h of data, continued pretraining and contrastive learning have similar performance, but contrastive learning is more data-efficient, and achieves nearly the same results with only one hour of target language data; * Combining both methods yields a small further improvement. § TASK OVERVIEW In this section, we formally define our task and experimental framework. Suppose we have two utterances x^1 and x^2, both being sequences of frames. The task (same-diff) is to tell whether two word segments x^1_s:t and x^2_s':t' belong to the same word type or not. In this paper, we focus on an approach that uses self-supervised speech representations. We assume we have a self-supervised model f to compute representations of both utterances, i.e., z^1 = f(x^1) and z^2 = f(x^2). We then use a pooling function g to compute a fixed-dimensional vector, also known as an AWE, to represent a word segment. Specifically, we compute g(z^1_s:t) to represent x^1_s:t and g(z^2_s':t') for x^2_s':t'. The question of whether two word segments x^1_s:t and x^2_s':t' are of the same word type or not becomes measuring the similarity between g(z^1_s:t) and g(z^2_s':t') compared to other AWEs. The cosine similarity is typically used, and the mean average precision (MAP), i.e., the area under the ROC curve when we sweep a threshold, is used as the evaluation metric. 
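To make the task setup above concrete, the following is a minimal sketch, not taken from the paper, of how a mean-pooled AWE and the cosine similarity between two word segments could be computed. The frame-level representations z are assumed to be given as NumPy arrays (e.g. HuBERT hidden states), and the function names are our own.

```python
import numpy as np

def mean_pool_awe(z: np.ndarray, start: int, end: int) -> np.ndarray:
    """Mean-pool the frame-level representations z[start:end] (frames x dim)
    into a single fixed-dimensional acoustic word embedding g(z_{s:t})."""
    return z[start:end].mean(axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity used to compare two AWEs in the same-diff task."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Toy usage: two word segments from two utterances, with random stand-ins
# for 768-dimensional frame-level features.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(120, 768))      # utterance 1: 120 frames
z2 = rng.normal(size=(95, 768))       # utterance 2: 95 frames
awe1 = mean_pool_awe(z1, 10, 40)      # segment x^1_{s:t}
awe2 = mean_pool_awe(z2, 50, 85)      # segment x^2_{s':t'}
print(cosine_similarity(awe1, awe2))
```

Sweeping a threshold over such similarities for all segment pairs, and checking whether same-word pairs score above it, is the basis of the MAP evaluation described above.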
Prior work <cit.> has shown strong results when f is an off-the-shelf HuBERT model trained on English and g is simple averaging. In this work, we explore the setting where we have untranscribed speech of a target language to continue pre-training the self-supervised model f. We also explore pooling functions with trainable parameters, such as in <cit.>. We follow <cit.> and train the pooling function g with a contrastive loss. Specifically, we use NTXent <cit.>, which is defined as

ℓ_NTXent(c, c^+, N) = -log [ exp(cos(c, c^+)/τ) / ∑_{c^- ∈ N} exp(cos(c, c^-)/τ) ],

where c and c^+ are both AWEs, c^+ is a positive example for c, N is a set of negative examples for c, and τ is a temperature hyperparameter. This loss requires mining positive and negative pairs. For example, in <cit.>, nearest neighbors are considered as positive examples, leaving others as negative examples. Nearest neighbors are known to be slow to compute when the number of examples becomes large. As we will see in later sections, we will explore a different approach to mining positive and negative pairs.

§ CONTINUED PRE-TRAINING

In previous work, Sanabria et al. <cit.> showed that for constructing AWEs on different languages, HuBERT representations (which are trained only on English) perform better than both English-trained wav2vec 2.0 <cit.> and multilingually-trained XLS-R <cit.>. We extend their work to the setting where some untranscribed target language data is available. When there is a mismatch between training and test conditions, a common approach is to continue pretraining self-supervised models on the test condition <cit.>, which motivates us to continue pretraining HuBERT on the untranscribed target language. The task of HuBERT pretraining is masked prediction, where parts of the input are masked, and the goal is to predict the quantized speech frames of the masked parts. To perform continued pretraining on a different language, it is not immediately obvious what training targets to use. Similar to the original HuBERT, we run k-means on the hidden vectors from one of the HuBERT layers, and use the cluster IDs of hidden vectors as targets (described further in Section <ref>). We then continue to pretrain HuBERT on the target language, and finally (following <cit.>), mean-pool the hidden vectors to create the AWE.

§.§ Experimental Setting

We evaluate our approach with the same-diff word discrimination task <cit.> on French, German, Mandarin, and Xitsonga. The French, German, and Mandarin sets are from Task 2 of Zerospeech 2017 <cit.>, and Xitsonga is from NCHLT <cit.>. Following <cit.>, we only use words that are at least 5 characters (or 2 characters, for Mandarin) and 0.5 seconds long, and we report mean average precision (MAP) scores.[Note that Xitsonga and Mandarin are tonal languages and we do not explicitly model tones or consider them during evaluation (although HuBERT may implicitly capture some tonal information). In practice, there are very few pairs of words in our data that only differ in tones, so they should not have much effect in the evaluation.] Table <ref> summarizes the statistics of each test set.[The list of test words, along with models and other materials used in our experiments, are available at <https://github.com/ramonsanabria/awe_ssl>. ] In contrast to <cit.>, we avoid pretraining on the set we evaluate on, and sample 50 hours of untranscribed data from multilingual LibriSpeech <cit.> for French and German, AIShell <cit.> for Mandarin, and a separate set in NCHLT <cit.> for Xitsonga.
We use randomly sampled utterances to increase speaker diversity, which has been shown to be important for pre-trained models <cit.>. We use HuBERT BASE,[<https://github.com/facebookresearch/fairseq/tree/main/examples/hubert>] a 12-layer Transformer, implemented in Fairseq. We feed the untranscribed data from the target language to HuBERT, and run k-means with 500 clusters on the hidden vectors from the 10th layer of HuBERT. We use the cluster IDs of each frame as targets for continued pretraining.[We also explored using k-means centroids on MFCCs or hidden vectors of HuBERT on English, but using the k-means centroids on target languages consistently outperformed other units.] We observe that performance stabilizes after epoch 15, so we stop training at epoch 15 for all languages. After continued pretraining, we average the hidden vectors from the 9th layer of HuBERT to construct the AWEs. The hyperparameters are tuned on the French dataset, and we will evaluate how well they generalize to other languages.

§.§ Results

Figure <ref> shows the results for continued pretraining (HuBERT CP) on French, German, Mandarin, and Xitsonga. We compare to the original HuBERT (HuBERT EN) and to pretraining from scratch on a particular target language (HuBERT LANG). We observe that continued pretraining substantially outperforms both alternatives. In addition, in three of the four cases, pretraining from scratch on the target languages underperforms the HuBERT model pretrained on English. These results further support the claim in <cit.> that by pre-training on 960 hours of English data, HuBERT is able to learn a considerable amount of language-independent information that improves AWEs beyond what is learned from a smaller amount of target language data alone. However, they also indicate that a relatively small amount of target language data can successfully adapt the pre-trained English model and improve this type of mean-pooled AWEs.
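To make the target-generation step in the experimental setting above more concrete, here is a minimal sketch of how frame-level pseudo-labels for continued pre-training could be produced by clustering hidden vectors from a chosen HuBERT layer. It is an illustration under our own assumptions rather than the exact pipeline: the features are assumed to be precomputed per utterance, and the use of scikit-learn's MiniBatchKMeans and the function names are our own choices.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def fit_kmeans_targets(features_per_utt, n_clusters=500, seed=0):
    """Fit k-means on frame-level hidden vectors (e.g. HuBERT layer 10)
    pooled over all target-language utterances."""
    all_frames = np.concatenate(features_per_utt, axis=0)   # (total_frames, dim)
    km = MiniBatchKMeans(n_clusters=n_clusters, random_state=seed)
    km.fit(all_frames)
    return km

def frame_targets(km, utt_features):
    """Assign each frame of one utterance to its nearest cluster; the cluster
    IDs then serve as masked-prediction targets for continued pre-training."""
    return km.predict(utt_features)                          # (n_frames,) ints

# Toy usage with random stand-ins for layer-10 hidden vectors.
rng = np.random.default_rng(0)
utts = [rng.normal(size=(rng.integers(80, 200), 768)) for _ in range(10)]
km = fit_kmeans_targets(utts, n_clusters=50)   # 500 in the real setup; 50 for toy data
print(frame_targets(km, utts[0])[:10])
```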
§ LEARNED POOLING

As we have seen, AWEs can perform well by using simple pooling functions, such as mean-pooling. To improve the system further, we study the option of learning a pooling function for constructing AWEs. As we have detailed in Section <ref>, the goal is to learn a pooling function g, such that g(z_s:t) represents the segment between time s and t given the frame-level representation z of an utterance. Adhering to <cit.>, we focus on training the pooling function g with a contrastive loss — specifically, NTXent in (<ref>). This loss function requires positive and negative examples that either need to be obtained from labelled data or to be mined with unsupervised approaches. Prior unsupervised work uses nearest neighbor search for mining contrastive examples, where positive examples are taken from the near neighbors and negative examples are taken from the complement. This approach can achieve a strong MAP, but nearest neighbor search is slow to compute and does not scale well when the data set becomes large. We propose to use a multilingual phone recognizer (MPR) to label the untranscribed data with timings for each phone segment. Two speech segments are considered as positive pairs if they have the same phone sequence. Though the MPR system requires additional compute and data to train, we argue that the requirement is not as stringent as it seems, especially as pretrained models on high-resource languages are becoming more widely available; the use of HuBERT is one example, and the use of an external phone recognizer for unsupervised ASR in <cit.> is another.

§.§ Experimental Setting

We compare two approaches to mining contrastive examples for training the pooling function: the k-nearest neighbor (KNN) approach used in previous work, and the proposed MPR approach. For the nearest neighbor search, we first collect a set of random speech segments, ranging from 80 to 310 ms and being at least 80 ms apart. We represent each speech segment with mean-pooled HuBERT representations, and build an approximate nearest neighbor graph <cit.> using dot-product as the distance metric with FAISS <cit.>. The MPR system is a hybrid model based on a time-delay neural network <cit.> trained with lattice-free maximum mutual information <cit.>. The hybrid model is trained on English, Spanish, Russian, Polish, Portuguese, Bulgarian, Czech, Hausa, Swedish, and Ukrainian from GlobalPhone <cit.>—so it has not seen any of the languages that we evaluate the AWEs on. We collect speech segments with 2 to 5 phones. We define positive examples as the segments that have the same phone sequences, and negative examples as the complement. We sample a maximum of 300 n-gram instances for every n-gram type due to hard-drive limitations. As opposed to the wav2vec 2.0 model used in <cit.>, we use HuBERT throughout the experiments due to its superior performance <cit.>. We use the same network architecture as in <cit.> to implement the pooling function. The network consists of a LayerNorm <cit.>, a 1D convolution, and a transformer layer with 4 attention heads (including position embeddings), trained with a learning rate of 0.0001. The model finally max pools the frame-level representations to create the AWE. We train the pooling function (while fixing HuBERT) with a batch size of 150 for 5 epochs with a maximum of 1000 iterations for each epoch.

§.§ Results and analysis

Results for the MPR approach and the KNN baseline are shown in Figure <ref>, along with an oracle approach that uses ground truth n-grams from forced alignment (GT). We evaluate the approaches on 5 hours of untranscribed French, with one million positive pairs. We find that our MPR approach performs nearly as well as using ground truth n-grams, despite no training on the target language, and it works considerably better than the nearest neighbors approach. Moreover, for this 5h dataset, the training pairs took only five minutes to extract using MPR, versus 12 hours for the KNN approach. The size of the speech segments we use for mining the positive and negative examples can potentially have a significant impact on the results. We perform a controlled experiment on the same 5 hour set with speech segments consisting of various numbers of forced-aligned phones to study the performance of the learned pooling function. Results are shown in Figure <ref>. In general, larger segment sizes perform better. Speech segments with at least 5 phones perform similarly, and we will use this setting for the rest of the experiments.
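Before comparing the methods, the following sketch illustrates the two ingredients of the learned-pooling recipe described above: forming positive pairs from segments that share a recognized phone n-gram, and scoring an anchor against its positive and a set of negatives with the NTXent objective of (<ref>). This is our own illustration under simplifying assumptions (phone transcripts given as tuples, AWEs as PyTorch tensors, helper names ours), not the training code used in the experiments.

```python
import itertools
import torch
import torch.nn.functional as F

def mine_positive_pairs(segments):
    """Group segments by their recognized phone sequence; any two segments
    sharing the same phone n-gram (e.g. from MPR output) form a positive pair."""
    by_ngram = {}
    for seg_id, phones in segments:               # phones: tuple like ('s', 'a', 'n')
        by_ngram.setdefault(tuple(phones), []).append(seg_id)
    pairs = []
    for ids in by_ngram.values():
        pairs.extend(itertools.combinations(ids, 2))
    return pairs

def ntxent_loss(anchor, positive, negatives, tau=0.1):
    """NTXent loss for one anchor AWE c, its positive c+, and negatives N."""
    pos = torch.exp(F.cosine_similarity(anchor, positive, dim=0) / tau)
    neg = torch.exp(F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1) / tau).sum()
    return -torch.log(pos / neg)

# Toy usage: three segments, two of which share the phone trigram ('s','a','n').
segs = [(0, ('s', 'a', 'n')), (1, ('s', 'a', 'n')), (2, ('t', 'o', 'k'))]
print(mine_positive_pairs(segs))                  # [(0, 1)]
anchor, positive = torch.randn(256), torch.randn(256)
negatives = torch.randn(8, 256)                   # AWEs of 8 negative segments
print(ntxent_loss(anchor, positive, negatives).item())
```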
§ COMPARING AND COMBINING METHODS

In the previous sections, we investigated configurations of each method and showed that our techniques achieve good performance. Now we ask: how do the two methods compare to each other, and can they be effectively combined? We also include results of the iterative nearest neighbor (IKNN) approach proposed in <cit.> as a baseline. We use their implementation[<https://gitlab.cognitive-ml.fr/ralgayres/ssemodel>] but reduce the batch size from 250 to 150 to fit on a 12 GB GeForce GTX 1080 Ti. Instead of training with the test set as in <cit.>, we use a separate training set to be more comparable to our own approach. Because IKNN requires large amounts of memory, we were not able to run it on the full 50h training set, so we report results for 20h of training data for IKNN and also for our own best system. We train our contrastive pooling model on 5-gram MPR positive pairs based on the results observed in Figure <ref>. Table <ref> presents the results on all test languages. We observe that learned pooling with pre-trained (English) HuBERT features outperforms CP on all four languages, and that a small further improvement is obtained by combining the two approaches. Results are almost as good with only 20h of data, and considerably outperform the comparison approach. Since we are interested in a low-resource setting, we finally explore the data efficiency of each method by reducing the amount of training data used. We create subsets of different sizes by randomly sampling utterances from the 50-hour dataset. We use the same data to train all components (CP, k-means, and the learned pooling). Since longer n-gram pairs may be limited for some of the smaller data sizes, we use all 2-5 grams in all settings. Figure <ref> (top) presents the results. We observe that while all models reach a similar performance when trained on 50 hours of data, learned pooling achieves nearly all of the gain with only about three hours of data, indicating far greater data efficiency. This result accords with previous work: e.g., <cit.> showed that for three learned pooling methods, performance had begun to level off by 50k training pairs (the maximum they tested). It is worth noting that until <cit.>, it was assumed that systems should be trained on ground truth words or word-like discovered terms, leading to far fewer pairs than the n-grams used by <cit.> and this paper—thus, many systems were trained on only 5k-20k pairs <cit.>. With our approach, one hour of data yields about 1M pairs, and 10 minutes yields 24k pairs. Inspired by the results above, we further explore the data requirements of the learned pooling technique by looking at very low data regimes and the effects of speaker diversity. We sample from 100 to 10k positive training pairs, which are either randomly sampled from the 50h multi-speaker dataset, or from a single speaker. Figure <ref> (bottom) shows the results, indicating improvements over the baseline with just a few hundred training pairs, and little difference between single-speaker and multi-speaker training. These results suggest that the pre-trained HuBERT features are already doing a good job of speaker normalization, and only a relatively simple learned pooling function is needed to improve over mean-pooling. This contrasts with earlier work that learned AWEs using MFCC input features, where training pairs from multiple speakers were needed to help overcome speaker differences <cit.>.

§ CONCLUSIONS

We propose two techniques to adapt English self-supervised acoustic word embedding representations to a target language with up to 50 hours of unlabeled data. We first propose to adapt English frame-level representations to a target language by continued pretraining (CP).
Our results using mean-pooling show that CP is highly effective and can outperform the original model with only 10 hours of data. Next, we show that one can achieve similar performance by training a pooling mechanism on top of the self-supervised representations, using contrastive learning with positive phone n-gram pairs obtained by a multilingual phone recognizer. The MPR method is fast and returns a large number of high-quality pairs, leading to better word discrimination than a previous approach. It is also extremely data-efficient, requiring only a few hours of target language data to reach its best results, and outperforming the previous approach with less than 1h of data. Finally, we show that the two methods can be combined, leading to the best overall results.
http://arxiv.org/abs/2306.02108v1
20230603131617
Random matrix theory and the loss surfaces of neural networks
[ "Nicholas P Baskerville" ]
math-ph
[ "math-ph", "cs.LG", "math.MP", "math.PR" ]
University of Bristol

Random matrix theory and the loss surfaces of neural networks

Nicholas P. Baskerville, MMath MA (Cantab), CMath MIMA

July 31, 2023

A dissertation submitted to the University of Bristol in accordance with the requirements for award of the degree of Doctor of Philosophy in the Faculty of Science, School of Mathematics.

CHAPTER: ABSTRACT

Neural network models are one of the most successful approaches to machine learning, enjoying an enormous amount of development and research over recent years and finding concrete real-world applications in almost any conceivable area of science, engineering and modern life in general. The theoretical understanding of neural networks trails significantly behind their practical success and the engineering heuristics that have grown up around them. Random matrix theory provides a rich framework of tools with which aspects of neural network phenomenology can be explored theoretically. In this thesis, we establish significant extensions of prior work using random matrix theory to understand and describe the loss surfaces of large neural networks, particularly generalising to different architectures. Informed by the historical applications of random matrix theory in physics and elsewhere, we establish the presence of local random matrix universality in real neural networks and then utilise this as a modeling assumption to derive powerful and novel results about the Hessians of neural network loss surfaces and their spectra. In addition to these major contributions, we make use of random matrix models for neural network loss surfaces to shed light on modern neural network training approaches and even to derive a novel and effective variant of a popular optimisation algorithm. Overall, this thesis provides important contributions to cement the place of random matrix theory in the theoretical study of modern neural networks, reveals some of the limits of existing approaches and begins the study of an entirely new role for random matrix theory in the theory of deep learning with important experimental discoveries and novel theoretical results based on local random matrix universality.

CHAPTER: DEDICATION AND ACKNOWLEDGEMENTS

I dedicate this thesis to Charlotte, without whose love, encouragement and support it would not have been possible. Doing a PhD essentially as a hobby was a bit deranged and I had no right to expect you to put up with the silly hours nor my doing maths on the beach or other such nonsense, but you always did. As romantic gestures go, 200-odd pages of maths is a poor substitute for a symphony, but for what it's worth, it is yours. My parents have never failed to support my endeavours, academic or otherwise, and have ever been a source of steadfast encouragement, for which I am truly grateful. This thesis is really the conclusion of something we began together in the Cambridge branch of Eat over ten years ago. I thank Jon Keating, Francesco Mezzadri and Joseph Najnudel for the years of enjoyable collaboration and for taking the punt on me and my project in the first place. Their advice and ideas were integral to the success of this research. They provided just the right amount of guidance to keep projects moving, while allowing me to develop my own ideas and approaches. I thank Diego Granziol for several years of enjoyable collaboration. Much of the second half of this thesis was influenced by our discussions and is, I am sure, much the better for it.
Thanks are due to the RMT group in Bristol for making me feel part of things, despite only being in the office once a week. I gratefully acknowledge financial support for tuition fees from the School of Mathematics in the University of Bristol. Finally, I thank my former team at GCHQ and particularly Tom L. His initial support and practical help were instrumental in getting my PhD off the ground and I shall always be grateful for the three years during which he gladly encouraged me as I juggled PhD work and our day-jobs. I also gratefully acknowledge the three years of funding for tuition fees and travel expenses that came from AR/RIF/STR and the DRE scholarship scheme.

CHAPTER: AUTHOR'S DECLARATION

I declare that the work in this dissertation was carried out in accordance with the requirements of the University's Regulations and Code of Practice for Research Degree Programmes and that it has not been submitted for any other academic award. Except where indicated by specific reference in the text, the work is the candidate's own work. Work done in collaboration with, or with the assistance of, others, is indicated as such. Any views expressed in the dissertation are those of the author. Nicholas P. Baskerville, July 31, 2023, Bristol

CHAPTER: ABBREVIATIONS

The following abbreviations are used throughout this thesis.

* NN: neural network
* ANN: artificial neural network
* DNN: deep neural network
* SGD: stochastic gradient descent
* MLP: multi-layer perceptron
* CNN: convolutional neural network
* RNN: recurrent neural network
* GAN: generative adversarial network
* RMT: random matrix theory
* GOE: Gaussian orthogonal ensemble
* LSD: limiting spectral density
* ESD: empirical spectral density
* NNSD: nearest neighbour spacing distribution
* WLOG: without loss of generality
* a.s.: almost surely

CHAPTER: NOTATION

The following notation will be used consistently throughout this thesis unless stated otherwise.

* δ_x: A Dirac δ-function centred at the point x
* μ̂_N: The empirical spectral measure of an N× N matrix
* ρ_SC: The semi-circle density
* g_μ: The Stieltjes transform of the measure μ
* R_μ: The R-transform of the measure μ
* μ⊞ν: Additive free convolution of the measures μ and ν
* I_N: The N× N identity matrix
* O(N): The orthogonal group on N× N matrices
* Δ(x⃗): The Vandermonde determinant over N symbols {x_1,…, x_N}
* 𝒩(μ, Σ): A Gaussian random variable with mean μ and covariance Σ
* ℜ z: The real part of z ∈ ℂ
* ℑ z: The imaginary part of z ∈ ℂ
* i(X): The index of an Hermitian matrix X
* μ_Haar: The Haar measure on O(N)
* 𝒪(·): Asymptotic “big-o” notation. f(x)=𝒪(g(x)) if ∃ some constant c>0 such that |f(x)| ≤ c|g(x)| for all large enough x.
* o(·): Asymptotic “little-o” notation. f(x)=o(g(x)) if f(x)/g(x)→ 0 as x→∞.
* f ∼ g: Asymptotic equivalence. f∼ g if |f(x)/g(x)| → 1 as x→∞.
* [N]: The set of integers from 1 to N: {1,2,…, N}.

CHAPTER: INTRODUCTION

In this chapter we introduce the central objects of study for this thesis, namely deep neural networks and their loss surfaces. Deep neural networks are an important sub-field of machine learning, so we begin with some introductory material and context for machine learning. We make no attempts to be exhaustive, but aim to provide a self-contained introduction, accessible for any mathematically literate reader, to the key ideas from machine learning relevant to our investigations. We will provide a rather more detailed introduction to deep neural networks specifically, again aiming to be accessible to any mathematical reader.
The reader familiar with machine learning and deep neural networks may well safely skip these introductory sections, however they do establish some conventions and points of view, which may be more or less familiar depending on the reader's background. Following these broad introductory sections, we will sharpen the focus to provide a summary of the prior literature on deep neural network loss surfaces, particular focusing on the mathematical works upon which this thesis is built. We will also take this opportunity to draw out and summarise the existing connections between deep neural network loss surfaces and random matrix theory, but an introduction to random matrix theory itself is postponed until the next chapter. We conclude this introductory chapter with a summary of the new results which make up the principal intellectual contribution of this thesis and a literature review of related work. § MACHINE LEARNING Machine learning encompasses to a great variety of areas of study and practical application in computer science, statistics, data science, engineering, economics, genomics etc. See, for example, Chapter 5 of <cit.> for a high-level summary of many applications. One could summarise the essential aspects of machine learning as: data and a model. Data could refer to traditional tabular numeric values (e.g. stock market indices or weather readings), natural language, digital imagery, digital voice recordings, internet search engine logs etc. All of these fields (and many more besides) make use of data of one form or another. Researchers and practitioners typically wish to use data they have acquired to address questions such as: * Do these data support a particular hypothesis? * What underlying structure or dynamics are suggested by the observations in these data? * Can one use past data to predict future events? * Can one algorithmically find certain interesting subsets of a dataset? None of these questions are unique to the field of machine learning. Indeed, many such questions have been asked by statisticians and physical and biological scientists for centuries. The lines between machine learning and other, as it were, traditional statistical or mathematical modeling techniques are not entirely clear. Generally speaking, a machine learning approach to a problem is driven more by the data than any particular model. Motivated by intuition, prior observations or theoretical work, a physicist would traditionally start by proposing a model for the physical system under consideration and then obtain predictions to be tested theoretically. The physical model may well contain a number of parameters, such as physical constants, which should be estimated from data, however these parameters are typically few in number and possess meaningful physical interpretations. The physicist's model is as much a tool for making useful predictions about the world as it is a tool with which the underlying physical reality may be studied. A physicist may be able to improve the predictive power of their model, say, by introducing more parameters that can be tuned to the available data, but doing so would compromise its physical foundations and degrade its explanatory power. To the machine learning practitioner, there is no tension here: data is king and, crudely speaking, a model that better fits and predicts the data is a superior model. 
The preceding description certainly does not precisely define machine learning and there are doubtless examples of machine learning applications that lie outside of what we have presented, however our focus is exclusively on neural networks which, as we shall see, fall well within the boundary of machine learning as we have presented it. In the following subsections, we will outline sub-fields within machine learning. Such is the success of deep neural networks in modern machine learning, they are to be found in use in all of these sub-fields and, in many cases, they are the best available approach. §.§ Supervised learning A very common problem in machine learning is that of constructing a model from a labeled dataset. Consider a dataset of the form {x⃗_i, y_i}_i=1^N, where x⃗_i are the data points and the y_i are the labels. The x⃗_i may have come from any source and may or may not have a natural numerical representation as column vectors in some ℝ^d, however we assume that a representation of that form has been found. In some cases, the x⃗_i may have genuine geometrical meaning, while in other cases they may simply be numerical values stacked into vectors. The labels can be categorical, in which case the problem is called classification, or continuous, in which case the problem is regression. Here are two specific examples: * x⃗_i = (#bedrooms_i, #bathrooms_i, floor area_i, latitude_i, longitude_i) and y_i = market value (£) for a set of houses in the UK. * x⃗_i = (pixel_i1, …, pixel_id) and y_i ∈{ (0,0,1), (0,1,0), (1, 0, 0)} for a set of images categorised into three disjoint classes: cat, dog and rabbit. < g r a p h i c s > A model in this context is a function f that is a good approximation to x⃗_i ↦ y_i. f should not simply be a memorisation of the pairs {(x⃗_i, y_i)}_i=1^N, since applications typically require f to be useful on other datasets {(x̂⃗̂_i, ŷ_i)}_i=1^M generated from the same underlying distribution as {(x⃗_i, y_i)}_i=1^N, or else to reveal something of the underlying distribution. f can be deterministic or stochastic and must be computable by some algorithm, preferably quite efficiently, though this is not a universal necessity. To be more precise, let us introduce a data generating distribution supported on 𝒳×𝒴, where 𝒳 is in the majority of cases some ℝ^d or a subset thereof. 𝒴 may be a subset of some ℝ^c in the regression case, or a countable or even finite set in the classification case. A single sample (x⃗, y) from is a single data point and its corresponding label, while a dataset 𝒟 is some finite sample from (usually taken to be sampled i.i.d.). Let 𝒟_train and 𝒟_test be two separate finite datasets sampled from . Supervised learning consists of using the training set 𝒟_train to construct a model f:𝒳→𝒴 such that (x⃗, f(x⃗)) is close in distribution, in some sense, to , and practically this is measured using the test set 𝒟_test. No modern summary of supervised learning would be complete without mentioning semi-supervised learning. Within the context of this thesis, the distinction between supervised and semi-supervised learning is not of much importance; the difference lies in how the labels are obtained. Standard supervised learning datasets are often constructed by expending human effort to assign labels to data points. For example, people may be paid to label images with which they are presented as containing some objects of interest. 
In some cases, labels can be obtained systematically without any human labeling, for instance in the example of house prices above, the data already exist in some database (though of course human effort was almost certainly required at some point to generate the data and input them to the database). In semi-supervised learning, labels are derived directly from the data points in some algorithmic manner. A quite natural example is that of time series, where a model may be constructed to predict, say, the temperature in Bristol tomorrow given the observed temperate today and for every day in the previous week. Thus the 𝒳 is 7 dimensional (one dimension for each day), and 𝒴 is one dimensional (the temperature tomorrow). Given a dataset of historical temperatures in Bristol, simply a univariate time series T_i where i indexes the day, one can automatically construct a labelled dataset: x⃗_i = ((T_i-7,…, T_i-1), T_i), for all i for which the indices are valid. Any supervised learning method can then be applied to the resulting labelled data set to produce a model capable of predicting tomorrow's temperature. Again, from the perspective of this thesis, semi-supervised learning is indistinguishable from supervised learning, so we will not discuss it further. §.§ Unsupervised learning Unsupervised learning considers the case where one only has data points x⃗ and no labels y. Returning again to the house prices example, given only a dataset of data points x⃗ containing key parameters about houses, but no labels giving their market value, what can one learn about houses in UK? For example, one might imagine that using only the key parameters contained in x⃗ from a large dataset of houses, one could discover useful structure about broad categories of houses. One common strategy that is particularly relevant in the context of deep learning is embedding. Given only a data set of data points {x⃗_i}_i=1^N, an embedding model is some map f:ℝ^d →ℝ^e where typically e < d. Whatever the meaning or structure of the native data points x⃗_i ∈ℝ^d, the embedding model f will usually be constructed so that the embeddings {f(x⃗_i)}_i=1^N have some useful geometrical meaning. The canonical example of embedding models are word embedding models, for example see <cit.>, where the data sets are just large collections of natural language, and the embedding models aim to represent words in some Euclidean space such that the geometry of Euclidean space has semantic meaning. §.§ Generative modelling Consider a dataset {x⃗_i}_i=1^N sampled from some underlying distribution ℙ. We wish to construct an approximating distribution ℙ̃ from which samples can be easily drawn. In this case, ℙ̃ would be the model. A very elementary example of a generative modeling problem would be heights of people in some population, say x_i = height of person i. In this case, we expect ℙ to be Gaussian and so ℙ̃ can be obtained simply by estimating the mean and variance. We can extend to produce a less trivial example, where the population is a co-educational school. Rather than fitting a single Gaussian to the whole population, it would clearly be sensible to split into boys and girls and by year groups, and fit a Gaussian to each. Sampling a height from the population then consists of sampling boy/girl from a Bernoulli random variable, sampling year group from a Categorical random variable, and then sampling the height from a Gaussian. 
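As a small illustration of this kind of hierarchical generative model, the following toy sketch (our own construction, with made-up parameter values rather than estimates from any data) draws a height by first sampling a sex from a Bernoulli distribution, then a year group from a categorical distribution, and finally the height from a Gaussian conditioned on both.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_height():
    """Toy hierarchical generative model for heights in a school population."""
    sex = rng.binomial(1, 0.5)              # Bernoulli: 0 = girl, 1 = boy
    year = rng.integers(0, 7)               # categorical year group 0..6 (uniform here)
    mean = 120.0 + 6.0 * year + 4.0 * sex   # illustrative group mean in cm
    return rng.normal(mean, 6.0)            # Gaussian within each group

print(np.round([sample_height() for _ in range(5)], 1))
```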
Clearly, even in this still rather modest example, the problem of appropriately estimating all of the Gaussian means and variances and the Bernoulli and categorical probabilities is much harder than estimating a single Gaussian, but the model is more expressive and will likely better represent the data. A much more complicated and modern example is x⃗_i = (pixel_i1, …, pixel_in) for some set of images of faces. Constructing an adequate parametric model is likely to be infeasible in this case, with the overwhelmingly most successful modern approach being generative adversarial networks (GANs) <cit.> (see below).

§.§ Loss surfaces and the training of machine learning models

As some of the above examples have already hinted, constructing a machine learning model has two distinct stages: model design and model training. In the height example above, model design is simply the choice to use a Gaussian distribution and model training is just estimating the mean and variance, e.g. by taking the sample mean and the unbiased estimate of the population variance. Increasing in complexity, let us consider a linear regression model f(x⃗) = Wx⃗ + b⃗, where the matrix W and the vector b⃗ contain the parameters of the model. Here model design is the choice of the form of f, namely as a linear map, while model training consists of choosing W and b⃗ to obtain the f that best fits the data out of all possible models of the same linear form. It happens that linear regression models, like Gaussian models, are among the few model types for which optimal parameters can be computed exactly and in closed form. Let us discuss how more general machine learning models are constructed and trained. We will describe the supervised case, for the sake of definiteness, but much of what we say applies, mutatis mutandis, to unsupervised and generative modeling. Consider again a dataset {x⃗_i, y_i}_i=1^N where x⃗_i∈ℝ^d, y_i∈ℝ^c, for some positive integers d, c. Denote again by ℙ the underlying distribution from which the pairs (x⃗_i, y_i) are sampled; all expectations below are taken with respect to ℙ. We fix some loss function[Also known as an objective function, or simply `loss' or `objective'.]

ℒ: ℝ^c × ℝ^c → ℝ,   (y, ŷ) ↦ ℒ(y, ŷ),

which is some typically simple function chosen to measure the performance of a model. Typically there is some quantity of practical interest that one wishes to optimise a model with respect to, for example classification accuracy or mean-squared-error. ℒ will either be directly the quantity of interest (e.g. mean-squared-error) or will be chosen to correlate with the quantity of interest (e.g. cross-entropy in the case of accuracy). We can now state the central aim of machine learning as an optimisation problem:

argmin_{f∈ℱ} 𝔼 ℒ(y, f(x⃗)),

where ℱ is some class of functions. Of course, in any non-trivial case, one does not have access to ℙ but only the finite sample {x⃗_i, y_i}_i=1^N. The training set is used to optimise the function f, while the test set is reserved for estimating 𝔼ℒ(y, f(x⃗)) so that the quality of the training procedure can be measured. Here are some examples of loss functions:

* L_2: ℒ(y, ŷ) = (y - ŷ)^2.
* L_1: ℒ(y, ŷ) = |y - ŷ|.
* Cross-entropy: ℒ(y, ŷ) = -∑_j y_j log ŷ_j.

The set of functions ℱ can be defined in a variety of ways, but will always have some set of parameters which are tuned to minimise the training loss ∑_i ℒ(y_i, f(x⃗_i)). Here are some examples of ℱ:

* Linear regression: ℱ = {f(x⃗) = Wx⃗ + b⃗ :  W∈ℝ^c× d, b⃗∈ℝ^c}. Parameters are W (regression coefficients) and b⃗ (bias). ℱ is isomorphic to ℝ^dc+c as a vector space.
* Gaussian process regression: ℱ consists of the posteriors given the data {(x⃗_i, y_i)}_i and a prior with mean function m: ℝ^d→ℝ and covariance function k: ℝ^d×ℝ^d→ℝ. m and k may be simple functions possessing a small number of hyper-parameters. ℱ is infinite-dimensional, though it is possible to consider the posterior for a fixed data set and then there are simply the prior hyperparameters to tune, giving again a space isomorphic to some ℝ^K.
* Neural networks, a full discussion of which is given below in Section <ref>.

Henceforth, we shall consider only finite-dimensional ℱ isomorphic to some ℝ^N and we assume a given parametrisation of ℱ with some vector parameter denoted by w⃗. For w⃗∈ℝ^N, let f_w⃗∈ℱ denote the member of ℱ corresponding to the vector of parameters w⃗. Having defined ℱ and ℒ, we obtain the notion of the loss surface {𝔼ℒ(y, f(x⃗))  :  f∈ℱ}. Finding the global minimum, or some sufficiently good local minimum or saddle point, on the loss surface is a matter of tuning a finite number of parameters. As mentioned above, there are some special cases for which the globally optimal parameters can be computed in closed form. For linear regression with L_2 loss, one can straightforwardly compute derivatives and solve ∂ℒ / ∂w⃗ = 0 to find a unique global minimum. In almost all cases, however, no such exact solution will be possible and one must resort to approximate algorithmic approaches. A simple approach which nevertheless turns out to be extremely powerful and the basis of much of modern machine learning is gradient descent. Suppose that one can compute the gradient ∂/∂w⃗ ℒ(y, f_w⃗(x⃗)), where this may be exact or in some cases approximate. Defining a small learning rate η>0, a natural way to slightly improve the parameters is

w⃗_t+1 = w⃗_t - η ∑_i ∂/∂w⃗ ℒ(y_i, f_w⃗_t(x⃗_i)),

where w⃗_t are the current parameter estimates and w⃗_t+1 are the updated parameters. One could imagine repeatedly iterating to update the parameters and obtaining optimal, or at least sufficiently performant, parameters. ∑_i can refer to a sum over the whole training set, some subset, or a single item. In the first case, the described algorithm is precisely gradient descent, whereas in the latter two cases, if the subset is randomly sampled, the algorithm is stochastic gradient descent, since at each iteration a noisy estimate of the gradient is computed.
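As a concrete illustration of the update rule above, here is a minimal sketch (ours, not from the thesis) of gradient descent and stochastic gradient descent for linear regression with the L_2 loss, where the gradient is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data: y = X w_true + noise.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

def grad(w, Xb, yb):
    """Gradient of the mean L_2 loss over a (mini-)batch (Xb, yb)."""
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

def train(w, eta=0.05, steps=2000, batch_size=None):
    """batch_size=None gives full-batch gradient descent; otherwise SGD."""
    for _ in range(steps):
        if batch_size is None:
            Xb, yb = X, y
        else:
            idx = rng.choice(len(y), size=batch_size, replace=False)
            Xb, yb = X[idx], y[idx]
        w = w - eta * grad(w, Xb, yb)       # the update w_{t+1} = w_t - eta * gradient
    return w

w_gd = train(np.zeros(5))                   # gradient descent
w_sgd = train(np.zeros(5), batch_size=16)   # stochastic gradient descent
print(np.linalg.norm(w_gd - w_true), np.linalg.norm(w_sgd - w_true))
```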
§ NEURAL NETWORKS

In this thesis, a neural network shall refer exclusively to a particular type of machine learning model that was originally termed an artificial neural network (ANN) <cit.> to draw a distinction between the machine learning models and the biological systems by which they are inspired. For our purposes, and typically for the purposes of modern machine learning, any historical connection with biological neural networks is of limited value (despite being historically important) and so we adopt the common terminology of merely neural network, with the `artificial' being implicit. The distinction between neural networks and deep neural networks is important, practically and theoretically, and will be made clear in the following discussion. Conceptually, neural networks are non-linear functions from ℝ^d to ℝ^c parameterised by some w⃗∈ℝ^N and formed as the composition of simple affine-linear maps and simple pointwise non-linearities in a layered structure. Being composed of simple, modular components, neural networks provide an elegant and efficient way of effectively arbitrarily scaling the capacity of models. Heuristically, the number of parameters N of a parameterised model determines its capacity to learn patterns in data: the larger N is, the more complicated and diverse the patterns that can be learned. Naturally one then wishes to define models with many parameters and easily scale up the number of parameters to obtain better results on complicated datasets. With traditional statistical models, the parameters typically have some interpretation, being attached to some distribution for example, and so substantially increasing the number of parameters will typically require complete redesign of the model. Even with non-neural machine learning models, it is typically not possible to arbitrarily scale the number of model parameters, as they are typically constrained by the design of the model and/or the data. For example, a linear regression model has no freedom: the number of parameters is determined entirely by the data dimensionality. Neural networks immediately solve this issue, essentially providing a simple recipe for constructing arbitrarily large models of some fixed type. Neural networks are defined by their architecture and their parameters. The architecture is the specification of how the parameters w⃗ are used to define a function f_w⃗. There are many different architectures in the machine learning literature and in practical use <cit.>, however there are a small number of standard types that cover the vast majority of architectures - we shall describe a few of the most significant types below. Finally, we note that it is near-ubiquitous in the machine learning literature to use the term neural network to refer both to specific architectures with arbitrary parameters (which are families of functions) and to specific architectures with specific parameters (which are bona fide functions).

Multi-layer perceptrons (MLPs)

The simplest and oldest type of neural network is the MLP[MLPs are also commonly called fully-connected networks.] <cit.>. Let L>0 be an integer, and let n_0,n_1,…, n_L > 0 be integers, with n_0=d and n_L = c. Define matrices W^(i)∈ℝ^{n_i × n_{i-1}} and vectors b⃗^(i)∈ℝ^{n_i}; these are the weights and biases respectively. Let σ:ℝ→ℝ be a non-linear function[Note that the definition of any neural network works if σ is linear, but this case is not generally interesting (as it results in linear neural networks), so we exclude it by definition.] - the activation function. Theoretically, σ is often assumed to be differentiable, though this assumption is not required by some of our results. In all practical cases, σ will be twice-differentiable except possibly at a finite set of points at which it is merely continuous. We shall use this latter, weaker, condition, with the convention that, whenever expressions involving derivatives of σ are encountered, they implicitly exclude the finite set of points at which the derivative does not exist. This convention mirrors what is seen in practice, where σ'(x_*) = lim_x→ x_*^- σ'(x) for any non-differentiable point x_*. An MLP with L layers is now defined as

f_w⃗(x⃗) = z⃗^(L),   z⃗^(l) = W^(l)σ(z⃗^(l-1)) + b⃗^(l),   l=1, …, L,   z⃗^(0) = x⃗,

where σ(x⃗) for a vector x⃗ is defined as the vector with components σ(x_i), i.e. σ is applied element-wise.
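The following is a minimal sketch (ours) of the forward pass just defined, treating the weights as given. Note that, following the display above, the activation is applied to the previous layer's output, including the input itself; all function names are our own.

```python
import numpy as np

def relu(x):
    """A standard choice of activation function."""
    return np.maximum(x, 0.0)

def mlp_forward(x, weights, biases, sigma=relu):
    """Forward pass z^(l) = W^(l) sigma(z^(l-1)) + b^(l), with z^(0) = x."""
    z = x
    for W, b in zip(weights, biases):
        z = W @ sigma(z) + b
    return z

# Toy usage: d = 4 inputs, one hidden layer of width 8, c = 3 outputs.
rng = np.random.default_rng(0)
sizes = [4, 8, 3]                                            # n_0, n_1, n_2
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(mlp_forward(rng.normal(size=4), weights, biases))
```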
There may optionally be another non-linearity applied to z⃗^(L), which may be different from σ, but we will not need to consider that case here. Note that if L>1, all layers apart from the final layer are called hidden layers. Deep neural networks are usually defined to be networks with at least one hidden layer, though most of the major practical successes of neural networks come from models with tens, or even hundreds, of hidden layers. Machine learning using deep neural networks is commonly referred to as deep learning.

Convolutional neural networks (CNNs)

MLPs are a very general form of neural network that can be applied to data of any structure, given some strategy for converting each data point to a single vector representation. If the data are not naturally represented as vectors, forcing them into such a representation so that an MLP can be used is likely to be sub-optimal. The classical motivating example is that of image data, where each data point is an image and so naturally represented as a rank 3 array of pixels: (width, height, channels). By flattening the pixel arrays into vectors and applying an MLP, we would almost certainly be making the learning problem more difficult than it really is. For example, if the network's only objective is to detect cats in images, a picture of a cat located in the top left of the image should appear the same to the network as a picture of a cat in the bottom right of the image, but an MLP presented with flattened vectors must learn separately to identify cats in all possible locations. CNNs are the standard solution to this kind of problem, particularly for image data <cit.>, but also for other data types such as time series and even natural language text <cit.>. We can write a basic CNN as:

f_w⃗(x⃗) = z⃗^(L),   z⃗^(l) = g(σ(z⃗^(l-1)); W^(l), b⃗^(l)),   l=1, …, L,   z⃗^(0) = x⃗,

where g(·; W, b⃗) is an affine-linear function with respect to its input and also its parameters W, b⃗, and the shapes of the weights and biases are entirely general. This definition is clearly a strict generalisation of the MLP, which is given by g(x⃗; W, b⃗) = Wx⃗ + b⃗. CNNs take g to be a convolution operation. Let W∈ℝ^{(2k+1)× (2k+1) × c_1 × c_2} be a kernel and let x⃗∈ℝ^{h× l × c_1}; then

g(x⃗; W)_{ijm} = ∑_{p = i-k}^{i+k} ∑_{q=j-k}^{j+k} ∑_{r=1}^{c_1} W_{p-i+k+1, q-j+k+1, r, m} x_{pqr},   m = 1, …, c_2.

g can be similarly defined to include biases; care must be taken with the definition at the edges (e.g. when i-k < 0) and the first two indices of W needn't have odd dimension, but for our purposes there is no need to consider these details. Here 2k+1 is the filter size, c_1 is the number of input channels and c_2 the number of output channels. c_1, c_2 are the analogue of the input and output size of each layer of an MLP. Typically, in the first layer of a CNN, k is much less than h and l, so that the number of parameters in W is much less than the number of parameters in the corresponding weight matrix of an MLP: (2k+1)^2 c_1c_2 compared to hlc_1c_2. Note also that the convolutional structure of (<ref>) reuses entries of W in multiple locations on the input x⃗. As well as reducing the number of parameters compared to equivalent MLPs, CNNs also restrict to functions which are translation invariant in the desired sense motivated by the above example of cat detection in images. Finally, note that CNNs are a special case of MLPs; the operation defined in (<ref>) is affine-linear and so for any index flattening transformation ϕ(x⃗) there exists a matrix Ŵ such that g(x⃗; W) = Ŵϕ(x⃗).
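To make the convolution operation above concrete, here is a small sketch (our own, restricted to 'valid' output positions so that edge handling does not arise) of a single convolution mapping an h × l × c_1 input to a feature map with c_2 output channels.

```python
import numpy as np

def conv2d_valid(x, W):
    """Convolve x (h, l, c1) with kernel W (2k+1, 2k+1, c1, c2), 'valid' positions only.

    out[i, j, m] = sum_{p, q, r} W[p, q, r, m] * x[i + p, j + q, r]
    """
    h, l, c1 = x.shape
    f, _, _, c2 = W.shape                      # f = 2k + 1
    out = np.zeros((h - f + 1, l - f + 1, c2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + f, j:j + f, :]     # (f, f, c1) receptive field
            out[i, j, :] = np.tensordot(patch, W, axes=([0, 1, 2], [0, 1, 2]))
    return out

# Toy usage: an 8 x 8 input with 3 channels, a 3 x 3 kernel, 4 output channels.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 3))
W = rng.normal(size=(3, 3, 3, 4))
print(conv2d_valid(x, W).shape)                # (6, 6, 4)
```

Note that the same kernel entries are reused at every spatial position, which is the parameter sharing and translation invariance discussed above.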
Nevertheless, CNNs are preferred to MLPs on any data for which the convolution operation is appropriate, as they provide a beneficial inductive bias, essentially encouraging the optimisation procedure (recall (<ref>)) to find superior local optima than would be found for an MLP. Sequential modelling architectures CNNs are well-adapted to image data and, loosely speaking, data which can reasonably be represented as images (e.g. spectrograms <cit.>). CNNs have also been successfully applied to natural language data <cit.>, however there are a few other architecture types designed for natural language data and other sequential data. In particular, recurrent neural networks (RNNs) <cit.> and later variants such as long short-term memory (LSTM) <cit.> networks and gated recurrent units (GRUs) <cit.> have architectures designed to respect the time-ordering of the data (e.g. the order of words in a sentence) while possessing the appropriate time re-parametrisation invariance. More recently, transformer models <cit.> have been proposed and enjoyed considerably practical success over RNN and CNN architectures. Architecture combinations The different architecture types outlined above need not be used in isolation, but can be combined. For instance, it is standard practice to construct architectures as a concatenation of a CNN and an MLP, with the MLP acting on the flattened output of the CNN[Historically, such concatenations of CNNs and MLPs were the standard approach, so are universally referred to simply as CNNs and networks with only convolutional layers are often called fully-convolutional networks.]. RNNs and transformer architectures are usually built as extensions of MLPs, though there are also convolutional examples (see e.g. convolutional RNNs). Generative adversarial networks (GANs) MLPs, CNNs and the various sequential modelling architectures are the most common neural network architecture types in practical use and, between them, provide the basis of the vast majority of applications of deep learning to supervised, unsupervised and semi-supervised learning problems. GANs <cit.> are the canonical basic approach to generative modelling using neural networks. GANs are composed of two neural networks: generator (G) and discriminator (D). G is a map ℝ^m→ℝ^d and D is a map ℝ^d→ℝ. G's purpose is to generate synthetic data samples by transforming random input noise, while D's is to distinguish between real data samples and those generated by G. Given some probability distribution ℙ_data on some ℝ^d, GANs have the following minimax training objective min_w⃗_Gmax_w⃗_D{𝔼_x⃗∼ℙ_datalog D(x⃗) + 𝔼_z⃗∼𝒩(0, σ_z^2)log(1 - D(G(z⃗)))}, where w⃗_D, w⃗_G are the parameters of the discriminator and generator respectively. Given a well optimised generator model, one can sample approximately from the data distribution by sampling latent vectors in the space ℝ^l and passing them through the generator. Training neural networks By defining neural network architecture suitable for some data and by varying the number of layers, or the size of the layers (i.e. the size dimensions of the weights), one can specify very large families of parameterised non-linear functions with essentially arbitrary expressivity and complexity. Indeed, there are many results beginning with shallow networks <cit.> that establish neural networks as universal function approximators within certain classes of functions and considerable amounts of more recent work that establish the representational power of deep networks <cit.>. 
Therefore, given any data, any learning task defined on that data and any theoretically possible level of performance, one can be quite sure of constructing a neural network architecture, and hence a family of parametrised functions, such that there exist some parameter values giving the specified level of performance at the task on the data. If neural networks are to be practically useful, however, there must exist some feasible algorithm to find such parameter values. Feasible here has at least two meanings:

* computationally feasible, i.e. the algorithm must terminate in a reasonable time using a reasonable amount of computational resource;
* the algorithm must be general-purpose, i.e. one requires algorithms that apply to a wide variety of datasets and architectures - it would be infeasible if a bespoke algorithm were required for every (dataset, architecture) combination.

We have already seen how the layered structure of neural networks, building complicated functions from the composition of simple primitives, makes feasible the specification of models with arbitrary capacity and complexity; the layered structure is also essential for feasible training. In particular, despite their potentially enormous size and considerable complexity, most neural networks are efficient to evaluate, as the vast majority of the computational work in their evaluation comprises linear algebraic operations which have been well-optimised for many computational architectures <cit.>. Moreover, the layered structure makes possible the efficient and automatic computation of derivatives of neural networks with respect to their parameters. Indeed, consider the form of an MLP in (<ref>). Differentiating f_w⃗(x⃗) with respect to any of the W^(l) is a mathematically simple matter: one simply applies the chain rule. Let us define y⃗^(l) = σ(z⃗^(l)), so z⃗^(l) = W^(l)y⃗^(l-1) + b⃗^(l). Then

∂z⃗^(l)/∂y⃗^(l-1) = W^(l),   ∂y⃗^(l)/∂z⃗^(l) = σ'(z⃗^(l)),   ∂z⃗^(l)/∂ W^(l) = y⃗^(l-1),

so observe that, if σ' is known in closed-form and an implementation provided, a computer can implement the chain rule to automatically compute exact derivatives of f_w⃗. If derivatives of ℒ are also implemented, then the full derivatives ∂_w⃗ℒ(y, f_w⃗(x⃗)) can be computed for any x⃗, y and at any w⃗. Moreover, all these gradient computations also benefit from the optimised implementations of linear algebraic primitives. In the machine learning literature, computing f_w⃗(x⃗) is called a forward pass and computing ∂_w⃗ f_w⃗(x⃗) is called a backward pass. Since neural networks allow for efficient automatic computation of loss gradients ∂_w⃗ℒ(y, f_w⃗(x⃗)), the simplest algorithm one could imagine to optimise the parameters w⃗ for a dataset is stochastic gradient descent (<ref>). So far it is clear that using SGD in combination with the neural network backward pass represents a feasible optimisation algorithm for general neural networks, and it is quite feasible to perform hundreds of thousands of steps of SGD in an acceptable time-frame, though obviously this varies with model and dataset size, as do the requirements on the computational hardware. However, this discussion does not address the quality of the optimisation. That is to say, we have described a procedure for neural network optimisation that is general-purpose, feasible to implement and apply to any architecture and dataset, and simple computational experiments would be sufficient to determine how many SGD steps can be performed per second for a given model and given hardware.
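As an illustration of the forward and backward passes described above, the following sketch (ours, for a one-hidden-layer MLP with the L_2 loss) computes the loss gradients with respect to the weights by hand via the chain rule; in practice this bookkeeping is performed by automatic differentiation libraries.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(x):             # activation function
    return np.tanh(x)

def sigma_prime(x):       # its derivative, needed in the backward pass
    return 1.0 - np.tanh(x) ** 2

# A one-hidden-layer MLP in the convention above: z1 = W1 sigma(x) + b1, z2 = W2 sigma(z1) + b2.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
x, y = rng.normal(size=4), rng.normal(size=3)

# Forward pass, keeping intermediate values for the backward pass.
y0 = sigma(x)
z1 = W1 @ y0 + b1
y1 = sigma(z1)
z2 = W2 @ y1 + b2
loss = np.sum((z2 - y) ** 2)                   # L_2 loss

# Backward pass: apply the chain rule layer by layer.
dL_dz2 = 2.0 * (z2 - y)                        # dL/dz2
dL_dW2 = np.outer(dL_dz2, y1)                  # dz2/dW2 involves y1
dL_dz1 = (W2.T @ dL_dz2) * sigma_prime(z1)     # back through W2 and the activation
dL_dW1 = np.outer(dL_dz1, y0)                  # dz1/dW1 involves y0 = sigma(x)

print(loss, dL_dW2.shape, dL_dW1.shape)
```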
For this procedure to be of value, however, it must, with sufficient probability, find parameter values w⃗ that give sufficiently good performance of the neural network on the defined task. While SGD is an intuitive and appealing algorithm, the cases for which it can be proven to find, say, global minima are narrow <cit.> and certainly cannot be expected to generically apply to deep neural networks. Indeed, a priori, for large neural networks with many parameters, one should expect there to be a great many saddle points and local optima of the loss surface around which SGD could get stuck. Algorithmic innovations can somewhat mitigate the problem of saddle points, such as endowing the gradient descent trajectory with momentum <cit.> or adjusting the learning rates in different directions on the loss surface <cit.>, and these techniques can greatly improve practical performance of neural networks <cit.>. In very high dimensions the intuition of such techniques does not necessarily apply and if there are a great many local minima, then we should expect SGD to converge to, at best, some local minimum determined by the random initialisation of w⃗. In general, there is no reason to expect that these local minima will provide network performance anywhere near the global optimum, or even useful performance at all. In bold defiance of these arguments, neural networks continue to have substantial success when applied to an increasingly long list of machine learning problems: computer vision, speech processing, natural language processing, reinforcement learning, media generation etc. We refer the interested reader to the excellent website <cit.> where they will find links to published literature detailing the success of neural networks in all fields of machine learning. Networks are trained using stochastic gradient-based optimisation methods on very high-dimensional, strongly non-convex surfaces for which no formal convergence or performance guarantees exist and yet excellent practical performance is routinely obtained with little concern for whether the optimisation problem has been solved. Extremely over-parametrised models can be trained with large numbers of passes through the data without overfitting. Models with equivalent training performance can have radically different generalisation performance depending on complicated interactions between design choices such as learning rate size (and scheduling) and weight-decay <cit.>. § STRUCTURE OF NEURAL NETWORK LOSS SURFACES One strand of theoretical work focuses on studying properties of the loss surfaces of large neural networks and the behaviour of gradient descent algorithms on those surfaces. Much of the content of this thesis sits within this line of research. <cit.> presented experimental results pointing to a similarity between the loss surfaces of multi-layer networks and spherical multi-spin glasses <cit.>. <cit.> built on this work by presenting modeling assumptions under which the training loss of multi-layer perceptron neural networks with activations can be shown to be equivalent to a spherical multi-spin glass (with network weights corresponding to spin states). The authors then applied spin glass results of <cit.> to obtain precise asymptotic results about the complexity[Complexity will be given a formal definition in Chapter <ref>.] of the training loss surfaces. 
Crudely, the implication of this work is that the unreasonable efficacy of gradient descent on the high-dimensional and strongly non-convex loss surfaces of neural network models can in part be explained by favourable properties of their geometry that emerge in high dimensions. Relationships between simpler neural networks and spin glasses have been known since <cit.> and, more generally, connections between spin glass theory and computer science were studied in <cit.> in the context of signal processing (image reconstruction, error correcting codes). More recent work has dispensed with deriving explicit links between neural networks and spin glasses, instead taking spin-glass-like objects as a tractable playground for gradient descent in complex high-dimensional environments. In particular, <cit.> compare empirically the dynamics of state-of-the-art deep neural networks and glassy systems, while <cit.> study random tensor models containing some `spike' to represent other features of machine learning problems (some `true signal' to be recovered) and perform explicit complexity calculations as well as gradient descent dynamical calculations revealing phase transitions and landscape trivialisation. <cit.> simplify the model in favour of explicitly retaining the activation function non-linearity and performing complexity calculations à la <cit.> for a single neuron. <cit.> study the loss surface of random single hidden layer neural networks by applying the generalised Gauss-Newton matrix decomposition to their Hessians and modelling the two components as freely-additive random matrices from certain ensembles. <cit.> consider the loss surfaces of single layer networks by computing the spectrum of the Gram matrix of network outputs. These works demonstrate the value of studying simplified, randomised neural networks for understanding networks used in practice. The situation at present is far from clear. The spin glass correspondence and consequent implications for gradient descent based learning from <cit.> are tantalising, however there are significant challenges. Even if the mean asymptotic properties of deep neural network loss surfaces were very well described by corresponding multi-spin glass models, the question would still remain whether these properties are in fact relevant to gradient-based algorithms running for sub-exponential time, with some evidence that the answer is negative <cit.>. Another challenge comes from recent experimental studies of deep neural network Hessians <cit.> which reveal spectra with several large outliers and considerable rank degeneracy, deviating significantly from the Gaussian Orthogonal Ensemble semi-circle law implied by a spin glass model. Bearing all this in mind, there is a long and illustrious history in the physics community of fruitfully studying quite unrealistic simplified models of complicated physical systems and still obtaining valuable insights into aspects of the true systems. Several of the assumptions used in <cit.> to obtain a precise spherical multi-spin glass expression are undesirable, as outlined clearly in <cit.>. Assuming i.i.d. Gaussian data and random labels is clearly going to greatly simplify the problem, however it is also the case that many of the properties of deep neural networks during training are not specific to any particular dataset, and there may well be phases of training to which such assumptions are more applicable than one might first expect.
Gaussian and independence assumptions are commonplace when one is seeking to analyse theoretically very complicated systems, so while they are strong, they are not unusual and it is not unreasonable to expect some important characteristics of real networks to persist. By contrast, the restriction of the arguments in <cit.> to exclusively ReLU activations seems innocuous, but we argue quite the opposite is true. There are deep mathematical reasons why Gaussian and independence assumptions are required to make progress in the derivation in <cit.>, while the restriction to ReLU activations appears to be an obscure peculiarity of the calculations. The ReLU is certainly a very common choice in practice, but it is by no means the only valid choice, nor always the best; see e.g. leaky ReLU in state-of-the-art image generation <cit.> and GELU in state-of-the-art language models <cit.>. It would not be at all surprising if a spin glass correspondence along the lines of <cit.> were impossible without Gaussian and/or independence assumptions on the data, however it would be extremely concerning if such a correspondence specifically required ReLU activations. If the conclusions drawn in <cit.> about deep neural networks from this correspondence are at all relevant in practice, then they must apply equally to all activation functions used in practice. On the other hand, if the conclusions were precisely the same for all reasonable activation functions, it would reveal a limitation of the multi-spin glass correspondence, since activation function choice can have significant implications for training neural networks in practice. § CONTRIBUTIONS OF THIS THESIS In Figure <ref> below we give a diagram that outlines the contributions of this thesis and their position within the literature. Rounded purple boxes denote antecedents and influences of our contributions from the literature. The references given in these boxes are not intended to be exhaustive but simply indicators. Rectangular orange boxes denote our contributions, where we display both the published papers and the corresponding Chapter in this thesis. We expand further on the context of this thesis and its contributions in the following subsections. Chapters <ref> and <ref> form the first major contribution and are discussed in section <ref>. Chapters <ref> and <ref> form a distinct major contribution but are nevertheless related to the earlier chapters, as indicated in the diagram. Chapters <ref> and <ref> are distinct contributions that are certainly connected to the major parts of the thesis, but are more peripheral in their contribution; they are discussed in sections <ref> and <ref> respectively. §.§ Generalisation of spin glass models for neural network loss surfaces The first major contribution of this thesis is a significant generalisation of the understanding of spin glass models for neural network loss surfaces. Beginning with Chapter <ref>, we return to the modeling assumptions and methodology of <cit.> and extend their results to multi-layer perceptron models with any activation function. We demonstrate that the general activation function has the effect of modifying the exact multi-spin glass by the addition of a new deterministic term in the Hamiltonian. We then extend the results of <cit.> to this new high-dimensional random function. At the level of the logarithmic asymptotic complexity of the loss surface, we obtain precisely the same results as <cit.>, however the presence of a general activation function is felt in the sharp asymptotic complexity.
On the one hand, our results strengthen the case for <cit.> by showing that their derivation is not just an accident in the case of ReLU networks. On the other hand, we have shown that this line of reasoning about neural networks is insensitive to an important design feature of real networks that can have significant impacts on training in practice, namely the choice of activation function. The main calculation for our result uses a Kac-Rice formula to compute the landscape complexity of the modified multi-spin glass model we encounter. Kac-Rice formulae have a long history in the Physics literature <cit.> and, more specifically, have been used to perform complexity calculations <cit.>. Complexity calculations in spiked matrix and tensor models in <cit.> have addressed spin glass objects with specific rank-1 deterministic additive terms, however those calculations do not extend to the case encountered here since those deterministic terms create a single distinguished direction (parallel to the gradient of that term everywhere on the sphere) which is critical to their analysis; our extra deterministic term creates no such single distinguished direction. We chart a different course using supersymmetric methods in Random Matrix Theory. Supersymmetric methods have been used before in spin glass models and complexity calculations <cit.>, often using the replica trick. We show how the full logarithmic complexity results of <cit.> can be obtained using a supersymmetric approach quite different to the approach used in that and similar works. By moving to this approach, we can make progress despite the presence of the extra deterministic term in the multi-spin glass. Our approach to the supersymmetric calculations most closely follows <cit.>, but several steps require approximations due to the extra term. Some of our intermediate results in the supersymmetric and RMT calculations are stronger than required here, but may well be useful in future calculations, e.g. spiked spherical multi-spin glass models with any fixed number of spikes. Finally, our approach computes the total complexity summed over critical points of any index and then uses large deviations principles to obtain the complexity with specified index. This is the reverse of the order taken in <cit.> and may be more widely useful when working with perturbations of matrices with known large deviations principles. Motivated by our results in Chapter <ref>, we ask if it is possible to further extend the spin glass modeling approach to capture yet further peculiarities and details of modern neural networks. We seek, in particular, a model that is capable of revealing the influence of architectural details at leading order in the annealed complexity, unlike the relatively weak effect of the activation function seen in Chapter <ref>. Modern deep learning contains a very large variety of different design choices in network architecture, such as convolutional networks for image and text data (among others) <cit.>, recurrent networks for sequence data <cit.> and self-attention transformer networks for natural language <cit.>. Given the ubiquity of convolutional networks, one might seek to study those, presumably requiring consideration of local correlations in data. One could imagine some study of architectural quirks such as residual connections <cit.>, and batch-norm has been considered to some extent by <cit.>. In Chapter <ref>, we propose a novel model for generative adversarial networks (GANs) <cit.> as two interacting spherical spin glasses.
GANs have been the focus of intense research and development in recent years, with a large number of variants being proposed <cit.> and rapid progress particularly in the field of image generation. From the perspective of optimisation, GANs have much in common with other deep neural networks, being complicated high-dimensional functions optimised using local gradient-based methods such as stochastic gradient descent and variants. On the other hand, the adversarial training objective of GANs, with two deep networks competing, is clearly an important distinguishing feature, and GANs are known to be more challenging to train than single deep networks. Our objective is to capture the essential adversarial aspect of GANs in a tractable model of high-dimensional random complexity which, though being a significant simplification, has established connections to neural networks and high dimensional statistics. Our model is inspired by <cit.> with spherical multi-spin glasses being used in place of deep neural networks. We thus provide a complicated, random, high-dimensional model with the essential feature of GANs clearly reflected in its construction. By employing standard Kac-Rice complexity calculations <cit.> we are able to reduce the loss landscape complexity calculation to a random matrix theoretic calculation. We then employ various Random Matrix Theory techniques as in <cit.> to obtain rigorous, explicit leading order asymptotic results. Our calculations rely on the supersymmetric method in Random Matrix Theory, in particular the approach to calculating limiting spectral densities follows <cit.> and the calculation also follows <cit.> in important ways. The greater complexity of the random matrix spectra encountered presents some challenges over previous such calculations, which we overcome with a combination of analytical and numerical approaches. Using our complexity results, we are able to draw qualitative implications about GAN loss surfaces analogous to those of <cit.> and also investigate the effect of a few key design parameters included in the GAN. We compare the effect of these parameters on our spin glass model and also on the results of experiments training real GANs. Our calculations include some novel details, in particular, we use precise sub-leading terms for a limiting spectral density obtained from supersymmetric methods to prove a required concentration result to justify the use of the Coulomb gas approximation. We note that our complexity results could also be obtained in principle using the methods developed in <cit.>, however our work was completed several months before this pre-print appeared. Our approach for computing the limiting spectral density may nevertheless be the simplest and would be used as input to the results of <cit.>. The role that statistical physics models such as spherical multi-spin glasses are to ultimately play in the theory of deep learning is not yet clear, with arguments both for and against their usefulness and applicability. Before our contributions, the major result was <cit.> which, though influential, has received considerable criticism and could have reasonably been considered a parochial curiosity, rather than a profound insight into neural network loss surfaces. Our work in Chapter <ref> considerably weakens the case against <cit.>, and our work in Chapter <ref> clearly demonstrates the potential of spin glass models (and statistical physics based models in general) to capture and explain phenomena in deep neural networks.
Indeed, to the best of our knowledge, Chapter <ref> provides the first attempt to model an important architectural feature of modern deep neural networks within the framework of spin glass models. Our analysis reveals potential explanations for observed properties of GANs and demonstrates that it may be possible to inform practical hyperparameter choices using models such as ours. Much of the advancement in practical deep learning has come from innovation in network architecture, so if deep learning theory based on simplified physics models like spin-glasses is to keep pace with practical advances in the field, then it will be necessary to account for architectural details within such models; our work is a first step in that direction. §.§ Discovery of RMT universality in loss surfaces and consequences for loss surface models The other major contribution of this thesis is the instigation of the study of the role of random matrix theory statistics in deep learning at the local (i.e. microscopic) scale and the building of a strong case that the results which characterise the first half of the thesis, and other RMT-based results from the literature besides, can be expected to be much more general in applicability than their very restrictive modeling assumptions would suggest. An important and fundamental problem with Chapters <ref> and <ref> and related work in the literature is that typically the average spectral density of the Hessian of neural networks does not match that of the associated canonical random matrix ensembles that result from the modeling assumptions and are crucial in the technicalities of the calculations. This is illustrated in Figure <ref>. Put simply, one does not observe the Wigner semicircle or Marchenko-Pastur eigenvalue distributions, implied by the Gaussian Orthogonal or Wishart Ensembles. As shown in <cit.> the spectral density of neural network Hessians contains outliers and a large number of near-zero eigenvalues, features not seen in canonical random matrix ensembles. Furthermore, even allowing for this, as shown in <cit.> by specifically embedding outliers as a low rank perturbation to a random matrix, the remaining bulk spectral density still does not match the Wigner semicircle or Marchenko-Pastur distributions <cit.>, bringing into question the validity of the underlying modelling. The fact that the experimental results differ markedly from the theoretical predictions has called into question the validity of neural network analyses based on canonical random matrix ensembles. Moreover, the compelling results of works such as <cit.> are obtained using very particular properties of the canonical ensembles, such as large deviation principles, as pointed out in <cit.>. The extent to which such results can be generalised is an open question. Hence, further work is required to better understand to what extent random matrix theory can be used to analyse the loss surfaces of neural networks. In Chapter <ref>, we show that the local spectral statistics (i.e. those measuring correlations on the scale of the mean eigenvalue spacing) of neural network Hessians are well modelled by those of GOE random matrices, even when the mean spectral density is different from the semicircle law. We display these results experimentally on MNIST-trained multi-layer perceptrons and on the final layer of a ResNet-34 on CIFAR-10. The objective of Chapter <ref> is to motivate a new use for Random Matrix Theory in the study of the theory of deep neural networks.
In the context of more established applications of random matrix theory, this conclusion may not be so surprising – it has often been observed that the local spectral statistics are universal while the mean density is not – however, in the context of machine learning this important point has not previously been made, nor its consequences explored. In Chapter <ref> we illustrate it in that setting, through numerical experiments, and start to examine some of its implications. Having established experimentally the presence of universal local random matrix statistics in real-world neural networks (though admittedly very small ones by modern standards), we proceed in Chapter <ref> to demonstrate how local random matrix statistics can be used as modeling assumptions for models of deep neural network Hessians to obtain surprisingly strong generalisations of prior spectral results. Works such as <cit.> and our own contributions in Chapters <ref> and <ref> show how detailed calculations can be completed in and beyond the standard spin glass case, however these results all depend on important properties of the GOE, to which the Hessians in those cases are closely related. In a recent work, <cit.> showed how valuable practical insights about DNN optimisation can be obtained by considering the outliers in the spectrum of the loss surface Hessian. Once again, this work relies on special properties from random matrix theory, indeed an expression for the outliers follows from a known phase transition result whereby the largest eigenvalue “pops out” of the bulk. This result has been proven only for rotationally invariant matrix ensembles in <cit.>, itself a generalisation of the celebrated BBP phase transition <cit.>, though it was conjectured in <cit.> to be more general (a point which we clarify in Chapter <ref>, section <ref>). In addition, the explicit form of a Wigner semi-circle density was used to obtain the concrete outlier expression used in practice. Microscopic random matrix universality is known to be far more robust than universality on the macroscopic scale. Indeed, such results are well established for invariant ensembles and can be proved using Riemann-Hilbert methods <cit.>. For more general random matrices, microscopic universality has been proved by quite different methods in a series of works over the last decade or so, of which a good review is <cit.>. Crucial in these results is the notion of a local law for random matrices. The technical statement of local laws is given later in section <ref>, but roughly they assert that the spectrum of a random matrix is, with very high probability, close to the deterministic spectrum defined by its limiting spectral density (e.g. the semicircle law for Wigner matrices). Techniques vary by ensemble, but generally a local law for a random matrix ensemble provides the control required to demonstrate that certain matrix statistics are essentially invariant under the evolution of the Dyson Brownian motion. In the case of real symmetric matrices, the Dyson Brownian motion converges in finite time to the GOE, hence the statistics preserved under the Dyson Brownian motion must match the GOE. The n-point correlation functions of eigenvalues are one such preserved quantity, from which follows, amongst other properties, that the Wigner surmise is a good approximation to the adjacent spacings distribution.
At the macroscopic scale, there are results relevant to neural networks, for example <cit.> consider random neural networks with Gaussian weights and establish results that are generalised to arbitrary distributions with optimal conditions, so demonstrating universality. On the microscopic scale, our work in Chapter <ref> provided the first evidence of universal random matrix theory statistics in neural networks and was subsequently extended to the weight matrices of neural networks in <cit.>, but no prior work has considered the implications of these statistics, that being the central contribution of Chapter <ref>. Our main mathematical result is a significant generalisation of the Hessian spectral outlier result recently presented by <cit.>. This generalisation removes any need for GOE or Wigner forms of the Hessian and instead leverages much more universal properties of the eigenvectors and eigenvalues of random matrices which we argue are quite likely to hold for real networks. Our results make concrete predictions about the outliers of DNN Hessians which we compare with experiments on several real-world DNNs. These experiments provide indirect evidence of the presence of universal random matrix statistics in the Hessians of large DNNs, which is noteworthy as certainly these DNNs are far too large to permit exact eigendecomposition of their Hessians as done in Chapter <ref>. Along a similar line, we show how local random matrix laws in DNNs can dramatically simplify the dynamics of certain gradient descent optimisation algorithms and may be in part responsible for their success. Finally, we highlight another aspect of random matrix universality relevant to DNN loss surfaces. Recent work <cit.> has shown that the so-called `self averaging' property of random matrix determinants is very much more universal than previously thought. The self-averaging of random matrix determinants has been used in the spin glass literature both rigorously and non-rigorously (e.g. <cit.> inter alia) and is the key property that produces the exponentially large/small number of local optima repeatedly observed. We argue that insights into the geometry of DNN loss surfaces can be conjectured from quite general assumptions about the Hessian and gradient noise and from the general self-averaging effect of random matrix determinants. §.§ Correlated noise models for neural network loss surfaces Spin glass and statistical physics based models provide an important perspective on the geometric and statistical properties of neural network loss surfaces, as is extensively explored in Chapters <ref> and <ref>, alongside prior work in the literature. Part of the appeal of these approaches is their ontological separation from classical approaches to analysis and methods of proof in statistical learning theory. Having defined a model and settled on stochastic gradient descent (or a variant) as the optimisation approach, a natural question is: does stochastic gradient descent converge under some assumptions and what, if any, guarantees are there on the parameters to which it converges? Questions like this are well-studied in the statistical learning and optimisation literature <cit.>, but in the context of neural networks this work is of limited applicability as the known results all rely on properties that are not possessed by neural networks, such as convexity of the loss surface (as a function of the network weights).
Some recent work has established convergence results using weaker assumptions like the PL inequality <cit.> and <cit.> has developed a theory of neural network training dynamics based on perturbation analysis. In aggregate, there are many separate results giving guarantees on SGD under a variety of assumptions, some of which are plausible for neural networks, but what exists is far short of a complete theory. The results based on spin glass models, as seen in Chapters <ref> and <ref>, are quite different in nature from these SGD convergence results, providing insight into the overall structure and complexity of the loss surfaces on which SGD operates. These approaches are able to capture much more of the genuine complexity of the loss surfaces of real neural networks than the classical SGD convergence analyses, however the results they provide are somewhat like descriptive sketches of the loss surfaces, unlike the precise convergence guarantees of the classical analyses. In Chapter <ref> we present work that bridges the gap between these two parallel streams of thought. Concretely, we obtain several variations on SGD convergence results, particularly in the case of iterate averaging. Iterate averaging is a well-known technique in stochastic optimisation, where the parameter iterates w⃗_k are simply averaged to produce the new sequence ŵ⃗_t = 1/t∑_k=1^t w⃗_k. Intuitively, this simple averaging should have the effect of reducing the variance in the parameter estimates, and indeed this very fact is critical in some convergence proofs, such as that for Adam <cit.> given in <cit.>. That being said, to the best of our knowledge there has been no explicit theoretical work analysing the generalisation benefit of iterate averaging. Whilst <cit.> propose that iterate averaging leads to “flatter minima which generalise better”, flatness metrics are known to have limitations as a proxy for generalisation <cit.>. <cit.> show that the iterate average convergence rate for both SGD and second-order methods is identical, but argue that second-order methods have an optimal pre-asymptotic convergence rate on a quadratic loss surface. Here, pre-asymptotic means before taking the number of iterations t →∞ and quadratic means that the Hessian is constant at all points in weight-space. The analysis does not extend to generalisation and no connection is made to adaptive gradient methods, nor to the importance of the high parameter-space dimensionality of the problem, both of which are addressed by our approach in Chapter <ref>. Amendments to improve the generalisation of adaptive methods include switching between Adam and SGD <cit.>, decoupled weight decay <cit.> and limiting the extent of adaptivity <cit.>. We incorporate these insights into our algorithms but significantly outperform them experimentally. The closest algorithmic contribution to our work is Lookahead <cit.>, which combines adaptive methods with an exponentially moving average scheme. The key contribution of Chapter <ref> is to introduce spin-glass-like statistical models for neural network loss into the realm of SGD convergence results. In particular, we make use of a general stationary Gaussian process model for the noise of the loss surface which is a generalisation of the spin glass models used in prior work and our own, and brings two important benefits.
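As a concrete illustration of iterate averaging, the following is a small sketch (an assumption-laden toy: an isotropic-noise quadratic loss, not the model analysed in Chapter <ref>) showing the running average ŵ⃗_t maintained alongside the raw SGD iterate.

```python
import numpy as np

# Toy SGD with iterate averaging on a noisy quadratic loss L(w) = 0.5 * w^T A w,
# with additive Gaussian gradient noise standing in for mini-batch noise.
rng = np.random.default_rng(0)
dim, steps, alpha, noise_scale = 50, 2000, 0.05, 0.5
A = np.diag(np.linspace(0.1, 1.0, dim))                  # toy Hessian

w = rng.normal(size=dim)                                  # SGD iterate
w_avg = np.zeros(dim)                                     # running iterate average
for t in range(1, steps + 1):
    grad = A @ w + noise_scale * rng.normal(size=dim)     # noisy gradient
    w = w - alpha * grad                                  # SGD update
    w_avg += (w - w_avg) / t                              # running form of (1/t) sum_k w_k

# The averaged iterate typically sits much closer to the minimiser at the origin.
print("||w_final|| =", np.linalg.norm(w))
print("||w_avg||   =", np.linalg.norm(w_avg))
```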
Firstly, these models are intrinsically amenable to asymptotic analysis in the regime of very large parameter dimensionality, indeed this kind of asymptotic analysis is our focus in Chapters <ref> and <ref>. As there, this is an important feature of any analysis of neural networks, as virtually all successful modern applications use large networks with very many parameters. Secondly, these loss surface models are inherently models of statistical dependence between the noise on loss surface gradient iterates, a feature which, again, is central to the calculations in Chapters <ref> and <ref>. In the context of SGD convergence results and iterate averaging, statistical dependence between gradient iterates is essential for a realistic analysis, as the weights, and hence gradients, at each iteration of stochastic gradient descent are clearly not independent. Beginning with a simple model of independent, isotropic Gaussian gradient noise, we first establish a basic result for SGD with iterate averaging in the high-dimensional regime, exhibiting explicitly the variance reduction effect of iterate averaging compared to standard SGD. We then replace the inadequate and naïve assumption of independent gradient noise with a Gaussian process model for the loss noise, from which we derive a dependent model for the gradient noise. In this setting, we prove a generalised convergence result for SGD and SGD with iterate averaging, again demonstrating the variance reducing effect of iterate averaging but also providing insights into the effect of learning rate which derives directly from the dependence between gradient iterates. We additionally establish a sequence of results for variations on the basic Gaussian process noise model and also for certain adaptive gradient descent algorithms. Overall, our work provides an entirely novel approach to the modeling and analysis of SGD algorithms which incorporates important properties of modern neural networks and creates connections between two previously separate approaches in the study of their training. Our novel perspective on the issue of SGD convergence and iterate averaging provides insight into the interaction between iterate averaging, adaptive gradient descent methods and learning rates, which helps to explain why most experimental results with iterate averaging may have historically been poor. §.§ Practical application of random matrix loss surface models for hyperparameter tuning A unifying feature of all work in this thesis is the study of neural networks via models of their loss surfaces. Our work shows how such models can be developed and analysed to shed light on important features such as the configuration of local optima and the spectral outliers of loss surface Hessians, both of which are relevant to gradient-based optimisation of neural networks' parameters. As important as these studies are for advancing the relatively primitive theoretical understanding of what has become a ubiquitous and indispensable approach to machine learning, the immediate practical applications are quite limited. The spin-glass models of Chapters <ref> and <ref> are largely without any direct practical application, being too crude a statistical model for practical neural networks. We demonstrate in Chapters <ref> and <ref> that universal local random matrix theory statistics can be used to build much more realistic models of neural network loss surfaces and yield detailed predictions about spectral outliers of their Hessians.
It is beyond doubt that such results about spectral outliers are of practical use, as clearly demonstrated in <cit.>, where the results are used to derive practical and effective scaling rules for learning rates. Our results considerably expand and substantiate those of <cit.>, but it has not been demonstrated that these much more precise results add anything practically over the cruder and less rigorous approach of their antecedents. Chapter <ref> introduces an entirely new application of random matrix theory techniques to neural network loss surfaces, producing immediate practical benefit to the training of real-world networks. The founding idea of Chapter <ref> is a simple observation about a very common numerical `hack' used in several standard variants of stochastic gradient descent. Let L(w⃗) be the loss surface of some neural network with parameters w⃗∈ℝ^N and let H = ∇^2 L be its Hessian. Stochastic gradient descent updates weights according to the rule w⃗_k+1 = w⃗_k - α_k∇ L where w⃗_k are the network parameters after k iterations of SGD and at each iteration a different batch is used. α_k>0 is the learning rate which, in the simplest setting for SGD, does not depend on k, but in general can be varied throughout training to achieve superior optimisation and generalisation. The general form of adaptive optimiser updates is w⃗_k+1 = w⃗_k - α_k B^-1∇ L where B is a pre-conditioning matrix. The essential idea of adaptive methods is to use the pre-conditioning matrix to make the geometry of L more favourable to SGD. One approach is to take B to be diagonal, which can be thought of as having per-parameter learning rates adapted to the local loss surface geometry. More generally, one might seek an approximation B to the local loss surface Hessian, effectively changing the basis of the update rule to a natural one, with per-direction learning rates. Alternatively, if B ≈ H then the local quadratic approximation to the loss surface, i.e. the second-order term in a Taylor expansion, is isotropic in weight space. What both of these approaches have in common is that they in principle allow for bigger steps (i.e. larger α_k), as the different scales of the ∇ L in the different parameters are normalised. Indeed, a standard approach for diagonal B is to construct a diagonal approximation to H. Without this, α_k must essentially be tuned to be so small that the change of w⃗ in the direction of the largest component of ∇ L is not too large. For Adam <cit.>, the most commonplace adaptive optimiser in the deep learning community, B is given by the diagonal matrix with entries (√(⟨ g_k^2⟩)+ϵ)/⟨ g_k⟩. Here g_k is the loss gradient and ⟨·⟩ denotes an empirical exponential moving average over iterations. For many practical problems of interest, the test set performance of adaptive gradient methods is significantly worse than SGD <cit.>, a phenomenon that we refer to as the adaptive generalisation gap. As a consequence of this effect, many state-of-the-art models, especially for image classification datasets such as CIFAR <cit.> and ImageNet <cit.>, are still trained using SGD with momentum. Although less widely used, another class of adaptive methods which suffer from the same phenomenon <cit.> are stochastic second order methods, which seek to alter the learning rate along the eigenvectors of the Hessian of the loss function. KFAC <cit.> uses a Kronecker-factored approximation of the Fisher information matrix (which can be seen as a positive definite approximation to the Hessian <cit.>).
Other methods use Hessian–vector products <cit.> in conjunction with Lanczos methods and conjugate gradients <cit.>. All second order and adaptive gradient methods are endowed with an extra hyper-parameter called the damping or numerical stability coefficient respectively. This parameter limits the maximal learning rate along the eigenvectors or unit vectors in the parameter space respectively and is typically set to a very small value by practitioners. In principle there is no reason why a certain parameter gradient should not be zero (or very small) and hence the inversion of B could cause numerical issues. This is the original reason given by <cit.> for the numerical stability coefficient ϵ. Similarly so for KFAC, for which B = ∑_i=1^P λ_i e⃗_i e⃗_i^T where {λ_i, e⃗_i}_i=1^P are the eigenvalue-eigenvector pairs of the Kronecker-factored approximation to the Hessian. Hence to each eigenvalue a small damping coefficient δ is added. Whilst for both adaptive and second order gradient methods the numerical stability and damping coefficients are typically treated in the literature as extra nuisance parameters which are required to be non-zero but not of great theoretical or practical importance, we strongly challenge this view. In Chapter <ref>, we relate these coefficients to the well-known linear shrinkage method from random matrix theory. It is clear from a random matrix theory perspective that the sub-sampling of the Hessian will lead to the creation of a noise bulk in its spectrum around the origin, precisely the region where the damping coefficient is most relevant. We show, both experimentally and theoretically, that these coefficients should be considered as extremely important hyper-parameters whose tuning has a strong impact on generalisation. Furthermore, we derive, from a random matrix theory additive noise model of the loss surface Hessian, a novel algorithm for their online estimation, which we find effective in experiments on real networks and datasets.
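The following sketch (purely illustrative; the toy eigendecomposition and variable names are ours, not the thesis's algorithm) shows where the damping coefficient δ and the numerical stability coefficient ϵ enter the respective update rules, and why a zero or near-zero eigenvalue makes the undamped inverse problematic.

```python
import numpy as np

# Damped second-order style step versus an Adam-style diagonal step.
rng = np.random.default_rng(1)
P = 10
Q = np.linalg.qr(rng.normal(size=(P, P)))[0]      # orthonormal eigenvectors e_i
lam = np.abs(rng.normal(size=P)); lam[0] = 0.0    # one (near-)zero curvature eigenvalue
grad = rng.normal(size=P)
alpha, delta, eps = 0.1, 1e-3, 1e-8

# B = sum_i (lam_i + delta) e_i e_i^T, so B^{-1} grad is finite even when lam_i = 0.
step = Q @ ((Q.T @ grad) / (lam + delta))
second_order_update = -alpha * step

# Adam-style diagonal analogue: eps plays the same stabilising role per parameter.
v = grad ** 2                                     # stand-in for the moving average <g^2>
adam_style_update = -alpha * grad / (np.sqrt(v) + eps)

print(np.linalg.norm(second_order_update), np.linalg.norm(adam_style_update))
```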
Finally, in Chapter <ref> we prove a novel result for the limiting spectral measures of additions of random matrices. It is well known <cit.> that the sum of two freely independent random matrices with well defined limiting spectral measures has a limiting spectral measure given by the free convolution of the two. We are able to establish the same free convolutional limiting spectral measure but requiring only that one of the matrices obeys quantum unique ergodicity. The proof of this result is also a novel application of quantum unique ergodicity, as we leverage a supersymmetric representation to compute the limiting spectral density and use the defining quantum unique ergodicity property to compute the integral over the matrix eigenvectors. § LITERATURE REVIEW OF DEEP LEARNING THEORY We close this chapter with a broad review of the literature on deep learning theory. This is a field experiencing a tremendous amount of activity so our review shall be far from exhaustive. We will give particular attention to the literature related to random matrix theory, but shall also seek to highlight the other broad approaches that have attained some prevalence. §.§ Random matrix theory Random and complex landscapes. The work most closely related to our own began with <cit.> where the connections between neural network loss surfaces and spin glasses were first introduced and studied, with the underpinning mathematical results being drawn from the random matrix theory literature such as <cit.>; we discuss these works in detail elsewhere in this chapter and the next. In the same lineage of work are more recent notable examples such as <cit.> which can be summarised as the study of high-dimensional signal-plus-noise models. These works avoid any direct connection to neural networks, instead focusing on much simpler random matrix and tensor models that act as playgrounds for stochastic gradient descent on high-dimensional loss surfaces. This approach is of course inspired by <cit.> and these works similarly consider issues of loss surface complexity, but with the explicit inclusion of extra structure, or `signal'. This signal was notably lacking from <cit.>, as the spin-glass is really just a model of pure noise. Intuitively, one expects that the loss surfaces of real neural networks contain some underlying structure induced by the structure of the data and the network itself, but that a considerable component of noise is also induced on the surface by the noise on the data and also possibly the weights and biases themselves. By creating simple, pared-down loss surface models containing the same kind of high-dimensional noise present in the spin glass, but with some signal (or structure) injected, these works are able to study questions about the presence and prevalence of spurious minima, i.e. local minima of the noisy loss surface that are uncorrelated with the true minima of the noise-less surface. They uncover phase transitions between chaotic surfaces on which the structure-induced minima are swamped by spurious minima and surfaces on which, though they contain many noise-induced minima, the structure of the minima is such that the signal is still recoverable. Random neural networks. In the line of work discussed above, random matrices arise somewhat indirectly in the study of neural networks via the Kac-Rice approach to landscape complexity analysis.
Since neural networks are constructed using, and parametrised by, weight matrices in each of their layers, one can naturally seek a theory of random neural networks by considering these weight matrices to be random. <cit.> bridged the gap between studies of landscape complexity and random neural networks by considering networks with i.i.d. normal weights applied to i.i.d. normal data and computing the limiting spectral density of their Hessians in the large parameter number limit. They decompose the Hessian as the sum of a positive semi-definite matrix (often called the Gauss-Newton matrix elsewhere <cit.>) and a matrix that contains all the dependence on the residuals (i.e. the error terms between the network predictions and the truth values). With this decomposition, they make assumptions of free independence to enable the use of tools from free probability to compute the limiting spectral densities. By assuming also an i.i.d. Gaussian form of the residuals parameterised by some variance ϵ, they are able to describe the spectra of neural network Hessians at different loss values and compare with experiment. Random networks were also considered in <cit.> in the context of random feature ridge regression, i.e. a 1-layer neural network with MSE loss and an L2 ridge regularisation penalty for which only the final layer is trained. The first layer, being untrained, acts as a random transformation of the input data and then the weights of the final layer have a unique solution known in closed form, since the final layer is simply a linear ridge regression on the random features. Since the final layer weights can be solved in closed form, a closed form is available for the training error which is found to be given in terms of the resolvent Q = (N^-1Σ^TΣ + γ I)^-1 where Σ = σ(WX) are the random features produced by the random weights W and input data points X and N is the number of random features (i.e. the width of the hidden layer). The proofs rely largely on concentration properties of sub-Gaussian random variables to establish that various random matrix quantities concentrate on their expectations. In a related work <cit.> 1-layer neural networks with random weights were considered. The authors compute the limiting spectral density of the Gram matrix Y^TY of the network output Y. This work was the first in which the non-linearities introduced by neural network activation functions were handled directly and analytically in the setting of random matrix theory, since <cit.> was restricted to polynomial activation functions. The weight entries and the data entries are assumed to be i.i.d. Gaussian and the proof of the limiting spectral density uses the moment method of random matrix theory. An interesting consequence of the results is that there exist certain non-linear activation functions for which the Gram matrix spectrum is the Marchenko-Pastur distribution, so that the spectrum is preserved through the non-linear activation function. The authors conjecture that these "isospectral" activation functions may have beneficial practical properties for training, as the spectral statistics remain constant through the layers, an idea somewhat reminiscent of batch norm <cit.>. <cit.> extends the results of <cit.> to more general (i.e. sub-Gaussian) entry distributions on the network weights and the data, using again a moment method proof. They also extend to the case of multiple layers, though the results in that case are very intricate and opaque.
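For concreteness, a small sketch of random feature ridge regression of the kind described above is given below (illustrative only: the tanh feature map, the 1/n normalisation and all names are our choices and may differ from the conventions of the cited work).

```python
import numpy as np

# Random feature ridge regression: only the final layer is trained, and its
# weights have a closed form because the objective is a linear ridge regression
# on the random features Sigma = sigma(W X).
rng = np.random.default_rng(0)
d, N, n, gamma = 20, 200, 500, 1e-2          # input dim, hidden width, samples, ridge
X = rng.normal(size=(d, n))                   # data, one column per sample
y = rng.normal(size=n)                        # targets (random, purely for illustration)
W = rng.normal(size=(N, d)) / np.sqrt(d)      # fixed random first-layer weights

Sigma = np.tanh(W @ X)                        # random features, shape (N, n)
# Minimiser of (1/n)||Sigma^T a - y||^2 + gamma ||a||^2:
a = np.linalg.solve(Sigma @ Sigma.T / n + gamma * np.eye(N), Sigma @ y / n)
train_err = np.mean((a @ Sigma - y) ** 2)
print("training error:", train_err)
```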
Continuing again in this line of work, <cit.> extends the analysis to 1-layer random networks with random biases and shows that the distribution of the biases induces something like a mixture over activation functions. <cit.> considers the input-output Jacobian J of random multi-layer networks using the techniques of free probability theory to derive the spectrum of the Gram matrix JJ^T. Using these results, they are able to derive necessary and sufficient conditions on the spectra of the weight matrices to give a stable spectrum (i.e. neither explosion nor collapse) in the large network depth limit. These results were subsequently generalised and given a fully-rigorous proof in a series of papers by Pastur and collaborators <cit.>. The first paper in the sequence considers the Gaussian case, as in <cit.>, with the chief difficulty being that the free independence that is required to apply the streamlined free probability argument given in <cit.> is not apparent. The second paper extends to general i.i.d. distributions with at least four finite moments and the third extends to weight matrices with orthogonal distributions (so not i.i.d. entries). Another perspective on random neural networks is given in the works <cit.>, where the techniques of mean field theory are applied to the standard multi-layer perceptron architectures, firstly with linear or ReLU activations and then with more general activations and batch normalisation. The training loss of the network plays the role of the Lagrangian and the partition function is computed by explicitly integrating out the random (i.i.d. Gaussian) weights and biases. In the case of batch normalisation, the authors are able to use the mean field techniques to make predictions about instabilities (e.g. due to gradient explosion) of very deep networks in the presence of batch normalisation. Beyond the question of why SGD works at all for deep neural networks, there are various phenomena observed in their training and use that lack adequate theoretical explanations. One such is the double/triple descent phenomenon, which is commonly observed in large modern deep neural networks but is at odds with classical statistical learning theory. Standard results from statistical learning theory dictate that the best attainable test loss of a particular model decreases as the number of parameters N of the model increases, but only up to a point beyond which the loss increases again. This is a reflection of the classical bias-variance trade-off <cit.> which states that the expected test error of a machine learning model can be decomposed into two additive terms, bias and variance, which account for different sources of error in the fitting process. High variance means that there is high variation in the estimated parameters between different sampled instances of the training set, which indicates that the model tends to systematically fit to the noise in the training data, rather than the underlying structure (called overfitting). High bias means that the test error over different sampled instances of the training set is biased away from zero, indicating that the model tends to systematically fail to identify meaningful generalisable structure in the data (called underfitting). It is intuitive that a model with too few parameters will tend to underfit, as the model lacks the expressive capacity to capture the structure in the data. On the other hand, a model with too many parameters (i.e.
more than are really needed to capture the structure in the data) will tend to overfit as it has spare capacity that can be used to interpolate noise in the training data, which of course drives down the training loss, but at the expense of increasing the test loss. All of this holds for classical approaches to machine learning, i.e. broadly those before the deep learning revival of the 2010s, however repeated empirical observations with increasingly larger deep networks have revealed that this classical picture has its limits. Modern deep networks used in computer vision applications are routinely chosen to have 10s of millions of parameters, which by any reasonable measure is considerably more than would be required to express the true structure in the data and is indeed sufficient to allow for perfect interpolation of the training data <cit.>. Modern transformer networks used extensively in natural language processing are larger still <cit.> with 100s of billions of parameters. Repeatedly and in multiple domains, it has been observed that dramatically increasing the number of network parameters and also the training time can lead to ever better test set performance even when training data are near perfectly interpolated. This phenomenon was dubbed the double descent, referring to the shape of the graph of test error against number of parameters. Classically, this graph has a single local minimum at the point of bias-variance balance, but very large deep neural networks have revealed a second, lower minimum in the greatly (“abundantly”) over-parameterised region <cit.>. Prior works attempted to analyse this phenomenon in the simplest cases of linear regression models <cit.>, but the key contribution of <cit.> was to analyse the effect of parameter number on single hidden layer random networks. Neural networks with a single hidden layer are the simplest example of a model in which the number of trainable parameters N can be specified separately from the input data dimension d and target data dimension C, since in a model with no hidden layers (e.g. linear or logistic regression) N is necessarily equal to dC, whereas the width of even a single hidden layer can be specified arbitrarily. The authors were able to show that single hidden layer networks with random i.i.d. Gaussian weights trained on entirely random data with random labels display a double descent, even a triple descent, with a third test error minimum in an extreme “hyperabundant” parametrisation region. Much like the earlier work <cit.>, the test error is expressed as a certain random matrix resolvent which is in turn computed by determining the limiting spectral density of a certain random matrix via tools from free probability theory and invoking notions of random matrix universality to replace the complicated, intractable matrix ensembles arising from the network with certain independent Gaussian matrices. This work produces an immediate insight: the double (triple) descent phenomenon is not unique to deep neural networks, nor even to the type of data on which they are typically trained or the training procedure, but rather it is a “background” property of over-parametrised non-linear models and generic data. Spectra of neural networks. The works discussed so far consider random neural networks and random matrices in neural networks ex-ante, i.e. modeling assumptions are made, or models constructed, that explicitly introduce randomness to neural networks or their loss surfaces.
There is another line of work which is better characterised as ex-post randomisation, wherein neural networks are directly studied and, for example, spectral properties of their loss surface Hessians or weights are analysed. For the first time in <cit.>, the spectra of loss surface Hessians of real-world neural networks were approximated and analysed. For practical modern neural networks, the loss surface Hessian is of course far too large to even store in memory, let alone compute via automatic differentiation or eigen-decompose, having N^2 entries, where the number of network parameters N is typically 10^7 or more. The key numerical advance in these works is the application of Lanczos iteration methods <cit.> to compute high-quality approximations to the spectral density of very large matrices given only the matrix-vector multiplication function ℳ_H : ℝ^N→ℝ^N with ℳ_H(v⃗) = Hv⃗ and not the whole matrix H. This can be combined with the Pearlmutter trick <cit.> which computes ∑_j ∂^2 l/∂ w_i ∂ w_j v_j = ∂/∂ w_i(v⃗^T∂ l/∂w⃗), which is very much amenable to automatic differentiation in modern deep learning frameworks. Actually, this approach was pioneered contemporaneously by Granziol and collaborators in a sequence of pre-prints for which the best reference is <cit.>. One of the key insights in those works was to highlight the very considerable discrepancy between the spectra of real neural network Hessians and those of standard canonical random matrix models such as the GOE that is assumed by spin glass models such as <cit.> and in <cit.>, it was proposed that the spectra of products of canonical random matrix ensembles can be used to obtain agreement with certain aspects of the spectra of real neural networks, in particular their considerable rank degeneracy. These empirical analyses uncover rich and interesting structure in the spectra of real deep neural networks, in particular the spectra clearly display a bulk and some large outliers. The outliers appear to be directly attributable to the classes in a typical classification problem (i.e. one outlier per class) and naturally one expects from random matrix theory that the bulk corresponds to noise <cit.>. There is further structure still, with the discovery in a later work <cit.> of a group of eigenvalues outside of the bulk[Though not stated by the author, this extra group of outlier eigenvalues must clearly be outside the Tracy-Widom region as well.] but much smaller than the main outliers. There are typically C(C-1) of these outliers, for a C class classification problem, so they appear to correspond somehow to inter-class correlations. Rather than considering loss surface Hessians, another line of inquiry has directly analysed the spectra of neural network weight matrices before, during and after training. <cit.> consider several types of network trained on real datasets and look at the spectra of their weight matrices at initialisation and as training progresses. They identify several distinct phases of training from these spectra, beginning with full classical random matrix behaviour at initialisation and developing towards some heavy-tailed distribution, leading to the conjecture that neural networks are implicitly regularised by some process inducing these heavy-tailed spectra as training proceeds. Note that the idea of implicit regularisation of neural networks via stochastic gradient descent pre-dates this work by several years <cit.>.
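As an illustration of the Pearlmutter trick, the following PyTorch sketch (our own, not the implementation used in the cited works) computes Hessian-vector products with two backward passes; such products are the only ingredient required by Lanczos-type spectral approximations.

```python
import torch

# Hessian-vector product Hv via double backpropagation: differentiate the scalar
# v^T (dl/dw) with respect to the parameters, never forming the N x N Hessian.
def hessian_vector_product(loss, params, vec):
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    gv = torch.dot(flat_grad, vec)                        # scalar v^T grad
    hvs = torch.autograd.grad(gv, params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hvs])        # flattened Hv

# Toy usage on a tiny model, small enough to sanity-check directly.
model = torch.nn.Linear(5, 1)
x, y = torch.randn(32, 5), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
params = [p for p in model.parameters() if p.requires_grad]
n = sum(p.numel() for p in params)
v = torch.randn(n)
print(hessian_vector_product(loss, params, v).shape)      # torch.Size([n])
```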
Finally, we mention <cit.> in which the spectra of random and trained neural network weight matrices were analysed but on the local scale, rather than the global scale pursued by <cit.>. This work followed on from our own in Chapter <ref> <cit.> and similarly discovered the robust presence of universal GOE random matrix spacing statistics in the spectra. §.§ Other approaches We mentioned above some mean-field approaches to the analysis of neural networks, but this review would not be complete without also mentioning the recent work of Roberts and Yaida <cit.> in which this subject is developed in considerable depth. The authors proceed incrementally from linear networks at initialisation (the simplest case), to non-linear networks and ultimately training dynamics via a perturbation theory approach. This analysis relies heavily on the neural tangent kernel which can be introduced quite simply by considering the loss derivatives via the chain rule: ∂ L/∂θ_a = ∑_i ∂ L/∂ z_i ∂ z_i/∂θ_a where z⃗ is the network output which is fed into the loss L. A single step of stochastic gradient descent will update the weights θ⃗ by taking a small step of scale η along the negative gradient direction, so that the leading order (in η) change in the loss is Δ L = -η∑_i,j∑_a ∂ L/∂ z_i ∂ L/∂ z_j ∂ z_i/∂θ_a ∂ z_j/∂θ_a which leads to the identification of the neural tangent kernel K_i,j = ∑_a ∂ z_i/∂θ_a ∂ z_j/∂θ_a. The neural tangent kernel can be seen to largely govern the dynamics of stochastic gradient descent for very wide networks (i.e. those with some fixed number of layers but very many parameters in each layer), see e.g. <cit.>. Building on the above-mentioned decomposition of neural network Hessian spectra into components attributable to class centres and inter-class correlations <cit.>, the concept of neural collapse has been advanced. Empirical studies of network pre-activations in <cit.> discovered that, in networks trained to good accuracy, the pre-activations coalesce around C clusters, one for each class in the classification problem. Indeed, as training progresses the pre-activations converge to very low variance around the class cluster centres and the cluster centres themselves converge to an equiangular tight frame. Another recent line of work studies neural networks in their capacity as function approximators <cit.> and attempts to characterise, using the tools of mathematical analysis, the sets of functions that can be well approximated by neural networks. A 2-layer (i.e. 1 hidden layer) network can be expressed as a random feature model f(x⃗, a⃗) = 1/m∑_j=1^m a_j ϕ (x⃗; w⃗_j),   ϕ(x⃗, w⃗) = σ(x⃗^Tw⃗). This expression can be rewritten as an integral by defining an atomic probability measure π = m^-1∑_j=1^m δ_w⃗_j over the first layer weights {w⃗_j}_j: f(x⃗, a⃗) = ∫ a(w⃗) ϕ (x⃗; w⃗) dπ(w⃗), which suggests the generalisation of this expression to any probability measure π, so producing a type of random neural network with marginalised first layer weights. In this construction, the 2-layer MLP network can be viewed as a Monte Carlo integration approximation to this more general object. An important insight about the role of the curse of dimensionality in deep learning is revealed by this formalism. Classical function approximation theory typically constructs approximations of a function f by defining some Sobolev space with a convenient basis, say of polynomials. If m is the number of free parameters in the approximation (e.g.
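The empirical neural tangent kernel of a small network can be computed directly from its parameter Jacobian, as in the following sketch (illustrative only; the architecture and names are ours, not drawn from the cited works).

```python
import torch

# Empirical NTK: K_ij = sum_a (dz_i/dtheta_a)(dz_j/dtheta_a), formed as J J^T
# where J stacks the per-input parameter gradients of the scalar network output.
torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
params = [p for p in model.parameters() if p.requires_grad]
xs = torch.randn(4, 3)                     # four inputs, so K is 4 x 4
outputs = model(xs).squeeze(-1)            # scalar output z_i per input

rows = []
for i in range(outputs.shape[0]):
    grads = torch.autograd.grad(outputs[i], params, retain_graph=True)
    rows.append(torch.cat([g.reshape(-1) for g in grads]))
J = torch.stack(rows)                      # Jacobian of outputs w.r.t. parameters
K = J @ J.T                                # empirical NTK, symmetric positive semi-definite
print(K)
```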
the maximum degree of the polynomial basis) and d is the input dimension of f, then one obtains an approximation error that scales something like m^-α/d for some α>0 defined by the details of the chosen approximation space. As the input dimension d grows, this error term becomes less and less favourable, requiring exponentially more free parameters m to achieve the same approximation error. This contrasts sharply with the above Monte Carlo integration interpretation of a 2-layer MLP, which has an error term with the standard MC scaling of m^-1/2, which crucially is independent of the input dimension d. This analysis approach provides some insight into how neural networks appear to overcome the curse of dimensionality in their input space that is faced by other approaches to machine learning. The results in <cit.> go further and in fact identify precisely the function spaces for which 2-layer MLPs can provide good approximations. <cit.> considers the success of stochastic gradient descent at finding high quality minima for deep neural networks. As we have already discussed, classical optimisation theory holds that finding global minima of non-convex functions is generally intractable and <cit.> argues that the considerable over-parametrisation of modern neural networks implies that their loss surfaces are filled with many local minima and they are generically not even locally convex around those minima. The PL inequality <cit.> for a loss function L with constant μ is 1/2‖∇ L(w⃗)‖^2 ≥μ L(w⃗) and, combined with a smoothness condition, is sufficient to guarantee exponential convergence of stochastic gradient descent <cit.>, but the PL condition is much weaker than even local convexity. The conclusion of this line of work is broadly that the classical picture, in which lack of convexity and numerous local minima mean that stochastic gradient descent on neural networks is doomed to fail, is overly pessimistic, and that weaker, more plausible conditions may suffice to provide a reasonable expectation of convergence. CHAPTER: MATHEMATICAL TOOLS This chapter aims to provide a self-contained introduction to the main mathematical tools required in the subsequent chapters, intended to be accessible to a mathematical audience with no previous familiarity with random matrix theory. § INTRODUCTION TO RANDOM MATRIX THEORY Random matrix theory provides much of the mathematical context and insight for the results in this thesis, as well as providing most of the techniques used in the calculations. It is a large and diverse field touching many areas of pure and applied mathematics and physics and we shall not attempt to provide a comprehensive introduction. The classic introduction is Mehta's book <cit.>. Thorough and mathematically orientated modern treatments can be found in the books by Anderson, Guionnet and Zeitouni <cit.>, Tao <cit.> and Meckes <cit.>. Accessible and application orientated introductions are given by <cit.> and <cit.>. A detailed introduction to modern topics in a mathematically rigorous style can be found in <cit.>. Given the breadth of random matrix theory, only a fraction of its concepts and tools are required in this thesis and so we restrict this introduction to those. §.§ Random matrices A random matrix is no more nor less than one would expect, namely a matrix-valued random variable. Such objects are entirely natural in almost any branch of applied mathematics or statistics.
Consider for example a sample of N data points each being represented as a tuple of M real values, such as 2-tuples of latitude and longitude for locations of house or 500-long tuples of returns data for the S&P 500 index. It is natural, at least from the perspective of computational convenience, to stack these data points into an array X of shape N× M with each row corresponding to a single sample. From the perceptive of a pure mathematician thinking of matrices as representations of linear maps on vector spaces, X does not appear to be a matrix, but just a collection of number conveniently packed into a array. Suppose that the N samples are x⃗_1, …, x⃗_N drawn from a multivariate Gaussian distribution 𝒩(0, Σ). The information contained in the sample is entirely represented by this sequence in ℝ^N, so what is the purpose of stack them into a `matrix' X? One answer is, of course, numerical convenience and efficiency. For example, suppose that Σ is known and we wish to construct the standardised variables z⃗_i = Σ^-1/2x⃗_i. One can view this as a sequence of N matrix-vector operations, but it is more mathematically compact and numerically efficient to instead view it as a single matrix-matrix operation Z = Σ^-1/2X. There are, however, deeper and richer reasons to consider X. Consider the matrix S = 1/NX^T X - an M× M positive semi-definite symmetric matrix. One can clearly write S_ij = 1/N∑_k=1^N (x⃗_k)_i (x⃗_k)_j and so S_ij is an empirical estimate from N samples of the covariance between the i-th and j-th coordinates in the data distribution. The eigenvalues and eigenvectors of S clearly have meaning, for example the eigenvector corresponding to the largest eigenvalue is the direction in ℝ^M responsible for the most variance in the data. In the S&P 500 example above, this direction would correspond to `the market', and in the coordinates example, it may correspond to a major river along whose banks most settlements are found. We need not restrict ourselves to matrices of the form of N samples of M dimensional variables. Consider data collected from a telecommunications network on N end-points (or nodes), examples of which include telephone numbers or registered users of instant messaging services. Let X_ij be the number of communication events between end-point i and end-point j observed over some time period. Properly normalised by the total number of events in the same period, X_ij could instead be an empirical estimate of the probability of communication between end-points i and j. Viewing X as a symmetric matrix, not merely and array, and computing its spectral decomposition, one will find that the eigenvectors corresponding to meaningful communities in the network, with the eigenvalues giving an estimate of the relative importance of each community in the network. These examples illustrate a critical point: viewing arrays of random variables (or data) as matrices is not a mere numerical convenience, for one finds that bona fide linear algebraic objects such as eigenvalues and eigenvectors have meaning and structure. Let us return to the example of a matrix X containing financial data, e.g. share prices or returns, for M assets sampled over N days. If M is small compared to a large sample size N, then we can expect much of the noise in the samples to average out to produce a matrix S with M meaningful eigenvector representing genuine correlations between the M assets. 
In the opposite extreme where M is much larger that N, we expect that many of the genuine correlations in the data will be lost in the noise. But what of the intermediate case, where M and N are of comparable size? Intuitively, one expects that the strongest signals in the data (such as the the market) will be preserved and clearly visible through the noise in the data, while more subtle signals will be lost. Translating this into the language of random matrices, the largest eigenvalues (and their eigenvectors) correspond to genuine signal in the data, while the smallest correspond to sample noise. The obvious question is whether one can separate the signal from the noise, i.e. how many of the largest eigenvalues are signal? This question can be seen, conceptually, as motivating much of the work in random matrix theory. Consider any linear algebraic property of a matrix: eigenvalues, eigenvectors, determinant, trace, characteristic polynomial, condition number, etc. Given a distribution on a matrix, what is the distribution on any of these objects? If one can answer this question for pure noise random matrices, then one can easily identify matrices that contain signal. If one can answer the question in the case of signal-plus-noise random matrices, then one can separate the signal from the noise. The above discussion has been rather statistically-focused, but historically random matrix theory was used by Wigner and Dyson <cit.> to provide elegant and powerful models for atomic nuclei. The governing quantum mechanical equation for an atomic nucleus is the Schrödinger equation H ψ_i = E_i ψ_i where H is an Hermitian operator (the Hamiltonian) on an Hilbert space, {ψ_i} is a wave functions and E_i are corresponding energy levels. The physical observables here are the energy levels, but in all but the very simplest of cases (such as a Hydrogen nucleus) they cannot be computed analytically, or even numerically, owing to the complexity of the interaction between the nucleons. Dyson and Wigner's insight was that the general appearance of energy levels on average can be described by simple statistical models of (<ref>) not requiring detailed knowledge of the equation or its solution. To quote Dyson <cit.>: [2em]2em The statistical theory will not predict the detailed sequence of levels in any one nucleus, but it will describe the general appearance and the degree of irregularity of the level structure, that is expected to occur in any nucleus which is too complicated to be understood in detail. This aspect of random matrix theory will be of particular value in this thesis. We endeavour to understand properties of very large deep neural networks applied to complicated high-dimensional tasks on real-world data. Such models may contain millions of free parameters operating on datasets of millions of samples with many thousands of dimensions per sample and complicated statistical dependence between dimensions. The dynamics of the model parameters as they are trained are far too complicated to be studied directly. As with atomic nuclei many decades earlier, the central hypothesis of this thesis and other related contemporary work is that statistical theories of deep neural networks can describe their general properties and be used to understand their behaviour without reference to the intractable details of their training dynamics. §.§ Random matrix ensembles Probability distributions on matrices are commonly referred to as ensembles in random matrix theory. 
There are a modest number of canonical random matrix ensembles that form the foundation of much of the work in random matrix theory and about which a great deal is known in considerable mathematical detail. The importance of each of the canonical ensembles tends to vary between application areas, so we shall restrict ourselves in this section to only those ensembles that feature in the coming chapters. We shall be exclusively interested in real matrices, as the matrices that arise when studying neural networks and machine learning are almost always real. Moreover, many of the matrices we shall be interested in will be symmetric. The most important random matrix ensemble for this thesis is the Gaussian orthogonal ensemble (GOE). There is some variation between authors on unimportant normalisation, but we shall say that an N× N matrix X ∈ℝ^N× N is a GOE matrix, X∼GOE^N, if X_ij i.i.d.∼𝒩(0, (1 + δ_ij)/(2N)) for i ≤ j and X_ij = X_ji for i> j, i.e. X has Gaussian entries, independent up to symmetry and with twice the variance on the diagonal as off-diagonal. This specific variance structure allows for a powerful closed-form expression of the law of X: dμ(X) = 1/Z_Nexp(-N Tr X^TX/2) dX where Z_N is a normalisation constant and dX is simply the standard Lebesgue product measure on the upper-diagonal and diagonal entries of X. Note that the GOE is called an orthogonal ensemble because it possesses symmetry with respect to the real orthogonal group O(N) of orthogonal matrices. Sampling a matrix X from GOE^N can be done with a very simple algorithm: Y_ij i.i.d.∼𝒩(0, 1),    X = (Y + Y^T)/(2√(N)). The GOE is a specific case of the more general class of Wigner matrices, which have independent (up to symmetry) Gaussian entries with variance σ_d^2/N on the diagonal and σ_u^2/N off the diagonal. Generalising even further, generalised Wigner matrices take the form X_ij i.i.d.∼μ for i < j,    X_ii i.i.d.∼μ_d and X_ij = X_ji for i > j, for any sufficiently well-behaved measures μ and μ_d on ℝ. There are complex Hermitian and quaternionic versions of GOE and Wigner matrices, for details of which we refer the reader to any standard reference on random matrix theory. An alternative generalisation of the GOE is born of (<ref>), which we rewrite as dμ(X) = 1/Z_Nexp(-N Tr V(X)) dX where V:ℝ^N× N→ℝ^N× N is defined to be V(X) = 1/2X^TX. In deference to its origins in statistical physics, V is often referred to as a potential. With this rewritten form of the GOE density, one can simply change the definition of V and so obtain different distributions on real symmetric matrices. We shall also encounter matrices distributed with Haar measure on the orthogonal group O(N). The Haar measure on any compact group G <cit.> is the unique (up to normalisation) measure μ_Haar, finite on all measurable subsets of G, such that μ_Haar(gS) = μ_Haar(S)  ∀ g∈ G and S⊂ G. The Haar measure can be viewed as the `flat random' measure on G and, in the case G = O(N), a matrix distributed with Haar measure is a uniform random matrix on the real orthogonal group. Geometrically, a matrix with Haar measure on O(N) is a uniform random basis rotation. Haar random orthogonal matrices O can be sampled quite simply by sampling N i.i.d. vectors x⃗_i with i.i.d. 𝒩(0,1) entries and then applying the Gram-Schmidt algorithm to obtain an orthonormal set of vectors o⃗_1, …, o⃗_N which are the rows of the Haar-distributed matrix <cit.>. Finally, we mention the real Ginibre <cit.> ensemble on N× M matrices which are simply matrices with i.i.d. entries and no symmetry constraint.
The Ginibre analogue of the GOE is an ensemble of matrices with i.i.d. 𝒩(0, 1/N) entries. §.§ Eigenvalues and spectral measures Let X_N be any real symmetric random matrix of shape N× N. The following discussion could equally be presented for any Hermitian random matrix and could be generalised much further at the expense of having to account for non-real eigenvalues. For the purposes of this thesis, we may restrict our discussion to matrices with real eigenvalues and, to be concrete, let us stick to real symmetric matrices. Let λ_1 < λ_2 < … < λ_N be the eigenvalues of X_N. The empirical spectral measure of X_N is defined as μ̂_N = 1/N∑_i=1^N δ_λ_i where δ_λ is a Dirac δ-function mass at location λ, i.e. defined by ∫_A δ_λ = 1{λ∈ A} for any set A ⊂ℝ. Since X_N is random, its eigenvalues {λ_1, …, λ_N} are random variables with some joint probability density p(λ_1, …, λ_N). μ̂_N is a probability measure on ℝ and moreover, it is a random probability measure, its distribution being induced by p(λ_1, …, λ_N). Imagine constructing many independent samples of X_N, hence from p(λ_1, …, λ_N) and hence of μ̂_N. One could imagine averaging the samples of μ̂_N, 1/m∑_j=1^m μ̂_N^(j), and so obtaining some indication of the average location of the eigenvalues of X_N. Intuitively, one would imagine this average measure becoming a better and better approximation to some absolutely continuous measure (though there is, of course, no general guarantee of such convergence). Extending this to a concrete mathematical question: does 𝔼μ̂_N exist, and what is it? In the same way that one can imagine growing the number of sampled eigenvalues by increasing the number of independent samples of X_N, one can also consider a family of distributions on X_N, parametrised by dimension N∈ℕ, and let the dimension N for a single sample grow. In this context, there is another natural question: does lim_N→∞μ̂_N exist, how strong is the convergence, and what is the limit measure? When it exists, we shall define μ_∞ = lim_N→∞μ̂_N to be the limiting spectral measure of X_N (being intentionally vague about the strength of convergence, for now). Likewise, when the expectation exists, we define μ_N = 𝔼μ̂_N to be the mean spectral measure of X_N. When either of these measures is absolutely continuous with respect to Lebesgue measure, we define ρ_∞(λ) = dμ_∞/dλ to be the limiting spectral density (LSD) and similarly ρ_N(λ) = dμ_N/dλ is the mean spectral density. Let us now be concrete and consider some specific examples, beginning with the most famous. [GOE] Recalling the form (<ref>) of the GOE measure on N × N matrices, we can now explore why it was described as “powerful”. Let X be an N× N GOE random matrix. Since X is real symmetric, it is an elementary result of linear algebra that X can be written in the form X = U^TΛ U, where U∈ O(N) is a real orthogonal matrix and Λ is a real diagonal matrix. Of course, the diagonal entries of Λ are the eigenvalues of X and the rows of U are the corresponding eigenvectors. But now exp(-N Tr X^TX/2) = exp(-N Tr U^TΛ U U^T Λ U/2) = exp(-N Tr U^TΛ^2 U/2)=exp(-N Tr Λ^2/2) which depends only on the eigenvalues of X and not on the eigenvectors. We must deal with the Jacobian of the change of variables from X to (Λ, U).
A standard calculation found in any introductory text on random matrix theory shows that ∏_1≤ i ≤ j≤ N dX_ij = Δ({λ_i}_i=1^N)dμ_Haar(U)∏_i=1^N dλ_i where the Vandermonde determinant is defined by Δ({λ_1, …, λ_N}) =|[ 1 1 ⋯ 1; λ_1 λ_2 ⋯ λ_N; λ_1^2 λ_2^2 ⋯ λ_N^2; ⋮ ⋮ ⋮ ⋮; λ_1^N-1 λ_2^N-1 ⋯ λ_N^N-1; ]|= ∏_1≤ i < j≤ N |λ_i - λ_j|. Overall, we see that dμ(X) = dμ_Haar(U) Δ({λ_i}_i=1^N)∏_i=1^N dλ_i e^-Nλ_i^2/2/√(2π N) where there was of course no need to compute the normalisation constant, as it can simply be written down from the simple Gaussian product measure form in the λ_1, …, λ_N. The form (<ref>) already reveals much about the statistics of the eigenvalues and eigenvectors of X. It is immediately obvious that the eigenvalues are independent of the eigenvectors. The eigenvectors are Haar-distributed, so they are simply a flat random orthonormal basis of ^N. The eigenvalues have richer structure, but we can immediately make some heuristic comments on their statistics. Absent the Vandermonde term, the eigenvalues would be i.i.d. centred Gaussians with variance 1/N, so the larger N is, the less dispersed the eigenvalues will be around 0. The Vandermonde term introduces dependence between all of the eigenvalue, and specifically it introduces repulsion, as Δ(λ_i) is a decreasing function of the distance between the eigenvalues. We can predict therefore that the distribution of the λ_1, …, λ_N is some equilibrium balancing the repulsion between all eigenvalues and the independent confining Gaussian potentials on each eigenvalue. We shall now turn our attention to the mean and limiting spectral measures of the GOE. There are several quite different routes by which one can obtain these results. For a general and entirely rigorous approach, which in fact applies to any generalised Wigner matrix, we direct the reader to <cit.>. We present an approach using supersymmetric methods later in this chapter. For now, we shall present a derivation using the Coulomb gas method <cit.> which, in addition to the supersymmetric method, is of great relevance to the central calculations of this thesis. Let us introduce the following reformulation of the eigenvalue joint density function of the GOE: p(λ_1, …, λ_N) = Δ({λ_i}_i=1^N)∏_i=1^N e^-Nλ_i^2/2/√(2π N) = 1/(2π N)^N/2exp(-N/2∑_i=1^N {λ_i^2- 1/N∑_j≠ ilog|λ_i - λ_j| }). Further, using the definition of the empirical spectral density, we can write p(λ_1, …, λ_N) = 1/(2π N)^N/2exp(-N^2/2∫ dμ̂_N(λ){λ^2- ∫_λ' ≠λ dμ̂_N(λ') log|λ' - λ| }) from which the repulsion vs confinement statistics of the eigenvalues is made most clear. The logarithmic (Coulomb) potential has a singularity and 0 which penalises eigenvalue configurations with insufficient space between eigenvalues, whereas the quadratic potential penalises configurations with any eigenvalues too far from the origin. Continuing in the parlance of statistical physics, define the Lagrangian ℰ(λ; μ) = λ^2 - ∫_λ'≠λ dμ(λ') log|λ - λ'| and thence the action ℐ[μ] = ∫ dμ(λ) ℰ(λ; μ) with which we have p(λ_1, …, λ_N) = 1/Z_Nexp(-N^2/2ℐ[μ̂_N]). Let us consider N→∞ and for now assume that μ̂_N converges, in some sense, to μ_∞. It is clear from the action principle (or Laplace's method for asymptotic evaluation of integrals) that μ_∞ must be a global minimiser of ℐ. As such, μ_∞ must be a deterministic probability measure on , so we shall assume weak almost sure convergence of μ̂_N to μ_∞. It remains just to solve the variational problem argmin_μ∈𝒫()ℐ[μ] where 𝒫() is the set of all probability measures on . 
The solution for ρ_∞ can be found e.g. in <cit.> and is ρ_∞(λ) = ρ_SC(λ) = 1/π√(2 - λ^2) which is the celebrated Wigner semi-circle density. The semi-circle density is striking in its simplicity and elegance which, in fact, hint at a much deeper role in random matrix theory than just the limiting spectral density of a particular random matrix ensemble. Firstly, the semi-circle is not unique to the GOE but is shared by all generalised Wigner matrices (though, of course, the derivation above is possible only for the three canonical Gaussian Wigner ensembles: GOE, GUE, and GSE). More importantly, the semi-circle takes the place of the Gaussian in an analogue of the central limit theorem for random matrices, about which we provide more discussion in section <ref>. So far, we have spoken only of the LSD, but what of the mean spectral density? We shall defer an explicit calculation for the GOE to section <ref>, but we shall see that the density ρ_N of the mean spectral measure μ_N can be written as ρ_N(λ) = ρ_SC(λ) + o(1) where the o(1) term is uniformly small in N for λ∈ℝ. Once again, this property of the very special GOE ensemble points to a much deeper phenomenon in random matrix theory: self-averaging. To leading order in large N, the spectral density of a single random GOE matrix of size N× N is deterministic and identical to the mean spectral density, which is an average over the whole GOE ensemble of random matrices. [An invariant ensemble] Recall the Lagrangian defined above ℰ(λ; μ) = λ^2 - ∫_λ≠λ' dμ(λ') log|λ - λ'| with which we were able to express the GOE joint eigenvalue density as p(λ_1, …, λ_N) = 1/Z_Nexp(-N^2/2∫ dμ̂_N(λ) ℰ(λ; μ̂_N)). The origin of the two terms in ℰ is quite plain: λ^2 simply comes from the Gaussian distribution of the GOE entries, while the logarithmic term comes from the Vandermonde determinant. The Vandermonde term is therefore universal to any real symmetric matrix ensemble, as it follows simply from the matrix change of variables. Similarly, if we were discussing complex Hermitian matrices there would be a universal Vandermonde term with exponent simply twice that for real symmetric matrices. So for any real symmetric matrix ensemble, we could in principle repeat the above procedure and arrive at a Lagrangian with exactly the same logarithmic Vandermonde term, along with some ensemble-specific term. Of course, in general this term would not factorise nicely over the eigenvalues, so the above reduction to simply ∫ dμ̂_N(λ) ℰ(λ; μ̂_N) would not be possible. Let us then just consider ensembles for which this factorisation does occur, so that one would obtain the same Lagrangian form of the eigenvalue density but with Lagrangian ℰ_V(λ; μ) =V(λ) - ∫_λ≠λ' dμ(λ') log|λ - λ'| where V: ℝ→ℝ is some function with sufficient smoothness and sufficiently fast growth at infinity to define a normalisable probability density. Such a random matrix ensemble is known as an invariant ensemble because it retains the same orthogonal invariance possessed by the GOE. The matrix density for an invariant ensemble can be simply written as p(X)dX ∝ e^-N/2 Tr V(X)dX. For a real symmetric matrix argument X = O^TΛ O one has the power series definition V(X) = ∑_r≥ 0 a_r X^r = ∑_r≥ 0 a_r O^TΛ^r O = O^T V(Λ) O, so that Tr V(X) = Tr V(Λ) = ∑_j=1^N V(λ_j) and p(X) dX ∝ e^-N/2∑_j=1^N V(λ_j)Δ({λ_i}_i=1^N) dλ_1… dλ_N dμ_Haar(O) which confirms the Lagrangian expression given above.
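For illustration, the semi-circle law and the GOE sampling recipe given earlier are easy to check numerically. The following is a minimal numpy sketch (matrix size, sample count and binning are arbitrary choices) that samples GOE matrices via X = (Y + Y^T)/(2√(N)) and compares the resulting eigenvalue histogram with ρ_SC(λ) = π^-1√(2 - λ^2):

import numpy as np

rng = np.random.default_rng(0)
N, n_samples = 1000, 20

eigs = []
for _ in range(n_samples):
    Y = rng.standard_normal((N, N))
    X = (Y + Y.T) / (2 * np.sqrt(N))   # GOE^N sample
    eigs.append(np.linalg.eigvalsh(X))
eigs = np.concatenate(eigs)

hist, edges = np.histogram(eigs, bins=60, range=(-1.5, 1.5), density=True)
centres = (edges[:-1] + edges[1:]) / 2
rho_sc = np.sqrt(np.clip(2 - centres**2, 0, None)) / np.pi   # semi-circle density, zero outside [-sqrt(2), sqrt(2)]
print(np.max(np.abs(hist - rho_sc)[5:-5]))   # a few per cent in the bulk, shrinking as N and n_samples grow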
§.§ The Wigner surmise As we have seen in the preceding section, though the semi-circle plays a deep role in random matrix theory, it is by no means a universal spectral density for random matrix ensembles. Simply change the potential to deviate from the simple quadratic case was sufficient to produce entirely different spectral densities with invariant ensembles. So, at the level of the mean (or limiting) spectral measure, the semi-circle is more general that the GOE and the Gaussian Wigner ensembles, but is specific to Wigner matrices. One of the most astonishing results in random matrix theory is that there are properties of GOE matrices that are, in fact, universal in the sense that they are properties shared by a very wide class of matrices beyond the GOE and Wigner ensembles. A full discussion of this kind of random matrix universality is deferred to the later Section <ref>. Random matrix theory was first developed in physics to explain the statistical properties of nuclear energy levels, and later used to describe the spectral statistics in atomic spectra, condensed matter systems, quantum chaotic systems etc; see, for example <cit.>. None of these physical systems exhibits a semicircular empirical spectral density. However they all generically show agreement with random matrix theory at the level of the mean eigenvalue spacing when local spectral statistics are compared. The key insight here is that while almost any realistic physical system, model or even the machine learning systems which are the central objects of study for this thesis, will certainly not posses semi-circular densities at the macroscopic scale of the mean spectral density, but nevertheless random matrix theory can still describe spectral fluctuations on the microscopic scale of the mean eigenvalue spacing. It is worth noting in passing that possibilities other than random-matrix statistics exist and occur. For example, in systems that are classically integrable, one finds instead Poisson statistics <cit.>; similarly, Poisson statistics also occur in disordered systems in the regime of strong Anderson localisation <cit.>; and for systems close to integrable one finds a superposition of random-matrix and Poisson statistics <cit.>. So showing that random matrix theory applies is far from being a trivial observation. Indeed it remains one of the outstanding challenges of mathematical physics to prove that the spectral statistics of any individual Hamiltonian system are described by it in the semi-classical limit. Random matrix calculations in physics re-scale the eigenvalues to have a mean level spacing of 1 and then typically look at the nearest neighbour spacings distribution (NNSD), i.e. the distribution of the distances between adjacent pairs of eigenvalues. One theoretical motivation for considering the NNSD is that it is independent of the Gaussianity assumption and reflects the symmetry of the underlying system. It is the NNSD that is universal (for systems of the same symmetry class) and not the average spectral density, which is best viewed as a parameter of the system. The aforementioned transformation to give mean spacing 1 is done precisely to remove the effect of the average spectral density on the pair correlations leaving behind only the universal correlations. In contrast to the LSD, other k-point correlation functions are also normalised such that the mean spacing between adjacent eigenvalues is unity. 
At this microscopic scale, the LSD is locally constant and equal to 1 meaning that its effect on the eigenvalues' distribution has been removed and only microscopic correlations remain. In the case of Wigner random matrices, for which the LSD varies slowly across the support of the eigenvalue distribution, this corresponds to scaling by √(P). On this scale the limiting eigenvalue correlations when P→∞ are universal; that is, they are the same for wide classes of random matrices, depending only on symmetry <cit.>. For example, this universality is exhibited by the NNSD. Consider a 2× 2 GOE matrix, in which case the j.p.d.f has a simple form: p(λ_1, λ_2) ∝ |λ_1 - λ_2| e^-1/2(λ_1^2 + λ_2^2). Making the change of variables ν_1 = λ_1 - λ_2, ν_2 = λ_1 + λ_2, integrating out ν_2 and setting s = |ν_1| results in a density ρ_Wigner(s) = π s/2e^-π/4s^2, known as the Wigner surmise (see Figure <ref>). For larger matrices, the j.p.d.f must include an indicator function 1{λ_1≤λ_2≤…λ_P} before marginalisation so that one is studying pairs of adjacent eigenvalues. While the Wigner surmise can only be proved exactly, as above, for the 2 × 2 GOE, it holds to high accuracy for the NNSD of GOE matrices of any size provided that the eigenvalues have been scaled to give mean spacing 1.[An exact formula for the NNSD of GOE matrices of any size, and one that holds in the large P limit, can be found in <cit.>.] The Wigner surmise density vanishes at 0, capturing `repulsion' between eigenvalues that is characteristic of RMT statistics, in contrast to the distribution of entirely independent eigenvalues given by the Poisson law ρ_Poisson(s) = e^-s. The Wigner surmise is universal in that the same density formula applies to all real-symmetric random matrices, not just the GOE or Wigner random matrices. §.§ Eigenvectors What of the eigenvectors of random matrices? We have already seen that GOE matrices, and invariant ensembles in general, have Haar-distributed eigenvectors entirely independent of the eigenvalues. Just as the semi-circle is unique to Wigner matrices but the GOE Wigner surmise is seen in all matrices with orthogonal group symmetry, so Haar-distributed eigenvectors independent of the eigenvalues are seen only in invariant ensembles (not even in non-Gaussian Wigner matrices) but certain properties of Haar matrices are universal across a similarly wide class of random matrices. Once again, the discussion of these deep universality results will be given in Section <ref>, but we shall set the scene by first describing the delocalisation property of Haar-distributed eigenvectors. Let U be an N× N Haar-distributed orthogonal matrix and let u⃗_1, …, u⃗_N be its rows. Recall from the discussion above wherein the Haar distribution was introduced the following construction: Let g⃗_1, …, g⃗_N be i.i.d. vectors from 𝒩(0, I_N); let v⃗_1, …, v⃗_N be the results of a Gram-Schmidt algorithm; then, in distribution, {u⃗_i}_i=1^N = {v⃗_i/v⃗_i_2}_i=1^N. Fix some r < N and introduce the event B_N(υ) {| N^-1⟨g⃗_i, g⃗_j⟩ - δ_ij| ≤ N^-υ,     1≤ i, j ≤ r}. Then it is an exercise in Gaussian calculations and asymptotics, as given in <cit.>, to conclude that under the i.i.d Gaussian law of the (g⃗_j)_j=1^N the complementary event has low probability for large N: ℙ(B_N(υ)^c) =𝒪( C(υ) e^-α N^1-2υ), where α, C(υ) > 0 and we take 0<υ < 1/2 to make this statement meaningful. What's more, one can directly obtain that, given B_N, g⃗_i_2^2 = N^1 - υ for any υ > 0. 
So, restricting to only a fixed subset of the eigenvectors as N→∞, the simple i.i.d. Gaussian vectors g⃗_i from which they are constructed are, with high probability, close to being orthogonal even before applying the Gram-Schmidt algorithm, and they all have the same L_2 norm to leading order in N. This line of reasoning leads to the fact that, with high probability, ||N^-1/2g⃗_j - u⃗_j|| ≤ N^-υ/2, so, indeed, in the above precise probabilistic sense, any fixed subset of Haar-distributed eigenvectors is extremely close to a corresponding set of i.i.d. standard Gaussian vectors, re-scaled by N^-1/2. § KAC-RICE FORMULAE The majority of chapters <ref> and <ref> is concerned with computing the expected complexity of certain loss surfaces in the limit as the number of parameters N→∞. Let us recall a basic definition of complexity as introduced above. Let ℳ be a compact, oriented, N-dimensional C^1 manifold with a C^1 Riemannian metric g. Let ψ:ℳ→ℝ be a random field on ℳ. For an open set A⊂ℝ, let C(A) ≡|{x∈ℳ | ∇ψ(x) = 0,  ψ(x)∈ A}|. C(A) simply counts the number of stationary points of ψ at which the value of ψ lies in the set A. Note that the condition of a compact manifold ℳ is important here; without other constraints (for which see e.g. <cit.>) there is no guarantee of a finite value for C(A) given a non-compact ℳ. Computing anything about C(A) appears extremely challenging, but one can make some informal progress rather directly with an integral expression C(A) = ∫_∇ψ(ℳ) du⃗ δ(u⃗) 1{ψ(∇ψ^-1(u⃗))∈ A} which one can write down simply from the sampling property of the δ-function. Then the composition property of the δ-function gives C(A) = ∫_ℳ dx⃗  |det∇^2ψ(x⃗)|δ(∇ψ(x⃗)) 1{ψ(x⃗)∈ A}. From this simple argument, we see that the Hessian of ψ, and in particular the absolute value of its determinant, will be central to the calculation of C(A). Recall that ψ is a random field, so its Hessian ∇^2ψ is a random matrix of size N× N, so one can see already that the complexity of random functions is connected with random matrix theory. What these simple arguments lack is any reference to the probability density of ψ. Since ψ is random, so also is C(A), so we must be more precise about what `calculating C(A)' means. One could attempt to compute the entire density of C(A), but this is clearly the most difficult objective. Let us restrict our consideration to simple statistics of C(A) and, in particular, its expected value. Proceeding informally, we have 𝔼C(A) = 𝔼{∫_ℳdx⃗  |det∇^2ψ(x⃗)|δ(∇ψ(x⃗)) 1{ψ(x⃗)∈ A}| ∇ψ(x⃗) = 0 } p_x⃗(0) where p_x⃗ is the density of ∇ψ at the point x⃗∈ℳ. Within the conditional expectation, the delta function can be dropped, giving simply 𝔼C(A) = 𝔼[∫_ℳdx⃗  |det∇^2ψ(x⃗)| 1{ψ(x⃗)∈ A}| ∇ψ(x⃗) = 0 ] p_x⃗(0) and finally swapping the order of integration informally gives 𝔼C(A) = ∫_ℳdx⃗  p_x⃗(0)𝔼[|det∇^2ψ(x⃗)| 1{ψ(x⃗)∈ A }| ∇ψ(x⃗) = 0 ]. We see now that 𝔼C(A) will be tractable if we can compute the joint distribution of ψ, ∇^2ψ conditional on ∇ψ, and subsequently evaluate the random determinant's expected value. The expression (<ref>) is an example of a Kac-Rice formula <cit.>. These kinds of informal arguments have been extensively used in the mathematical physics literature to compute quantities such as expected landscape complexities and cardinalities of other level-sets of random functions <cit.>. These arguments can be made fully rigorous and cast in a more general setting as is shown in the important book by Adler and Taylor <cit.>.
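Before stating the rigorous result, it is perhaps reassuring to check the Kac-Rice heuristic numerically in its simplest one-dimensional incarnation: counting the zeros of a stationary Gaussian random function, for which the expected number of zeros per unit length is (1/π)√(-r''(0)/r(0)), with r the covariance function. The following numpy sketch (a toy illustration only, with arbitrary truncation K and grid sizes, and not a calculation used elsewhere in this thesis) does this for a random trigonometric polynomial on the circle:

import numpy as np

rng = np.random.default_rng(0)
K, n_trials = 5, 2000
t = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)
k = np.arange(1, K + 1)

counts = []
for _ in range(n_trials):
    a, b = rng.standard_normal(K), rng.standard_normal(K)
    f = np.cos(np.outer(t, k)) @ a + np.sin(np.outer(t, k)) @ b
    counts.append(np.sum(f * np.roll(f, -1) < 0))   # zeros counted as sign changes around the circle

# Kac-Rice / Rice prediction: 2*pi * (1/pi) * sqrt(-r''(0) / r(0)), with r(tau) = sum_k cos(k*tau)
expected = 2 * np.sqrt(np.sum(k**2) / K)
print(np.mean(counts), expected)   # both approximately 6.63 for K = 5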
We repeat here the foundational Kac-Rice result from that work which is central to our complexity calculations in the coming chapters. Let ℳ be a compact , oriented, N-dimensional C^1 manifold with a C^1 Riemannian metric g. Let ϕ:ℳ→ℝ^N and ψ:ℳ→ℝ^K be random fields on ℳ. For an open set A⊂ℝ^K for which ∂ A has dimension K-1 and a point u⃗∈ℝ^N let N_u⃗≡|{x∈ℳ | ϕ(x) = u⃗,  ψ(x)∈ A}|. Assume that the following conditions are satisfied for some orthonormal frame field E: * All components of ϕ, ∇_E ϕ, and ψ are a.s. continuous and have finite variances (over ℳ). * For all x∈ℳ, the marginal densities p_x of ϕ(x) (implicitly assumed to exist) are continuous at u⃗. * The conditional densities p_x(·|∇_Eϕ(x),ψ(x)) of ϕ(x) given ψ(x) and ∇_Eϕ(x) (implicitly assumed to exist) are bounded above and continuous at u⃗, uniformly in ℳ. * The conditional densities p_x (·|ϕ(x) = z⃗) of (∇_E_jϕ^i (x)) given are continuous in a neighbourhood of 0 for z⃗ in a neighbourhood of u⃗ uniformly in ℳ. * The conditional densities p_x (·|ϕ (x) = z⃗) are continuous for z⃗ in a neighbourhood of u⃗ uniformly in ℳ. * The following moment condition holds sup_x∈ℳmax_1≤ i,j≤ N{|∇_E_jϕ^i(x)|^N}< ∞ * The moduli of continuity with respect to the (canonical) metric induced by g of each component of ψ, each component of ϕ and each ∇_E_jϕ^i all satisfy, for any ϵ > 0 ℙ( ω(η) >ϵ) = o(η^N),   as η↓ 0 where the modulus of continuity of a real-valued function G on a metric space (T, τ) is defined as (c.f. <cit.> around (1.3.6)) ω(η) sup_s,t : τ(s,t)≤η|G(s) - G(t)| Then N_u⃗ = ∫_ℳ{|∇_Eϕ(x)|{ψ(x)∈ A} |  ϕ(x) = u⃗}p_x(u⃗) Vol_g(x) where p_x is the density of ϕ and Vol_g is the volume element induced by g on ℳ. Note the greater generality of this theorem compared to the heuristic derivation above. The required result for complexity can be obtained as a special case by taking ϕ = ∇ψ and u⃗ = 0. § SUPERMATHEMATICS Grassmann variables are entirely algebraic objects defined by an anti-commutation rule. Let {χ_i}_i be a set of Grassmann variables, then by definition χ_iχ_j = - χ_jχ_i,   ∀ i,j. The complex conjugates χ_i^* are separate objects, with the complex conjugation unary operator ^* defined so that (χ_i^*)^* = -χ_i^*, and Hermitian conjugation is then defined as usual by χ^† = (χ^T)^*. The set of variables {χ_i, χ_i^*}_i=1^N generate a graded algebra over ℂ. Mixed vectors of commuting and anti-commuting variables are called supervectors, and they belong to a vector space called superspace. The integration symbol ∫ dχ_idχ^* is defined as a formal algebraic linear operator by the properties ∫ dχ_i = 0,      ∫ dχ_i  χ_j = δ_ij, and these are called Berezin integrals. Functions of the the Grassmann variables are defined by their formal power series, e.g. e^χ_i = 1 + χ_i + 1/2χ_i^2 + … = 1 + χ_i where the termination of the series follows from χ_i^2 = 0   ∀ i, which is an immediate consequence of (<ref>). From this it is apparent that (<ref>), along with (<ref>), is sufficient to define Berezin integration over arbitrary functions of arbitrary combinations of Grassmann variables. Finally we establish our notation for supersymmetric (or graded) traces of supermatrices. We will encounter supermatrices of the form M = ([ A B; C D ]) where A, D are square block matrices of commuting variables and B, C are rectangular block matrices of Grassmann variables. In this case, the graded trace is given by M = A - D and such matrices are referred to as (B+F)× (B + F), where A is shape B× B and D is shape F× F. 
We refer the reader to <cit.> for a full introduction to supersymmetric variables and methods. Grassmann variables play an important role in quantum field theory and related fields <cit.>, being the algebraic representation of fermions, with bosons being represented by commuting variables. As such, even in applications unrelated to quantum physics, the particle nomenclature may be used; for example the diagonal blocks of the matrix M above may be referred to as bosonic blocks and the off-diagonals referred to as fermionic blocks. There are important connections between random matrix theory and quantum field theory in which the role of supersymmetry in both is made quite plain <cit.>, but for the purposes of this thesis, Grassmann variables and supersymmetric methods are simply mathematical tools that we use to compute certain random matrix quantities. Supersymmetric methods provide a powerful way of computing random matrix determinants, which in turn can have many applications to compute various quantities of interest <cit.>. We will focus on two such applications that are used in chapters <ref>, <ref> and <ref>. Consider a random N× N matrix X and suppose it has a limiting spectral measure μ with density ρ and Stieltjes transform g. Given a density on X, an important question is to determine the spectral density ρ; by the Stieltjes inversion formula, it is sufficient to compute g: ρ(x) = -1/πlim_ϵ→ 0^+ Im g(x + iϵ). Let G(z) be the Stieltjes transform of the empirical spectral measure of X, i.e. G(z) = 1/N∑_i=1^N 1/(z - λ_i) where λ_i are the eigenvalues of X. G is a random function and for many matrix ensembles will have the convergence property G→ g weakly almost surely as N→∞. Similarly, G will typically have the self-averaging property 𝔼G → g in the sense of deterministic functions. It follows that computing ρ can be achieved by computing the leading order term in an asymptotic expansion for 𝔼G in the limit N→∞. The key to this approach is that, if the average 𝔼G can be computed over the matrix ensemble X, then the asymptotic analysis for large N can be performed on deterministic objects to obtain ρ, rather than having to deal with asymptotics of random functions. To see the connection with random determinants and thence supersymmetry, we can rewrite G(z) = 1/N 1/det(zI - X)∑_i=1^N∏_k≠ i^N (z - λ_k)=1/N∂/∂𝔧|_𝔧 =0det(zI - X + 𝔧I)/det(zI - X) where the first equality is a simple algebraic identity and the second follows from the product rule of differentiation. It follows that computing 𝔼G is equivalent to computing the random matrix average 𝔼[det(zI - X + 𝔧I)/det(zI - X)] followed by some differentiation. Note that this `trick' involving the introduction of the dummy variable 𝔧 is widely used as well in the perturbation theory approach to quantum field theory <cit.>. We have seen in the previous section that random matrix determinants are to be expected for example in complexity calculations, i.e. one needs to compute 𝔼|det X| for a random N× N matrix X, and doing so in the large N limit may be sufficient. The presence of the absolute value here is a particular nuisance, but just like the Stieltjes transform above, ratios of determinants can be used to provide an alternate formulation: |det X| = (det X)^2/(√(det X))^2 = det X det X/(√(det X)√(det X)) where the principal branch of the square root is taken. The general challenge here is to compute expectations of ratios of integer and half integer powers of random matrix determinants. This topic has been much explored in the literature, see e.g. <cit.>.
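The Stieltjes inversion step is also easy to see numerically. The following minimal numpy sketch (matrix size, grid and the smoothing parameter ϵ are arbitrary choices) computes the empirical Stieltjes transform of a single sampled GOE matrix just off the real axis and recovers the semi-circle density, relying on the self-averaging property discussed above:

import numpy as np

rng = np.random.default_rng(0)
N, eps = 2000, 1e-2

Y = rng.standard_normal((N, N))
X = (Y + Y.T) / (2 * np.sqrt(N))       # a single GOE sample; self-averaging makes one large matrix enough
lam = np.linalg.eigvalsh(X)

x = np.linspace(-1.5, 1.5, 301)
G = np.mean(1.0 / (x[:, None] + 1j * eps - lam[None, :]), axis=1)   # G(x + i*eps)
rho_est = -G.imag / np.pi                                           # inversion formula, smoothed at scale eps

mid = len(x) // 2
print(rho_est[mid], np.sqrt(2) / np.pi)   # density at x = 0: both approximately 0.45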
The role of supersymmetric methods in this approach stems from a familiar change of variables result. For a non-singular Hermitian N× N matrix X 1/(-i)^N π^N∫_^N dz⃗ e^-iz⃗^T X z⃗ = 1/√( X) where the determinant is the simply the Jacobian of the transformation from variables z⃗ to X^1/2z⃗. Similarly, using complex integration variables one can obtain 1/(2π)^N∫_^N dz⃗dz⃗^* e^-iz⃗^† X z⃗ = 1/ X. The final ingredient is to use Grassmann integration variables to obtain an analogous expression for X, as opposed to powers of its reciprocal. Indeed, by introducing Grassmann variables χ_i, χ_i* and a Berezin integral, we obtain 1/(-i)^N∫∏_i=1^N dχ_i dχ_i^* e^-iχ^†Xχ = X. Rather than a change of variable result from multivariate calculus, this result is proved by expanding the exponential. Recall that ∫ dχ_i   1 = 0, so the only therm in the exponential expansion that can be non-zero after Berezin integration are those that contain each χ_i and χ_i^* at least once. But also, since χ_i^2 = 0 = (χ_i^*)^2, the only non-zero terms are those that contain each χ_i, χ_i^* exactly once, hence ∫∏_i=1^N dχ_i dχ_i^* e^-iχ^†Xχ =1/N!∫∏_i=1^N dχ_i dχ_i^* (-iχ^†Xχ)^N = (-i)^N1/N!∫∏_i=1^N dχ_i dχ_i^* ∑_j_1, k_1,…, j_N, k_N=1^N χ^*_j_1X_j_1k_1χ_k_1…χ^*_j_NX_j_Nk_Nχ_k_N The only non-zero terms from the sum must have j_1, …, j_N equal to a permutation of 1, …, N and similarly k_1, …, k_N, so we can write ∫∏_i=1^N dχ_i dχ_i^* e^-iχ^†Xχ =(-i)^N1/N!∫∏_i=1^N dχ_i dχ_i^* ∑_σ, τ∈ S_Nχ^*_σ(1)X_σ(1)τ(1)χ_τ(1)…χ^*_σ(N)X_σ(N)τ(N)χ_τ(N). Re-indexing the sum over the symmetric group by defining σ = σ' ∘τ, we see that the sum over τ can be rendered trivial, giving just a constant factor of N!, so ∫∏_i=1^N dχ_i dχ_i^* e^-iχ^†Xχ =(-i)^N ∫∏_i=1^N dχ_i dχ_i^* ∑_σ'∈ S_Nχ^*_σ'(1)X_σ'(1)1χ_1…χ^*_σ'(N)X_σ'(N)Nχ_N. Finally, the Grassmann terms must be commuted to render them in the correct order to agree with the differentials, i.e. ∫∏_i=1^N dχ_i dχ_i^* e^-iχ^†Xχ =(-i)^N ∫∏_i=1^N dχ_i dχ_i^* ∏_j=N^1 χ_j^* χ_j ∑_σ∈ S_Nsgn(σ) ∏_k=1^N X_σ(k)k =(-i)^N X. To conclude, ratios of certain powers of random matrix determinants can be written as Gaussian integrals over supersymmetric (i.e. mixed commuting and Grassmann) vectors. While this may seem at first like an increase in complexity, the supersymmetric representations have certain advantages, such as linearity since e^-iϕ^† (X + Y) ϕ=e^-iϕ^† X ϕ e^-iϕ^† Y ϕ. For example, if X and Y are independent, then a supersymmetric representation allows |(X+Y)| to be computed as two separate and independent expectations of X and Y. It is this linearisation effect of supersymmetric representations that is at the heart of its application to many calculations, including those in chapters <ref> and <ref>. In all applications of the supersymmetric method in random matrix theory, the random matrix calculation is reduced to `simply' e^-i XK where the matrix K = ϕϕ^† + χχ^† for some commuting vector ϕ and Grassmann χ of dimension N. The distribution of X is then encoded in this Fourier transform like object, and the remainder of the calculation is then an exercise in evaluating supersymmetric integrals. In the case of the GOE, this average is particular easy to compute: e^-i XK = N^N/(2π)^N/2∫ dX exp{-N/2 X^2 - i XK} = N^N/(2π)^N/2∫ dX exp{-N/2(X+ i1/NK)^2 - 1/2N K^2} = = e^-1/2N K^2. The final technique we need to introduce for supersymmetric methods is the Hubbard-Stratonovich transformation. Consider complex commuting integration variables ϕ∈^N and Grassmann variables χ. 
Then (ϕϕ^† + χχ^†)^2 = Q^2 where Q = ([ ϕ^†ϕ ϕ^†χ; χ^†ϕ χ^†χ ]). The Hubbard-Stratonovich transformation introduces a 2× 2 matrix integration variable σ which is of the same supersymmetric 1+1 type as Q then e^-1/2N Q^2 = ∫ dσ  e^-N/2σ^2 - iψ^†σψ where ψ = ϕ⊗([ 1; 0 ]) + χ⊗([ 0; 1 ]). The power of the Hubbard-Stratonovich transformation is that it linearises the dependence of the supersymmetric integrand on the supersymmetric N-dimensional vectors at the cost only of introducing an integral over an extra 2× 2 (or in general k× k) supersymetric matrix. In many calculations, this transformation makes the N-dimensional supersymmetric integral easy to compute, leaving only an integration of a fixed number of supersymmetric variables, which is precisely the conditions required for applying standard techniques from asymptotic analysis for N→∞. § LARGE DEVIATIONS PRINCIPLES Consider as an example a N× N GOE matrix X normalised so that the semi-circular radius is 2. By the very existence of such a compact limiting spectral density, eigenvalues greater than 2 or less than -2 are in some sense unlikely for large N. Large deviations principles (LDPs) answer the question of precisely how unlikely these eigenvalues are. Let λ_1 ≤…≤λ_N be the eigenvalues of X. The large deviation event for λ_1 is {λ_1 < x} for x < -2, and similarly for λ_N is {λ_N > x} for x > 2. Fixing an integer k≥ 1 as N→∞ there are also the large deviations events {λ_k < x} for x<-2 (and similarly for λ_N-k). Formally, a large deviations principle for λ_k with speed α(N) and rate function I_k(x) requires lim sup_N→∞1/α(N)log(λ_k < x) = -I_k(x)    for x ≤ -2, lim sup_N→∞1/α(N)log(λ_k ≥ x) = -∞   for x ∈ (-2, 2). For x∈ (-2, 2), if λ_k≥ x, then there is an non-empty interval (-2, x) in which there are at most k eigenvalues, so this represents a configuration of eigenvalues for which the difference between μ̂_N and μ is not negligible, which must be extremely unlikely since μ̂_N converges to μ. The large deviations principle encodes this as the infinity in (<ref>), which says that {λ_k ≥ x} is very much more unlikely than even {λ_k < -2}. In the case of the GOE <cit.>, α(N) = N and I_k(x) = kI_1(x) = k/2∫_x^-2 dz√(z^2 - 4),    for x ≤ -2, ∞ otherwise. Note that the infinity in (<ref>) should be expected from the expression (<ref>) of the eigenvalue j.p.d.f. as e^-N^2 ℐ[μ̂_N], since λ_k ≥ x for x∈(-2, 2) implies ℐ[μ̂_N] > 0; figure <ref> shows this argument pictorially. Indeed, this intuition is a good representation of the full rigorous argument to prove this LDP. Note that that μ̂_N is a random probability measure, so it appears as though μ̂_N obeys a LDP with speed N^2 and rate function something like ℐ, where recall ℐ[μ] = ∫ dμ(λ) ℰ(λ; μ) = ∫ dμ(λ) λ^2 - ∫ dμ(λ)dμ(λ') log|λ - λ'|. This is in fact the case and was established in <cit.>, where the rate function was found for for β=1,2,4 to be J_β[μ]=1/2( ∫ dμ(λ) λ^2 -β∫ dμ(λ)dμ(λ') log|λ - λ'| + β/2logβ/2 - 3/4β), which has a unique minimiser, with with value 0, among all probability measures on at the semi-circle measure with radius √(2β). This fact alone is sufficient to establish (<ref>) for the GOE. Indeed, consider the bounded Lipschitz distance on probability measure on d_Lip(μ, ν) = sup_f_Lip≤ 1| ∫ f(x) d(μ - ν)(x)| where the supremum is taken over all Lipschitz function with Lipschitz constant at most 1. Using this metric, one can define a ball B_ϵ(μ_SC) of radius ϵ centred on the minimiser μ_SC of J_1. 
For x∈ (-2, 2), if λ_k ≥ x then d_Lip(μ_SC, μ̂_N) ≥| ∫_-2^x dμ_SC(λ) - k/N| > ϵ for all large enough N and for some fixed ϵ>0 (independent of N). So if λ_k ≥ x, then μ̂_N lies outside the ball of radius ϵ centred on μ_SC, but since J_1 has a unique minimiser, it follows that J_1[μ̂_N] > δ for all large enough N and for some δ >0 (independent of N), so the LDP on μ̂_N yields the infinite limit (<ref>). The proof to establish the complementary limit (<ref>) also makes use of the LDP on μ̂_N. The joint density p(λ_1, …, λ_N) can be split as p(λ_1, …, λ_N) = p(λ_k+1, …, λ_N) f(λ_1, …, λ_k; λ_k+1, …, λ_N) ∝Δ({λ_i}_i=1^k) p(λ_k+1, …, λ_N) exp(-N∑_j=1^k{λ_j^2/2 - ∫ dμ̂_N-k(λ) log |λ - λ_j|}). By placing all eigenvalues inside a large ball of radius M, the left-over Vandermonde term Δ({λ_i}_i=1^k) can be bounded by (2M)^k^2, say. Since N is very large and k fixed, the LDP for the empirical spectral density μ̂_N-k of λ_k+1,…, λ_N applies and μ̂_N-k can be effectively replaced by μ_SC, incurring only an error term suppressed by an LDP bound of size e^-cN^2 for some constant c>0. It then remains to bound the contribution from eigenvalues outside the ball of radius M and to evaluate the supremum over λ < x of ∫ dμ_SC(λ') log |λ' - λ| - 1/2λ^2, which gives precisely the result I_k(x) stated above. § RANDOM DETERMINANTS We have discussed how the supersymmetric method can be used in random determinant calculations such as 𝔼_Xlog |det X| and this in fact provides the basis for much of our work with random determinants arising in Kac-Rice formulae in chapters <ref> and <ref>. In this section, we provide some broader background on random determinant calculations using different techniques and for other statistics. A foundational work in random determinant calculations is <cit.>. The focus of that work is the calculation of the complexity of the basic spherical p-spin glass model. Let f: S^N →ℝ be the spin glass and consider the following sets of points on the N-sphere: {x⃗∈ S^N  | ∇ f(x⃗) = 0,   f(x⃗)< √(N)u}, {x⃗∈ S^N  | ∇ f(x⃗) = 0,   f(x⃗)< √(N)u,   i(∇^2 f(x⃗)) = k}, where i(·) is the index, which simply counts the number of negative eigenvalues of a real symmetric matrix. For fixed N, note that these sets are almost surely finite and so one can unambiguously define the following notions of complexity simply as the cardinality of these sets: C_N(u) = |{x⃗∈ S^N  | ∇ f(x⃗) = 0,   f(x⃗)< √(N)u}|, C_N,k(u) = |{x⃗∈ S^N  | ∇ f(x⃗) = 0,   f(x⃗)< √(N)u,   i(∇^2 f(x⃗)) = k}|. The appropriateness of the scale √(N) of the upper bound on f will become apparent below. The argument proceeds in the following steps: * Apply a Kac-Rice formula to express the expected complexity as an integral involving the absolute value of the determinant of a random Hessian: 𝔼C_N,k(u) = ∫_S^N dx⃗ 𝔼[|det∇^2f| 1{f(x⃗) ≤√(N)u}1{i(∇^2 f(x⃗)) = k} | ∇ f(x⃗) = 0] p_x⃗(0) where p_x⃗ is the density of ∇ f at x⃗. * Exploit spherical symmetry of the integrand to dispense with the integral over the N-sphere. * Use the covariance function of f to derive the joint distribution of f, its derivatives and its Hessian. The derivatives must be taken parallel to the N-sphere, so the Hessian is an N-1× N-1 matrix. One discovers that ∇ f is independent of f and ∇^2 f, which greatly simplifies the above expectation. Moreover, ∇^2 f just has Gaussian entries and Gaussian conditioning laws can be used to derive the distribution of ∇^2 f  |  f(x⃗) = y. One finds that it is a shifted GOE X - yI, where X is a standard GOE. In addition, ∇ f is an isotropic Gaussian vector with variance p.
* The expected complexity is then given by 𝔼C_N,k∝∫_-∞^u dy   e^-Ny^2/2𝔼[ |det(X - yI)|  |  i(X-yI) = k]. * The determinant simplifies greatly and can be written as a product over eigenvalues. Then the expectation can be rewritten as 𝔼[ |det(X - yI)|  |  i(X-yI) = k] ∝∫ dλ_1… dλ_N-1 e^-Ny^2/2∏_j=1^N-1 e^-(N-1)λ_j^2/2Δ({λ_i}_i=1^N-1) ∏_j=1^N-1 |λ_j - y| 1{λ_1 ≤…≤λ_k ≤ y ≤λ_k+1≤…≤λ_N-1}. Note that from the above expression it is clear that √(N) is the correct scaling to make the density of f agree with that of the eigenvalues of ∇^2 f. With some re-scaling of variables, the determinant and the Vandermonde terms combine to give an N× N Vandermonde, so overall 𝔼C_N, k(u) ∝ℙ(λ_k ≤ A_Nu) with the probability taken over an N× N GOE and for some constant A_N. 𝔼C_N,k can then be computed using a large deviations principle for the k-th eigenvalue of an N× N GOE. * 𝔼C_N can be derived from 𝔼C_N,k by summing over all k. In reality, the main results of <cit.> and related work (such as our own) focus on computing the leading order term in a large N asymptotic expansion of log𝔼 |det(X - yI)|, though in some cases it is possible to compute the sharp leading order term in 𝔼|det(X - yI)|, as done in <cit.> and also in chapter <ref>. To state the precise results from <cit.>, we require the following definitions: Θ_p(u) = 1/2log(p-1) - (p-2)/(4(p-1))u^2 - I_1(u; E_∞)    if u≤ -E_∞, 1/2log(p-1) - (p-2)/(4(p-1))u^2 if -E_∞≤ u ≤ 0, 1/2log(p-1) if u≥ 0, where E_∞ = 2√((p-1)/p), and I_1(·; E) is defined on (-∞, -E] by I_1(u; E) = 2/E^2∫_u^-E (z^2 - E^2)^1/2 dz = -u/E^2√(u^2 - E^2) - log(-u + √(u^2 - E^2)) + log E, and Θ_p,k(u) = 1/2log(p-1) - (p-2)/(4(p-1))u^2 - (k+1)I_1(u; E_∞)    if u≤ -E_∞, 1/2log(p-1) - (p-2)/p if u > -E_∞. Then we have the following limit results lim_N→∞1/Nlog𝔼 C_N(u) = Θ_p(u),  lim_N→∞1/Nlog𝔼 C_N,k(u) = Θ_p,k(u). There are some important features to highlight about these results. Note that -E_∞ plays the role of the left edge of the support of a semi-circle density which, of course, has its origin in the GOE distribution of f's Hessian. In particular, note that Θ_p, k includes large deviations terms for u below -E_∞, the effective left edge of a semi-circle, but not above it. We also note the structure of stationary points of f that is encoded in Θ_p and Θ_p,k for which we show plots in Figure <ref>. Negative values of Θ_p,k(u) correspond to upper bounds on f below which it has `exponentially few' stationary points of index k i.e. effectively none. Positive values, by contrast, correspond to exponentially many stationary points of index k. This therefore is the mathematical description of the `layered structure' of spin glass stationary points on which <cit.> and our results in chapters <ref> and <ref> depend. There exist critical values {E_i}_i=1^∞ such that Θ_p, i(-E_i) = 0. For f below the critical value -E_0, there are effectively no stationary points of f. Between -E_0 and -E_1, there are exponentially many local minima, but effectively no stationary points of any other index. Between -E_1 and -E_2 there are exponentially many local minima and stationary points of index 1, but effectively none of any higher indices. The final critical value is -E_∞, above which stationary points of all indices are found. The quantity log𝔼 C_N, where the logarithm is taken after the expectation, is known as the annealed average, and so the corresponding complexity is known as the annealed complexity. The alternative, 𝔼log C_N, is known as the quenched complexity, in which the expectation is taken after the logarithm. We shall discuss the differences between the two below.
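The gap between the annealed and quenched averages is easy to observe numerically for exactly the kind of random determinants appearing above. The following numpy sketch (matrix size, sample count and the shift y are arbitrary illustrative choices) compares log 𝔼|det(X - yI)| with 𝔼log|det(X - yI)| over GOE samples; by Jensen's inequality the former is always the larger:

import numpy as np

rng = np.random.default_rng(0)
N, n_samples, y = 50, 5000, -1.0    # y is an arbitrary illustrative shift

log_abs_dets = np.empty(n_samples)
for i in range(n_samples):
    Y = rng.standard_normal((N, N))
    X = (Y + Y.T) / (2 * np.sqrt(N))
    log_abs_dets[i] = np.linalg.slogdet(X - y * np.eye(N))[1]   # log|det(X - yI)|

quenched = log_abs_dets.mean()                                        # E log|det|
annealed = np.logaddexp.reduce(log_abs_dets) - np.log(n_samples)      # log E|det|, computed stably
print(quenched / N, annealed / N)   # annealed >= quenched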
The first few steps outlined above are quite general and we shall see them repeated, mutatis mutandis, in chapters <ref> and <ref>. The later steps, however, are clearly highly specific to the precise conditional Hessian distribution of the spin-glass. In particular, if the Hessian were a GOE shifted by some matrix other than a multiple of the identity, then one would be unable to so easily dispense with the eigenvector component of the matrix expectation. Further, step 5 is a miraculous simplification wherein the conditional value of f is effectively inserted as an extra eigenvalue of the GOE, so reducing the whole calculation to a tail probability of the k-th eigenvalue of a GOE. We shall see in chapters <ref>, <ref> and <ref> how supersymmetric techniques, among others, can be employed to generalise these steps in more complicated settings. In a sequence of recent works <cit.> the question of random determinants was considered for very general random matrices. Indeed, there is every reason to believe that the general framework developed particularly in <cit.> provides close to optimal conditions under which the annealed average over absolute values of random matrix determinants can be computed. The method developed in that work is, in essence, a rigorous, mathematically justified version of a general mathematical physics approach known as the Coulomb gas method <cit.>. Consider a random N× N matrix X with (random) eigenvalues λ_1,…, λ_N, empirical spectral density μ̂_N and assume a limiting spectral density μ. Let us consider real symmetric X, but of course what we describe can be equally well presented for Hermitian X. One can simply express the determinant of X in terms of its eigenvalues alone and then use the definition of μ̂_N to write |det X| = ∏_j=1^N |λ_j| = exp{N∫ dμ̂_N(λ) log|λ|}. Recall from (<ref>) that the eigenvalue density can be written in the form p(λ_1, …, λ_N) = 1/Z_Nexp(-N^2/2ℐ[μ̂_N]), so that 𝔼|det X| = 1/Z_N∫ dλ_1… dλ_N exp{N∫ dμ̂_N(λ) log|λ|}exp(-N^2/2ℐ[μ̂_N]). Heuristically, the Laplace method can be applied to conclude that the dominant leading order contribution to this integral as N→∞ comes from μ̂_N in a small ball around μ, so 𝔼|det X| ∼exp{N∫ dμ(λ) log|λ|} and then 1/Nlog𝔼 |det X| ∼∫ dμ(λ) log|λ|. This approach gives solid intuition for the asymptotic behaviour of 𝔼|det X| in general, but is of course only heuristic. In chapter <ref>, the Coulomb gas method plays an important part in the Kac-Rice calculation of complexity, however we have to expend some effort to provide the rigorous justification for its use in that particular case and these arguments are quite specific to the matrix ensemble in question. The main theorems of <cit.> provide a general justification for the Coulomb gas method, or really the result above that can be derived using it. The theorems are quite general but rely on a number of technical conditions on the matrix ensemble and much of the effort in that paper and its companions <cit.> is devoted to proving satisfaction of these conditions for some particular matrix ensembles of interest. Interestingly, parts of the argument in <cit.> are not dissimilar to the Laplace method heuristic above, as one of the key ingredients is a condition on X giving good enough bounds on the convergence rate of μ̂_N to 𝔼μ̂_N and of 𝔼μ̂_N to μ.
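The Coulomb gas prediction 1/N log𝔼|det X| ∼ ∫ dμ(λ) log|λ| can be checked directly for the GOE, where μ is the semi-circle measure. The following numpy sketch (grid and sample sizes are arbitrary; to leading order the annealed and quenched versions agree for the GOE, so we simply average log|det X| over a few samples) compares the two sides for growing N:

import numpy as np

rng = np.random.default_rng(0)

# right-hand side: integral of log|lambda| against the semi-circle density on [-sqrt(2), sqrt(2)]
x = np.linspace(-np.sqrt(2), np.sqrt(2), 200000)            # even number of points, so x = 0 is avoided
rho_sc = np.sqrt(np.clip(2 - x**2, 0, None)) / np.pi
target = np.trapz(rho_sc * np.log(np.abs(x)), x)

# left-hand side: (1/N) log|det X| averaged over a few GOE samples, for growing N
for N in (100, 400, 1600):
    vals = []
    for _ in range(20):
        Y = rng.standard_normal((N, N))
        X = (Y + Y.T) / (2 * np.sqrt(N))
        vals.append(np.linalg.slogdet(X)[1] / N)
    print(N, np.mean(vals), target)   # the sample averages approach the integral as N grows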
At the time of writing, these results are the most general and powerful tools for calculating 𝔼|det X|, however establishing satisfaction of their conditions is by no means straightforward, so for some matrix ensembles less general techniques may be easier to apply. We close this section by mentioning the differences between the annealed averages that we have discussed in some detail and the alternative quenched averages. Jensen's inequality gives 𝔼 log |det X| ≤ log 𝔼|det X|, so the annealed average is an upper bound for the quenched average and likewise the annealed complexity of a random function is an upper bound for the quenched complexity. The annealed complexity has received much more attention in the literature, in part because it is more analytically tractable. At least heuristically, one can see why this should be by just trying to repeat the simple Coulomb gas argument above. Recall that the key to the argument's success (and, in some real sense, the success of <cit.>) is expressing |det X| as exp(N∫ dμ̂_N(λ) log|λ|). This expression, written as a functional of μ̂_N and in the form e^N…, is exactly what is required for Laplace style asymptotic analysis when combined with the eigenvalue density inside the expectation. By contrast, 𝔼 log |det X| = N𝔼∫ dμ̂_N(λ)log|λ|, which cannot be expressed in the above Laplace-amenable form. <cit.> is an important recent work that begins the extension of the Kac-Rice approach to quenched complexity via the non-rigorous replica method. The authors highlight that the quenched and annealed complexities do not in general agree even to leading order and argue that the quenched complexity is, in some sense, the better representation of a surface's complexity. In chapters <ref> and <ref>, in which annealed complexity calculations feature significantly, we use highly simplified statistical physics models of much more complicated objects (deep neural networks), attempting to retain just enough of the original structure to provide some insight while still having an analytically tractable complexity. What's more, the complexity itself, annealed or quenched, is just a static snapshot of the already much simplified loss landscape, whereas real-world neural networks are trained over some complex stochastic trajectory in parameter space. As with any model of a complex system, these complexity calculations can only ever be expected to provide some limited insight into aspects of the underlying system. Given that the models themselves are very simplified and a focus on just their complexity is a considerable simplification of real training dynamics, we argue that the distinction between annealed and quenched complexity in this context, while important, is not the most significant factor affecting ecological validity. Finally, we note that quantities other than the expectation of complexity (equivalently: absolute values of determinants) have been considered. In the context of the spherical p-spin glass considered in <cit.>, the variance of the complexity is obtained in <cit.>, which is necessary to determine whether the expected value is typical. The proofs in this case are much more technical than those for the expectation and extensions to more complicated models such as those considered in Chapters <ref> and <ref> appear out of reach.
§ FREE PROBABILITY
Free probability theory is a rich and deep field describing probability distributions on non-commuting algebras.
The notion of freeness itself provides the generalisation of the concept of independence from standard probability theory to non-commuting algebras. The theory extends beyond the boundaries of random matrix theory to probability distributions on more general algebras <cit.>, but its connection to random matrix theory is immediately clear: random matrices are non-commuting objects endowed with probability distributions. For the purposes of this thesis, we will need only a basic introduction to free probability in the context of random matrices. Consider two N× N real matrices A and B, where A is random and B may be random or deterministic. Suppose that A is rotationally invariant, i.e. its eigenvectors follow Haar measure on the orthogonal group. A is then said to be in general position compared to B, which means roughly that there is entirely no correlation or dependence between their eigenspaces. In this case, A and B can be shown to be freely independent of each other. Suppose that both A and B have limiting spectral measures μ and ν respectively and let C = A + B. Since A and B are freely independent, it is known <cit.> that C has the limiting spectral measure μ⊞ν, which is known as the free additive convolution between the measures μ and ν. To define the free additive convolution, we must introduce some integral transforms. Let g_μ, g_ν be the Stieltjes transforms of μ and ν and let B_μ = g^-1_μ and B_ν = g^-1_ν be their inverses. The R-transforms are then defined as R_μ(z) = B_μ(z) - z^-1 and R_ν(z) = B_ν(z) - z^-1. The R-transforms play the role of Fourier transforms for probability measures, as one has the result R_μ⊞ν = R_μ + R_ν. In fact, one must take care with the definitions of these transforms. The above expressions are just a consequence of their true definitions as formal power series in the complex plane. The Stieltjes transform of a measure is given by the power series g_μ(z) = ∑_n≥ 0 m_n^(μ) z^-(n+1) where m_n^(μ)= ∫ dμ(x)   x^n is the n-th moment of μ (likewise for ν). The R-transform of a measure is defined as a formal power series <cit.> R_μ(z) = ∑_n=0^∞ k_n+1^(μ) z^n where k_n^(μ) is the n-th cumulant of the measure μ. It is known <cit.> that k_n^(μ)=C_n^(μ), where the functional inverse of the Stieltjes transform of the measure is given by the formal power series B_μ(z) = 1/z + ∑_n=1^∞ C_n^(μ) z^n-1. So the key result (<ref>) is really a statement about the cumulants of μ, ν and μ⊞ν, namely k_n^(μ⊞ν) = k_n^(μ) + k_n^(ν). There is a useful relation between cumulants and moments which can be found, for example, in the proof of Lemma 5.3.24 in <cit.>: m_n = ∑_r=1^n  ∑_0≤ i_1,…, i_r≤ n-r i_1+… + i_r = n-r k_r m_i_1… m_i_r. The final concept we need from free probability theory is subordination functions. Given measures μ, ν there exists a subordination function ω: ℂ→ℂ such that g_μ⊞ν(z) = g_ν(ω(z)) <cit.>. Depending on the context, the subordination function formulation relating μ⊞ν to μ and ν can prove more convenient than the formulation via sums of R-transforms, see e.g. <cit.> and chapter <ref> below. We conclude this briefest of introductions to free probability by providing a few concrete results for integral transforms of a specific measure, namely the semi-circle μ_SC with density ρ_SC(x) = π^-1√(2 - x^2). We shall include the calculations as we have been repeatedly frustrated to find them absent from the literature. Henceforth μ = μ_SC and we will drop all μ and SC labels. Stieltjes transform. For odd n, clearly m_n = 0 by the symmetry of the semi-circle measure.
Now consider the even moments: m_2n = π^-1∫ dx   x^2n√(2 - x^2) = 2^1 + nπ^-1∫_-π/2^π/2 dθcos^2θsin^2nθ = 2^1 + nπ^-1∫_-π/2^π/2 dθ (sin^2nθ - sin^2(n+1)θ) The trigonometric integrals are standard exercises in basic calculus[The usual approach is to write sin^2nθ = sin^2n-2θ - cos^2θsin^2n-2θ, apply integration by parts to the second term and then iterate.]: ∫_-π/2^π/2 dθsin^2nθ = π2n - 1/2n2n - 3/2n-2…1/2 so m_2n = 2^1 + n2n - 1/2n2n - 3/2n-2…1/2(1-2n+1/2n+2) = 2^1 + n2n - 1/2n2n - 3/2n-2…1/21/2n + 2. Thus we have the Stieltjes transform g(z) = ∑_n=0^∞ z^-(2n + 1) 2^1+n1/2n + 22n - 1/2n2n - 3/2n-2…1/2 = z ∑_n=0^∞(z^2/2)^-(n+1)1/2n + 22n - 1/2n2n - 3/2n-2…1/2 = z ∑_n=0^∞(z^2/2)^-(n+1)1/(n+1)!(2n - 1)(2n-3)… 1/2^n+1 = z ∑_n=0^∞(-z^2/2)^-(n+1)1/(n+1)!(2n - 1)(2n-3)… 1/2^n+1(-1)^n+1 and we can now identity the Taylor expansion of a familiar function, so g(z) = z (1- √(1 - 2/z^2)) = z-√(z^2 - 2). For a general semi-circle with radius r, we can thence immediately write down its Stieltjes transform 2/r(z-√(z^2 - r^2)) where the pre-factor comes simply from the appropriate normalisation of the density √(r^2 - x^2) relative to √(2 - x^2). Inverting this Siteltjes transform is simple. Let y(z) = g_r^-1(z), then rz = 2y - 2√(y^2 - r^2) 4y^2 -4r^2 = 4y^2 - 4zry + z^2r^2 g_r^-1(z) = y(z) = 1/z + rz/4 from which it follows that R_r(z) = rz/4. § LOCAL LAWS AND UNIVERSALITY Earlier in this chapter, we introduced the Wigner surmise and the rough notion of local universality in random matrices. This section provides further details about universality, with particular emphasis on the rather stunning sequence of papers beginning around <cit.> that are well on the way to answering quite definitively the question of local universality. Broadly speaking, universality refers to the phenomenon that certain properties of special random matrix ensembles (such as the GOE) remain true for more general random matrices that share some key feature with the special ensembles. For example, the Wigner semicircle is the limiting spectral density of the Gaussian Wigner ensembles, i.e. matrices with Gaussian entries, independent up to symmetry (symmetric real matrices, Hermitian complex matrices) <cit.>. The Gaussian case is the simplest to prove, and there are various powerful tools not available in the non-Gaussian case, however the Wigner semicircle has been established as the limiting spectral density for Wigner matrices with quite general distributions on their entries <cit.>. While surprisingly general is some sense, the Wigner semicircle relies on independence (up to symmetry) of matrix entries, a condition which is not typically satisfied in real systems. The limiting form of the spectral density of a random matrix ensemble is a macroscopic property, i.e. the matrix is normalised such that the average distance between adjacent eigenvalues is on the order of 1/√(N), where N is the matrix size. At the opposite end of the scale is the microscopic, where the normalisation is such that eigenvalues are spaced on a scale of order 1; at this scale, random matrices display a remarkable universality. For example, any real symmetric matrix has a set of orthonormal eigenvectors and so the set of all real symmetric matrices is closed under conjugation by orthogonal matrices. Wigner conjectured that certain properties of GOE matrices hold for very general random matrices that share the same (orthogonal) symmetry class, namely symmetric random matrices (the same is true of Hermitian random matrices and the unitary symmetry class). 
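Both the macroscopic universality just described and the explicit semicircle transforms computed in the previous section are easy to confirm numerically. The following Python sketch is our own illustration: it compares the empirical resolvent trace of a GOE matrix with g(z) = z - √(z^2 - 2), checks that a ±1 Wigner matrix with the same normalisation has the same semicircular global spectrum, and verifies the free additive convolution statement that the sum of two freely independent radius-√2 semicircular matrices has a radius-2 semicircular spectrum (the squared radii add). All names and parameter choices below are ours.

import numpy as np

rng = np.random.default_rng(1)
N = 1000

def goe(N):
    # GOE normalised so the limiting spectral density is pi^{-1} sqrt(2 - x^2)
    A = rng.standard_normal((N, N))
    return (A + A.T) / (2 * np.sqrt(N))

def sign_wigner(N):
    # +/-1 symmetric Wigner matrix with matching off-diagonal variance 1/(2N)
    B = np.triu(rng.choice([-1.0, 1.0], size=(N, N)))
    return (B + B.T - np.diag(np.diag(B))) / np.sqrt(2 * N)

A, B = goe(N), sign_wigner(N)

# Stieltjes transform of the radius-sqrt(2) semicircle vs the empirical resolvent trace
z = 0.3 + 0.5j
g_emp = np.trace(np.linalg.inv(z * np.eye(N) - A)) / N
print(g_emp, z - np.sqrt(z**2 - 2))

# Macroscopic universality: the +/-1 Wigner matrix has the same semicircular bulk
eB = np.linalg.eigvalsh(B)
print(eB.min(), eB.max())              # approximately -sqrt(2), sqrt(2)

# Free additive convolution: A and B are asymptotically free, so the spectrum
# of A + B is the semicircle of radius 2 (squared radii add)
eC = np.linalg.eigvalsh(A + B)
print(eC.min(), eC.max())              # approximately -2, 2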
The spacings between adjacent eigenvalues should follow a certain explicit distribution, the Wigner surmise, and the eigenvectors should be delocalised, i.e. the entries should all be of the same order as the matrix size grows. Both of these properties are true for the GOE and can be proved straightforwardly with quite elementary techniques. Indeed, in the case of 2× 2 GOE, it is a standard first exercise in random matrix theory to prove that the eigenvalue spacing distribution is precisely the Wigner surmise (for N× N GOEs it is only a good approximation and improves as N→∞). Microscopic random matrix universality is known to be far more robust than universality on the macroscopic scale. Indeed, such results are well established for invariant ensembles and can be proved using Riemann-Hilbert methods <cit.>. For more general random matrices, microscopic universality has been proved by quite different methods in a series of works over the last decade or so, of which a good review is <cit.>. Crucial in these results is the notion of a local law for random matrices. The technical statements of some local laws are given below, but roughly they assert that the spectrum of a random matrix is, with very high probability, close to the deterministic spectrum defined by its limiting spectral density (e.g. the semicircle law for Wigner matrices). Techniques vary by ensemble, but generally a local law for a random matrix ensemble provides the control required to demonstrate that certain matrix statistics are essentially invariant under the evolution of the Dyson Brownian motion. In the case of real symmetric matrices, the Dyson Brownian motion converges in finite time to the GOE, hence the statistics preserved under the Dyson Brownian motion must match the GOE. The n-point correlation functions of eigenvalues are one such preserved quantity, from which follows, amongst other properties, that the Wigner surmise is a good approximation to the adjacent spacings distribution. The process we have just outlined is known as the `three step strategy', which we now state in its entirety for real Wigner matrices, though the essence of the strategy is much more general. * Establish a local semi-circle law for the general Wigner ensemble X. * Universality for Gaussian divisible ensembles. Consider a random matrix X_t= e^-t/2X + √(1 - e^-t) G, where G is a standard GOE matrix. One must show that X_t has universality for t=N^-τ for any 0< tau < 1. The clearest interpretation of this result is that, as X evolves under a matrix Ornstein-Uhlenbeck process, its local eigenvalue statistics have `relaxed' to those of the GOE after any timescales greater than N^-1. Concretely this process is dX_t = 1/√(N)dB_t - 1/2 X_t dt where B_t is a standard symmetric Brownian motion and the initial data is X_0 = X. The local law on X is a key ingredient in establishing this result. * Approximation by a Gaussian divisible ensemble. This final step, sometimes called the `comparison step', has to show that the local statistics of the matrix X can be well approximated by those of the Gaussian divisible ensemble X_t for short times scale N^-τ where τ < 1. Combining with step 2, one then obtain universality for X. We now make the preceding statements about correlation functions precise, following the treatment in <cit.>. For an N× N matrix X, let p_N^(k) be its k-point correlation function, i.e. 
p_N^(k)(x_1, …, x_k) = ∫ dλ_k+1… dλ_N p_N(x_1, …, x_k, λ_k+1, …, λ_N) where p_N is simply the symmetrised joint probability density of the eigenvalues of X (i.e. the joint density of the unordered eigenvalues). Assume that X has a limiting spectral density ρ with compact support and is normalised so that it the support is [-√(2), √(2)]. Assume also that the symmetry group of X is O(N), i.e. X is real-symmetric. One statement of spectral universality for X is that for any κ>0 and for any E∈ [-√(2) + κ, √(2)-κ] we have lim_N→∞1/ρ(E)^k∫_^k dα⃗ F(α⃗) p_N^(n)( E + α⃗/Nρ(E)) = ∫_^k dα⃗ F(α⃗) q_GOE^(k) (α⃗) for any smooth and compactly supported function F^k →. Here q_GOE^(k) is simply the k-point correlation function for a GOE scaled so that its semi-circular radius is √(2). This is so-called spectral universality in the bulk. From this statement, the local nature of spectral universality is quite plain. One fixes some location inside the bulk of the limiting spectral density of X, referred to as an energy E[The physics terminology is due to the historical origins of spectral universality in the Wigner surmise within the context of random matrix models for quantum mechanical Hamiltonians.], then ones takes an fixed number k of eigenvalues and looks at their marginal joint probability density in a region of the spectrum centred tightly on E. As the matrix size N diverges, so the small region around E shrinks and the joint distribution of the k eigenvalues in the small region converges to simply the joint distribution of k eigenvalues of a standard GOE matrix. Note that the `small region' around the location E in the spectral bulk has a precisely prescribed scaling of 1/N, which is the scaling so that, with overwhelming probability, the number of eigenvalues in the small region is of order 1. Spectral universality as presented above is clearly good deal stronger than the Wigner surmise and is describing at least a similar phenomenon. We can go further however, an consider a different formulation of spectral universality that is a direct generalisation of the Wigner surmise, namely spectral gap universality in the bulk. Of course, we note that all of the above has been stated for real symmetric matrices and the GOE, but could equally well have been stated for Hermitian matrices and the GUE. For an 0<α<1 and any integers r,s∈ [α N, (1-α)N] lim_N→∞| _X F(Nρ(λ_r)(λ_r - λ_r+1), …, Nρ(λ_r)(λ_r- λ_r+k)) - _GOE F(Nρ_SC(λ_s)(λ_s - λ_s+1), …, Nρ_SC(λ_s)(λ_s- λ_s+K)) | = 0 where F is an arbitrary function as before. These two formulations of spectral universality are known to be equivalent <cit.>. To recover the Wigner surmise, take n=1 and then one obtains lim_N→∞|_X F(Nρ(λ_r)(λ_r - λ_r+1)) - _GOE F(Nρ_SC(λ_s)(λ_s - λ_s+1))| = 0. Note that ρ_SC(λ_s)N is precisely the scaling required around λ_s to bring the GOE eigenvalues onto the scale on which the mean spacing is unity, thus for large N _GOE F(Nρ_SC(λ_s)(λ_s - λ_s+1)) = ∫ dr ρ_Wigner(r) F(r) + o(N), and so (<ref>) is indeed the precise statement of the universality of the Wigner surmise for X. There are several forms of local law, but all provide high probability control on the error between the (random) matrix Green's function G(z) = (z - X)^-1 and certain deterministic equivalents. In all cases we use the set S⃗ = {E + iη∈| |E| ≤ω^-1,   N^-1 + ω≤η≤ω^-1} for ω∈(0, 1) and the local law statements holds for all (large) D>0 and (small) ξ > 0 and for all large enough N. 
The averaged local law states: sup_z∈S⃗ℙ(|N^-1 Tr G(z) - g_μ(z)| > N^ξ(1/Nη + √(Im g_μ(z)/Nη))) ≤ N^-D. The isotropic local law states: sup_||u⃗||, ||v⃗|| = 1, z∈S⃗ℙ( |u⃗^TG(z)v⃗ - g_μ(z)| > N^ξ(1/Nη + √(Im g_μ(z)/Nη))) ≤ N^-D. The anisotropic local law states: sup_||u⃗||, ||v⃗|| = 1, z∈S⃗ℙ( |u⃗^TG(z)v⃗ - u⃗^TΠ(z)v⃗| > N^ξ(1/Nη + √(Im g_μ(z)/Nη))) ≤ N^-D where Π(·) is an N× N deterministic matrix function on ℂ. The entrywise local law states: sup_z∈S⃗, 1≤ i,j≤ Nℙ( |G_ij(z) - Π_ij(z)| > N^ξ(1/Nη + √(Im g_μ(z)/Nη))) ≤ N^-D. The anisotropic local law is a stronger version of the entrywise local law. The anisotropic local law is a more general version of the isotropic local law, which can be recovered in the isotropic case by taking Π = g_μ I. The entrywise local law can also be applied in the isotropic case by taking Π = g_μ I. The averaged local law is weaker than all of the other laws. General Wigner matrices are known to obey isotropic local semi-circle laws <cit.>. Anisotropic local laws are known for general deformations of Wigner matrices and general covariance matrices <cit.> as well as quite general classes of correlated random matrices <cit.>. Local universality is not limited to the eigenvalues of random matrices. Recall that the eigenvectors of the canonical Gaussian orthogonal, unitary and symplectic ensembles are distributed with Haar measure on their respective symmetry groups. We have seen the precise and deep sense in which the eigenvalues of very general random matrices are similar to those of the very special canonical Gaussian orthogonal ensemble of the same symmetry class, but what of the eigenvectors? Is there some precise sense in which the eigenvectors of quite general random matrices are similar to Haar-distributed sets of vectors on their corresponding symmetry group? The first steps in this direction can be found in <cit.> where quantum unique ergodicity (QUE) is proved for generalised Wigner matrices. It is well known that the eigenvectors of quite general random matrices display a universal property of delocalisation, namely |u_k|^2 ∼1/N for any component u_k of an eigenvector u⃗. Universal delocalisation was conjectured by Wigner along with the Wigner surmise for adjacent eigenvalue spacing. QUE states that the eigenvectors of a random matrix are approximately Gaussian in the following sense (<cit.> Theorem 1.2): sup_||q⃗|| = 1sup_I⊂ [N], |I| = n| 𝔼P((N|q⃗^Tu⃗_k|^2)_k∈ I) - 𝔼P((|𝒩_j|^2)_j=1^n)| ≤ N^-ϵ, for large enough N, where 𝒩_j are i.i.d. standard normal random variables, (u⃗_k)_k=1^N are the normalised eigenvectors, P is any polynomial in n variables and ϵ > 0. Note that the set I in this statement is a subset of [N]≡{1,2,…, N} of fixed size n; n is not permitted to depend on N. Recall from earlier in this chapter, around (<ref>), that fixed size subsets of Haar distributed eigenvectors of large random matrices can be well approximated by vectors of independent Gaussian entries. Note that the statement of QUE given above is of precisely the same character.
CHAPTER: NEURAL NETWORKS WITH GENERAL ACTIVATION FUNCTIONS
The content of this chapter was published first as a pre-print in April 2020 (<https://arxiv.org/abs/2004.03959>) and later as a journal article: “The loss surfaces of neural networks with general activation functions”. Nicholas P Baskerville, Jonathan P Keating, Francesco Mezzadri and Joseph Najnudel. Journal of Statistical Mechanics: Theory and Experiment, 2021(6):064001, 2021.
NPB suggested general activation functions as a focus, performed all of the calculations and experiments and wrote the paper. The other authors contributed ideas for possible approaches, provided feedback on results throughout and made small revisions to the drafts. Anonymous reviewers spotted some minor errors, advised on changes of presentation and provided useful references.
§ INTRODUCTION
§.§ Multi-layer perceptron neural networks
Let f:ℝ→ℝ be a suitably well-behaved (e.g. differentiable almost everywhere and with bounded gradient) non-linear activation function which is taken to be applied entry-wise to vectors and matrices. We study multi-layer perceptron neural networks of the form y⃗(x⃗) = f(W^(H)f(W^(H-1)f(… f(W^(1)x⃗)…))) where the input data vectors x⃗ lie in ℝ^d and the weight matrices {W^(ℓ)}_ℓ=1^H have any shapes compatible with x⃗∈ℝ^d and y⃗(x⃗)∈ℝ^c. As discussed in Chapter <ref>, the matrices W^(ℓ) are parameters of the neural network y⃗ and in practice they will be randomly initialised with some standard distribution and then “learned” using some gradient descent algorithm on a data set. Their shapes are essentially arbitrary up to compatibility constraints and the choice of hidden layer widths (i.e. the number of rows in each W^(ℓ)) is an engineering decision unique to each concrete application. Note that, as in <cit.>, we do not consider biases in the network.
§.§ Outline of results and methods
Following <cit.>, we view y⃗ as a random function over a high-dimensional weight-space and explore its critical points, i.e. vanishing points of its gradient. The randomness will come from taking the input data to be random. We define the following key quantities[Recall that the index of a critical point is the number of negative eigenvalues of the Hessian at that point.]: C_k,H(u) = expected number of critical points of y⃗ of index k taking values at most u, C_H(u) = expected number of critical points of y⃗ taking values at most u. In Section <ref> we make precise our heuristic definitions in (<ref>)-(<ref>). Following <cit.> we obtain precise expressions for C_k,H and C_H as expectations under the Gaussian Orthogonal Ensemble (GOE) and use them to study the asymptotics in the large-network limit. Our results reveal almost the same `banded structure' of critical points as first found in <cit.>. In particular we establish the existence of the same critical values E_0 > E_1 > … > E_∞ such that, with overwhelming probability, critical points taking (scaled) values in (-E_k, -E_k+1) have index at most k+2, and that there are exponentially many such critical points. We further obtain the exact leading order terms in the expansion of C_H(u), this being the only point at which the generalised form of the activation function f affects the results. In passing, we also show that the network can be generalised to having any number of output neurons without much affecting the calculations of <cit.>, who only consider single-output networks. In Section <ref> we extend the derivation of <cit.> to general activation functions by leveraging piece-wise linear approximations, and we extend to multiple outputs and new loss functions with a simple extension of the corresponding arguments in <cit.>. In Section <ref> we obtain expressions for the complexities C_k,H, C_H using a Kac-Rice formula as in <cit.> but are forced to deal with a perturbed GOE matrix, preventing the replication of the remaining calculations in that work.
Instead, in Section <ref> we use the supersymmetric method following closely the work of <cit.> and thereby reach the asymptotic results of <cit.> by entirely different means. § NEURAL NETWORKS AS RANDOM FUNCTIONS In this section we show that, under certain assumptions, optimising the loss function of a neural network is approximately equivalent to minimising the value of a random function on a high dimensional hypersphere, closely related to the spin glass. Our approach is much the same as <cit.> but is extended to a general class of activation functions and also to networks with multiple output neurons. §.§ Modelling assumptions We make the following assumptions, all of which are required for the specific analytic framework of the results in this chapter and are taken either exactly from, or by close analogy with <cit.>. We defer a discussion of their plausibility and necessity to Section <ref>. * Components of data vectors are i.i.d. standard Gaussians. * The neural network can be well approximated as a much sparser[As in <cit.>, a network with N weights is sparse if it has s unique weight values and s≪ N.] network that achieves very similar accuracy. * The unique weights of the sparse network are approximately uniformly distributed over the graph of weight connections. * The activation function is twice-differentiable almost everywhere in ℝ and can be well approximated as a piece-wise linear function with finitely many linear pieces. * The action of the piece-wise linear approximation to the activation function on the network graph can be modelled as i.i.d. discrete random variables, independent of the data at each node, indicating which linear piece is active. * The unique weights of a the sparse neural network lie on a hyper-sphere of some radius. An alternative to assumption <ref> would be to take the activation function to be random (and so too its piece-wise linear approximation). In this paradigm, we consider the ensuing analysis of this chapter to be a study of the mean properties of the induced ensemble of neural networks. Resorting to studying mean properties of complicated stochastic systems is a standard means of simplifying the analysis. We do not develop this remark further, but claim that the following calculations are not much affected by switching to this interpretation. §.§ Linearising loss functions In <cit.> the authors consider networks with a single output neuron with either L_1 or hinge loss and show that both losses are, in effect, just linear in the network output and with positive coefficient, so that minimising the loss can be replaced with minimising the network output. Our ensuing analysis can just as well be applied to precisely these situations, but here we present arguments to extend the applicability to multiple output neurons for L_1 regression loss and the widely-used cross-entropy loss <cit.> for classification. L_1 loss. The L_1 loss is given by ℒ_L_1(y⃗(X⃗), Y⃗) ∑_i=1^c| y_i(X⃗) - Y_i| where X⃗ is a single random data vector and Y⃗ a single target output. Following <cit.>, we assume that the absolute values in (<ref>) can be modelled by using Bernoulli random variables, M_i say, taking values in {-1, 1}. Precisely, we replace |y_i(X⃗) - Y_i| with M_i(y_i(X⃗) - Y_i), so that the Bernoulli variables M_i model which section of the absolute value function y_i(X⃗) - Y_i lies in. We do not expect X⃗, Y⃗ and the M_i to be independent, however it may be reasonable to assume that X⃗ and the M_i are conditionally independent conditioned on Y⃗. 
We then have 𝔼_M | Y⃗ℒ_L_1(y⃗(X⃗), Y⃗) = 𝔼_M | Y⃗∑_i=1^c M_i (y_i(X⃗) - Y_i) = ∑_i=1^c (2π_i-1) y_i(X⃗) - ∑_i=1^c 𝔼_M | Y⃗ M_iY_i = ∑_i=1^c (2π_i-1) y_i(X⃗) - ∑_i=1^c (2π_i - 1)Y_i where the M_i are Bernoulli random variables with ℙ(M_i = 1) = π_i. Observe that the second term in (<ref>) is independent of the parameters of the network. Hinge loss. The hinge loss <cit.> is given by ℒ_hinge(y⃗(X⃗), Y⃗) = ∑_i=1^c max(0, 1 - Y_i y_i(X⃗)). We again use Bernoulli random variables, M_i' say, to model the max in (<ref>) so that 𝔼_M' | Y⃗ℒ_hinge(y⃗(X⃗), Y⃗) = 𝔼_M' | Y⃗[∑_i=1^c M_i'(1 - Y_iy_i(X⃗))] = ∑_i=1^c π_i' (1-Y_iy_i(X⃗)) where M' is a Bernoulli random variable taking values in {0,1} with ℙ(M'= 1) = π'. In the context of a hinge-loss classifier, the Y_i take values in {-1,1}. We note that in <cit.> the symmetry -X⃗∼X⃗ can at this point be used to remove the Y_i and the negative sign; that route is not available here because of the non-random second term, although it is possible that the heuristic argument towards the end of this section still applies. Cross-entropy loss. The cross-entropy loss is given by ℒ_entr(y⃗(X⃗), Y⃗) = -∑_i=1^c Y_i log(SM[y⃗(X⃗)]_i) where SM is the soft-max function: SM : ℝ^c →ℝ^c, z⃗↦exp(z⃗)/∑_i=1^c exp(z_i) and exp(·) is understood to be applied entry-wise. Note that we are applying the standard procedure of mapping network outputs onto the simplex Δ^c-1 to allow us to calculate a cross-entropy. Restricting to c-class classification problems and using one-hot label vectors <cit.>, we obtain ℒ_entr(y⃗(X⃗), Y⃗) = -∑_i=1^c Y_i{y_i(X⃗) - log(∑_j=1^c exp(y_j(X⃗)))}. We note that classification networks typically produce very `spiked' soft-max outputs <cit.>, therefore we make the approximation ∑_i=1^c exp(y_i(X⃗)) ≈max_i=1,…, c{exp(y_i(X⃗))} and so we obtain from (<ref>) and (<ref>) ℒ_entr(y⃗(X⃗), Y⃗) ≈ -∑_i=1^c{ Y_i y_i(X⃗) - Y_imax_j=1,…,c{y_j(X⃗)}}. We now model the max operation in (<ref>) with a categorical variable, M” say, over the indices i=1,…, c and take expectations (again assuming conditional independence of X⃗ and M”) to obtain 𝔼_M” | Y⃗ℒ_entr(y⃗(X⃗), Y⃗) = -∑_i=1^c Y_i (y_i(X⃗) - ∑_j=1^c π_j” y_j(X⃗)). Now Y⃗ is a one-hot vector and so (<ref>) in fact reduces to 𝔼_M” | Y⃗ℒ_entr(y⃗(X⃗), Y⃗) = ∑_j=1^c π_j” y_j(X⃗) - y_i(X⃗) for some i. The arguments in this section are not intended to be anything more than heuristic, so as to justify our study of a⃗^Ty⃗ for some constant vector a⃗ instead of the actual loss function of a neural network. The modelling assumptions required are no stronger than those used in <cit.>.
§.§ Network outputs as spin glass-like objects
We assume that the activation function, f, can be well approximated by a piece-wise linear function with finitely many linear pieces. To be precise, given any ϵ > 0 there exists some positive integer L and real numbers {α_i, β_i}_i=1^L and real a_1 < a_2 < … < a_L-1 such that |f(x) - (α_i+1 x + β_i+1)| < ϵ   ∀ x∈(a_i, a_i+1],   1 ≤ i ≤ L-2, |f(x) - (α_1 x + β_1)| < ϵ   ∀ x∈(-∞, a_1], |f(x) - (α_L x + β_L)| < ϵ   ∀ x∈(a_L-1, ∞). Note that the {α_i, β_i}_i=1^L and {a_i}_i=1^L-1 are constrained by L-1 equations to enforce continuity, viz. α_i+1a_i + β_i+1 = α_ia_i + β_i,      1≤ i ≤ L-1. A continuous piece-wise linear function with L pieces f̂(x; {α_i, β_i}_i=1^L , {a_i}_i=1^L-1) is an (L,ϵ)-approximation to a function f if |f(x) - f̂(x; {α_i, β_i}_i=1^L , {a_i}_i=1^L-1)| < ϵ for all x∈ℝ. Given the above definition, we can establish the following. Let f̂(· ; {α_i, β_i}_i=1^L , {a_i}_i=1^L-1) be an (L, ϵ)-approximation to f.
Assume that all the W^(i) are bounded in Frobenius norm[Recall assumption <ref>, which is translated here to imply bounded Frobenius norm.]. Then there exists some constant K>0, independent of all W^(i), such that ‖ f(W^(H)f(W^(H-1)f(… f(W^(1)x⃗)…))) - f̂(W^(H)f̂(W^(H-1)f̂(…f̂(W^(1)x⃗)…)))‖_2 < Kϵ for all x⃗∈ℝ^d. Suppose that (<ref>) holds with H-1 in place of H. Because f̂ is piece-wise linear and continuous then we clearly have |f̂(x) - f̂(y)| ≤max_i=1,…, L{|α_i|} |x-y|≡ K'|x-y| which can be seen by writing f̂(x) - f̂(y) = (f̂(x) - f̂(a_i)) + (f̂(a_i) - f̂(a_i-1)) + … + (f̂(a_j+1) - f̂(a_j)) + (f̂(a_j) - f̂(y)) for all intermediate points a_j, …, a_i ∈ (y, x). Using (<ref>) and our induction assumption we obtain ‖f̂(W^(H)f(W^(H-1)f(W^(H-2)f(… f(W^(1)x⃗)…))) - f̂(W^(H)f̂(W^(H-1)f̂(W^(H-2)f̂(…f̂(W^(1)x⃗)…)))‖_2 ≤ cK'‖ W^(H)[f(W^(H-1)f(W^(H-2)f(… f(W^(1)x⃗)…))) - f̂(W^(H-1)f̂(W^(H-2)f̂(…f̂(W^(1)x⃗)…)))]‖_2 ≤ cKK'‖ W^(H)‖_F ϵ ≤ K”ϵ, for some K”, where on the last line we have used the assumption that the network weights are bounded to bound ‖ W^(H)‖_F. The result for H=1 follows immediately from (<ref>). One could be more explicit in the construction of the piece-wise linear approximation f̂ from f given the error tolerance ϵ by following e.g. <cit.>. We do not develop this further here as we do not believe it to be important to the practical implications of our results. In much the same vein as <cit.> (c.f. Lemma 8.1 therein), we now use the following general result for classifiers to further justify our study of approximations to a neural network in the rest of the chapter. Let Z_1 and Z_2 be the outputs of two arbitrary c-class classifiers on a dataset 𝒳. That is, Z_1(x),Z_2(x) take values in {1,2,…, c} for x∈𝒳. If Z_1 and Z_2 differ on no more than ϵ|𝒳| points in 𝒳, then corr(Z_1, Z_2) = 1 - 𝒪(ϵ) where, recall, the correlation of two random variables is given by 𝔼(Z_1Z_2) - 𝔼Z_1𝔼Z_2/std(Z_1)std(Z_2). Let _i⊂ be the set of data points for which Z_1=i for i=1,2, …, c. Let _i,j⊂_i be those points for which Z_1 = i but Z_2 = j where j≠ i. Define the following: p_i = |_i|/||,    ϵ_i^+ = ∑_j≠ i|_i,j|/||,    ϵ_i^- = ∑_j≠ i|_j,i|/||. We then have Z_1 = ∑_i=1^c i p_i, Z_2 = ∑_i=1^c i (p_i - ϵ^+_i + ϵ^-_i) Z_1Z_2 = ∑_i=1^c i^2 (p_i - ϵ_i^+) + ∑_1≤ i < j ≤ c ij|_i,j| + |_j,i|/|| std(Z_1) = [∑_i=1^c i^2p_i - ∑_i,jijp_ip_j]^1/2 std(Z_2) = [∑_i=1^c i^2(p_i -ϵ_i^+ + ϵ_i^-) - ∑_i,jij(p_i - ϵ_i^+ + ϵ_i^-)(p_j - ϵ_j^+ + ϵ_j^-)]^1/2. Now, by assumption ∑_i ϵ_i^±≤𝒪(ϵ) and so ϵ_i^±≤𝒪(ϵ) for all i. Similarly, |_i,j|/|| ≤𝒪(ϵ) and so we quickly obtain from (<ref>)-(<ref>) cov(Z_1, Z_2) = ∑_i=1^c i^2p_i - ∑_i,jijp_ip_j + 𝒪(ϵ). Finally, combining (<ref>) - (<ref>) we obtain corr(Z_1, Z_2) = 1 + 𝒪(ϵ)/(1 + 𝒪(ϵ))^1/2 = 1 + 𝒪(ϵ). The final intermediate result we require gives an explicit expression for the output of a neural network with a piece-wise linear activation function. Consider the following neural network ŷ⃗̂(x⃗) = f̂(W^(H)f̂(…f̂(W^(1)x⃗)…)) where f̂(·; {α_i, β_i}_i=1^L , {a_i}_i=1^L-1) is a piece-wise linear function with L pieces. 
Then there exist A_i,j taking values in 𝒜{∏_i=1^H α_j_i  :  j_1,…, j_H ∈{1,…, L}} and A^(ℓ)_i,j taking values in 𝒜^(ℓ){β_k∏_r=1^H-ℓα_j_r :  j_1,…, j_H-ℓ, k ∈{1,…, L}} such that ŷ_̂î(x⃗) = ∑_j=1^d ∑_k∈Γ_ix_j,k A_j,k∏_l=1^H w_j,k^(l) + ∑_ℓ=1^H∑_j=1^n_ℓ∑_k∈Γ_i^(ℓ) A_j,k^(ℓ)∏_r=ℓ +1^H w_j,k^(r) where Γ_i is an indexing of all paths through the network to the i-th output neuron, Γ_i^(ℓ) is an indexing of all the paths through the network from the ℓ-th layer to the i-th output neuron, w_j,k^(l) is the weight applied to the j-th input on the k-th path in the l-th layer, x_j,k = x_j, and n_ℓ is the number of neurons in layer ℓ. Firstly, for some j=1,…, L f̂(W^(1)x⃗)_i = α_j (W^(1)x⃗)_i + β_j and so there exist j_1, j_2,…∈{1,…, L} such that [W^(2)f̂(W^(1)x⃗)]_i = ∑_k W^(2)_ik( α_j_k(W^(1)x⃗)_k + β_j_k) = ∑_k α_j_kW^(2)_ik∑_l W^(1)_klx_l + ∑_k W^(2)_ikβ_j_k. Continuing in the vein of (<ref>), there exist k_1,k_2, …∈{1, …, L} such that f̂(W^(2)f̂(W^(1)x⃗))_i = α_k_i∑_r α_j_rW_ir^(2)∑_l W_kl^(1)x_l + α_k_i∑_r W^(2)_irβ_j_r + β_k_i from which we can see that the result follows by re-indexing and induction. We now return to the neural network y⃗(·). Fix some small ϵ>0, let f̂(·; {α_i, β_i}_i=1^L, {x_i}_i=^L-1) be a (L,ϵ)-approximation to f and let ŷ⃗̂ be the same network as y⃗ but with f replaced by f̂. By Lemma <ref>, we have[Here we use the standard notation that, for a function p on ℬ, p≲ϵ if there exists a constant K such that p(x) ≤ Kϵ for all x∈ℬ.] ‖y⃗(x⃗) - ŷ⃗̂(x⃗)‖_2 ≲ϵ for all x⃗∈ℝ^d, and so we can adjust the weights of ŷ⃗̂ to obtain a network with accuracy within 𝒪(ϵ) of y⃗. We then apply Lemma <ref> to ŷ⃗̂ and assume[This assumption is the natural analogue of the assumption used in <cit.>.] that the A_i,j and A_i,j^(ℓ) can be modelled as i.i.d. discrete random variables with A_i,j = ρ,     A_i,j^(ℓ) = ρ_ℓ and then ŷ_i(X⃗) = ρ_x⃗∑_j=1^d ∑_k∈Γ_iX_j,k∏_l=1^H w_j,k^(l) + ∑_ℓ=1^H ρ_ℓ∑_j=1^n_ℓ∑_k∈Γ_i^(ℓ)∏_r=ℓ +1^H w_j,k^(r). Our reasoning is now identical to that in Section 3.3 of <cit.>. We use the assumptions of sparsity and uniformity (Section <ref>, assumptions <ref>, <ref>) and some further re-indexing to replace (<ref>) by ỹ_i(X⃗) = ρ_X⃗∑_i_1, …, i_H = 1^Λ X_i_1, …, i_H∏_k=1^H w_i_k + ∑_ℓ=1^Hρ_ℓ∑_i_ℓ + 1, …, i_H=1^Λ∏_k=ℓ +1^H w_i_k where Λ is the number of unique weights of the network and, in particular, the sparsity and uniformity assumptions are chosen to give _X⃗‖ỹ⃗̃(X⃗) - ŷ⃗̂(X⃗)‖_2 ≲ϵ. (<ref>) and (<ref>) now give _X⃗‖ỹ⃗̃(X⃗) - y⃗(X⃗)‖_2 ≲ϵ and in the case of classifiers, (<ref>) ensures that the conditions for Theorem <ref> are met, so establishing that corr(ỹ⃗̃(X⃗) ,y⃗(X⃗)) = 1 - 𝒪(ϵ). As in <cit.>, we use these heuristics to justify studying ỹ⃗̃ hereafter in place of y⃗. Recalling the results of Section <ref>, in particular (<ref>) and (<ref>) we conclude that to study the loss surface of ỹ⃗̃ under some loss function it is sufficient to study quantities of the form ∑_i=1^c η_i ỹ_i and, in particular, we study the critical points. The X are centred Gaussian random variables and so any finite weighted sum of some X is a centred Gaussian variable with some variance. We can re-scale variances and absorb constants into the ρ_ℓ and thereby replace ∑_i η_i ỹ_i(X⃗) with ỹ_i(X⃗). Note that we assumed an L_2 constraint on the network weights (Section <ref>, point 6) and that now carries forward as 1/Λ∑_i=1^Λ w_i^2 = 𝒞 for some constant 𝒞. 
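The (L, ϵ)-approximation machinery above is simple to illustrate numerically: replacing a smooth activation by a piece-wise linear interpolant perturbs the network output by an amount controlled by the sup-norm gap, uniformly over inputs, as established above. The following Python sketch is our own toy construction; the choice of tanh, the knot grid and the layer widths are arbitrary and are not the settings used elsewhere in this chapter.

import numpy as np

rng = np.random.default_rng(2)
d, widths = 20, [50, 50, 10]
Ws = [rng.standard_normal((widths[0], d)) / np.sqrt(d)]
Ws += [rng.standard_normal((widths[i + 1], widths[i])) / np.sqrt(widths[i])
       for i in range(len(widths) - 1)]

f = np.tanh

def f_hat(x, knots):
    # continuous piece-wise linear interpolant of tanh on the knot grid,
    # extended constantly outside it (tanh has effectively saturated there)
    return np.interp(x, knots, np.tanh(knots))

def forward(x, act):
    for W in Ws:
        x = act(W @ x)
    return x

for L in [5, 9, 17, 33]:                       # number of linear pieces
    knots = np.linspace(-4.0, 4.0, L)
    errs = [np.linalg.norm(forward(x, f) - forward(x, lambda t: f_hat(t, knots)))
            for x in rng.standard_normal((100, d))]
    print(L, max(errs))                        # output discrepancy shrinks as L grows

The worst-case output discrepancy decreases as the number of linear pieces grows, mirroring the Kϵ bound established above.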
For ease of notation in the rest of the chapter, we define g(w⃗) = ∑_i_1, …, i_H = 1^Λ X_i_1, …, i_H∏_k=1^H w_i_k + ∑_ℓ=1^Hρ_ℓ' ∑_i_ℓ + 1, …, i_H=1^Λ∏_k=ℓ +1^H w_i_k where ρ_ℓ' = ρ_ℓ/ρ. Finally, recall that we assumed the data entries X_i are i.i.d. standard Gaussians. To allow further analytic progress to be made, we follow <cit.> and now extend this assumption to X_i_1, …, i_H i.i.d.∼𝒩(0,1). The random function g is now our central object of study and, without loss of generality, we take 𝒞=1 in (<ref>) so that g is a random function on the (Λ-1)-sphere of radius √Λ. Observe that the first term in (<ref>) is precisely the form of an H-spin glass as found in <cit.> and the second term is deterministic and contains (rather obliquely) all the dependence on the activation function. Having demonstrated the link between our results and those in <cit.>, we now set Λ = N for convenience and to make plain the similarities between what follows and <cit.>. We also drop the primes on ρ_ℓ'.
§.§ Validity of the modelling assumptions
The authors of <cit.> discuss the modelling assumptions in <cit.>. We add to their comments that the hyper-sphere assumption <ref> seems easily justifiable as merely L_2 weight regularisation. Assumption <ref> from Section <ref> is perhaps the least palatable, as the section of a piece-wise linear activation function in which a pre-activation value lies is a deterministic function of that pre-activation value and so certainly not i.i.d. across the network and the data items. It is not clear how to directly test the assumption experimentally, but we can certainly perform some experiments to probe its plausibility. For the sake of clarity, consider initially a ReLU activation function. Let 𝒩 be the set of all nodes (neurons) in a neural network, and let 𝒟 be a dataset of inputs for this network. Assumption <ref> says that we can model the action of the activation function at any neuron 𝔫∈𝒩 and any data point x⃗∈𝒟 as i.i.d. Bernoulli random variables. In particular, this is why the expectations over the activation function indicators and the data distribution can be taken independently in (<ref>). If one fixes some neuron 𝔫∈𝒩, and observes its pre-activations over all data points in 𝒟, one will observe some proportion ρ^𝔫 of positive values. Assumption <ref> implies that this proportion should be approximately the same for each 𝔫∈𝒩, namely p, where p is the success probability of the Bernoulli. Taking all of the ρ^𝔫 together, their empirical distribution should have low variance and be centred on p. More precisely, for large |𝒟| each ρ^𝔫 should be close in distribution to i.i.d. Gaussian with mean p and variance of order |𝒟|^-1, a fact that can be derived simply from the central limit theorem applied to i.i.d. Bernoulli random variables. Similarly, assumption <ref> implies that one can exchange data points and neurons in the previous discussion and so observe proportions ρ̅^x⃗ for each x⃗∈𝒟, which again should have an empirical distribution centred on p and with low variance. The value of p is not prescribed by any of our assumptions and nor is it important; all that matters is that the distributions of {ρ^𝔫}_𝔫∈𝒩 and {ρ̅^x⃗}_x⃗∈𝒟 are strongly peaked around some common mean.
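This diagnostic is straightforward to carry out. The following Python sketch, a minimal illustration of our own (the widths, depth and sample size are placeholders, not the settings of the experiments reported below), computes the per-neuron proportions ρ^𝔫 and per-datum proportions ρ̅^x⃗ for a randomly initialised ReLU MLP on i.i.d. Gaussian data.

import numpy as np

rng = np.random.default_rng(3)
d, widths, n_data = 784, [100, 100, 100], 10000
X = rng.standard_normal((n_data, d))

# random ReLU MLP; record which linear piece is active at every hidden neuron
acts, h = [], X
for w_in, w_out in zip([d] + widths[:-1], widths):
    W = rng.standard_normal((w_in, w_out)) / np.sqrt(w_in)
    pre = h @ W
    acts.append(pre > 0)              # indicator of the positive piece of ReLU
    h = np.maximum(pre, 0.0)

A = np.concatenate(acts, axis=1)      # shape (n_data, total number of hidden neurons)
rho_neuron = A.mean(axis=0)           # rho^n     : proportion over the data set, per neuron
rho_datum = A.mean(axis=1)            # rho-bar^x : proportion over neurons, per datum
print("per-neuron mean/std:", rho_neuron.mean(), rho_neuron.std())
print("per-datum  mean/std:", rho_datum.mean(), rho_datum.std())

The printed means and standard deviations quantify how strongly peaked the two empirical distributions are, which is precisely what the assumption requires; the experiments below probe how this behaviour is affected by training, real data and convolutional architectures.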
We will now generalise the previous discussion to the case of any number of linear pieces of the activation function. Suppose that the activation function is piece-wise linear in L pieces and denote by I_1, …, I_L the disjoint intervals on which the activation function is linear; {I_i}_i=1^L partition ℝ. Let ι(x⃗, 𝔫) be defined so that the pre-activation to neuron 𝔫∈𝒩 when evaluating at x⃗∈𝒟 lies in I_ι(x⃗, 𝔫). We consider two scenarios, data averaging and neuron averaging. Under data averaging, we fix a neuron and observe the pre-activations over all of 𝒟, i.e. define for j=1,…, L the counts χ_j^𝔫 = |{x⃗∈𝒟 : ι(x⃗, 𝔫) = j}| and thence the L-1 independent ratios ρ_j^𝔫 = χ_j^𝔫/∑_i=1^L χ_i^𝔫 for j=2,…, L. Similarly, in neuron averaging we define χ̅_j^x⃗ = |{𝔫∈𝒩 : ι(x⃗, 𝔫) = j}|, ρ̅_j^x⃗ = χ̅_j^x⃗/∑_i=1^Lχ̅_i^x⃗. We thus have the sets of observed real quantities R_j = {ρ_j^𝔫 : 𝔫∈𝒩}, R̅_j = {ρ̅_j^x⃗ : x⃗∈𝒟}. Under assumption <ref>, the empirical variance of the values in R_j and R̅_j should be small. We run experiments to interrogate this hypothesis under a variety of conditions. In particular:
* Standard Gaussian i.i.d. data vs. `real' data (MNIST digits <cit.>).
* Multi-layer perceptron (MLP) vs. convolutional (CNN) architecture.
* Trained vs. randomly initialised weights.
* Various piece-wise linear activation functions.
In particular:
* We generate 10000 i.i.d. Gaussian data vectors of length 784 (to match the size of MNIST digits).
* We fix an MLP architecture of 5 layers and a CNN architecture with 3 convolutional layers and 2 fully-connected. The exact architecture details are given in the Appendix.
* We train all networks to test accuracy of at least 97% and use dropout with rate 0.1 during training.
* We test ReLU (2 pieces), HardTanh (3 pieces) and a custom 5-piece function. Full details are given in Appendix <ref>.
To examine the R_j and R̅_j, we produce histograms of R_2 for L=2 (i.e. ReLU), joint density plots of (R_2, R_3) for L=3 (i.e. HardTanh) and pair-plots of (R_2, R_3, R_4, R_5) for L=5. We are presently only interested in the size of the variance shown, but these full distribution plots are included in case any further interesting observations can be made in the future. Figures <ref>-<ref> show the results for ReLU activations and Figures <ref>-<ref> show the results for HardTanh. The qualitative trends are much the same for all three activation functions, but the plots for the 5-piece function are very large and so are relegated to the supplementary material[<https://github.com/npbaskerville/loss-surfaces-general-activation-functions/blob/master/Loss_surfaces_of_neural_networks_with_general_activation_functions___supplimentary.pdf>]. We make the following observations:
* The variance of R̅_2 is `small' in all cases for ReLU networks except when evaluating MNIST-trained MLP networks on i.i.d. random normal data. This is the least relevant case practically.
* For R_2, the results are much less convincing, though we do note that, with random weights and i.i.d. data, the MLP network does have quite a strongly peaked distribution. In other cases the variance is undeniably large.
* The variance of R̅_2,3 is `small' in all cases for HardTanh except when evaluating LeNet architectures on MNIST data.
* For R_3 in HardTanh networks, the variance seems to be low when the weights are random, but not when trained.
Overall, we see that in some circumstances, particularly with un-trained weights, the assumption <ref> is not as unreasonable as it first sounds. More importantly for the present work, comparing the three examined activation functions supports the hypothesis that, insofar as modeling the action of the activation function by independent Bernoulli random variables was valid in <cit.>, our analogous modelling of the action of general piece-wise linear functions by independent discrete random variables is also valid.
Put another way, it does not appear that the assumptions we make here are any stronger than those made in <cit.>. We finally note an interesting comparison between, for example, Figures <ref> and <ref>, or equally Figures <ref> and <ref>. In both cases, the variance is low for both distributions, and the only difference between the two experiments is the evaluation data, being i.i.d. Gaussian in the one case, and MNIST in the other. These results seem to demonstrate that the assumption of i.i.d. Gaussian data distribution is not trivialising the problem as one might expect a priori. Taking all of the results of this section together, we see that the case for our extension of <cit.> is quite strong, but there are clearly realistic cases where the modelling assumptions applied to activation functions in <cit.> are convincingly violated. § STATEMENT OF RESULTS We shall use complexity to refer to any of the following defined quantities which we define precisely as they appear in <cit.>. For a Borel set B⊂ℝ and non-negative integer k, let ^g(B) = |{w⃗∈NS^N-1 :  g(w⃗)=0, g(w⃗)∈ B,   i(^2g)=k}| where i(M) for a square matrix M is the index of M, i.e. the number of negative eigenvalues of M. We also define the useful generalisation x(M) to be the number of eigenvalues of M less than x, so 0(M) = i(M). For a Borel set B⊂ℝ, let ^g(B) = |{w⃗∈NS^N-1 :  g(w⃗)=0, g(w⃗)∈ B}|. We now state our main identities, which we find simpler to prove by scaling w⃗ to lie on the hyper-sphere of unit radius: h(w⃗) N^-H/2g(Nw⃗). For convenience, we define ρ_ℓ^(N) = ρ_ℓ N^-ℓ/2 so that, recalling the form of g in (<ref>), we obtain h(w⃗) = ∑_i_1, …, i_H = 1^Λ X_i_1, …, i_H∏_k=1^H w_i_k + ∑_ℓ=1^Hρ_ℓ^(N)∑_i_ℓ + 1, …, i_H=1^Λ∏_k=ℓ +1^H w_i_k. Though the complexities have been defined using general Borel sets, as in <cit.>, we focus on half-infinite intervals (-∞, u), acknowledging that everything that follows could be repeated instead with general Borel sets mutatis mutandis. We will henceforth be studying the following central quantities (note the minor abuse of notation): ^h(Nu) = |{w⃗∈ S^N-1 :  h(w⃗)=0, h(w⃗)∈Nu,   i(^2h)=k}|, ^h(Nu) = |{w⃗∈ S^N-1 :  h(w⃗)=0, h(w⃗)∈Nu}| and it will be useful to define a relaxed version of (<ref>) for 𝒦⊂{0,1,…, N}: C_N, 𝒦^h(Nu) = |{w⃗∈ S^N-1 :  h(w⃗)=0, h(w⃗)∈Nu,   i(^2h)∈𝒦}|. Our main results take the form of two theorems that extend Theorems 2.5 and 2.8 from <cit.> to our more general spin glass like object g, and a third theorem with partially extends Theorem 2.17 of <cit.>. In the case of Theorem 2.8, we are able to obtain exactly the same result in this generalised setting. For Theorem 2.5, we have been unable to avoid slackening the result slightly, hence the introduction of the quantity C^h_N, 𝒦 above. In the case of Theorem 2.17, we are only able to perform the calculations of the exact leading order term in one case and obtain a term very similar to that in <cit.> but with an extra factor dependent on the piece-wise linear approximation to the generalised activation function. This exact term correctly falls-back to the term found in <cit.> when we take f=. theoremauffindk Recall the definition of ^h in (<ref>) and let Θ_H be defined as in <cit.>: Θ_H(u) = 1/2log(H-1) - H-2/4(H-1)u^2 - I_1(u; E_∞)    if u≤ -E_∞, 1/2log(H-1) - H-2/4(H-1)u^2 if -E_∞≤ u ≤ 0, 1/2log(H-1) if 0≥ u, where E_∞ = 2H-1/H, and I_1(·; E) is defined on (-∞, -E] as in <cit.> by I_1(u; E) = 2/E^2∫_u^-E (z^2 - E^2)^1/2 dz = -u/E^2u^2 - E^2 - log(-u + u^2 - E^2) + log E, then lim_N→∞1/Nlog C_N^h(Nu) = Θ_H(u). 
theoremauffdepk Recall the definition of C_N, 𝒦^h in (<ref>) and let Θ_H,k be defined as in <cit.>: Θ_H,k(u) = 1/2log(H-1) - H-2/4(H-1)u^2 - (k+1)I_1(u; E_∞)    if u≤ -E_∞, 1/2log(H-1) - H-2/H if u > -E_∞, then, with 𝒦 = {k-1, k, k+1} for k>0, Θ_H,k+1(u) ≤lim_N→∞1/Nlog C_N,𝒦^h(Nu) ≤Θ_H,k-1(u) and similarly with 𝒦 = {0, 1} Θ_H,1(u) ≤lim_N→∞1/Nlog C_N,𝒦^h(Nu) ≤Θ_H,0(u). Note that Theorem <ref> holds for networks (equivalently, pure multi-spin glass models), as indeed it must. It can be seen as an immediate (weaker) consequence of the Theorem 2.5 in <cit.> of which it is an analogue in our more general setting. theoremauffexact Let u<-E_∞ and define v = -2u/E_∞. Define the function h by (c.f. (7.10) in <cit.>) h(v) = (|v - 2|/|v + 2|)^1/4 + (|v + 2|/|v - 2|)^1/4, and the functions q(θ') = 1/2sin^2 2θ' + 1/4(3+4cos 4θ'), j(x, s_1, θ') = 1 + 1/2s_1x^2 - 2h(x)^2 - s_1^2 q(θ')|x^2 - 2|h(x)^2, T(v, s_1) = 2/π∫_0^π/2j(-v, s_1, θ')dθ'. The N-1 × N-1 deterministic matrix S is defined subsequently around (<ref>). S has fixed rank r=2 and non-zero eigenvalues {s_1, N^-1/2s_2} where s_j = 𝒪(1). The specific form of S is rather cumbersome and uninformative and so is relegated to Appendix <ref>, and the vector v⃗ is defined in Lemma <ref>. Then we have C_N^h(Nu) ∼N^-1/2/2π He^-v⃗^2/2HT(v, s_1) h(v) e^NΘ_H(u)e^I_1(u; E_∞) - 1/2u I_1'(u; E_∞)/H-2/2(H-1)u + I_1'(u; E_∞). We include in Figures <ref> and <ref> plots of the functions Θ_H and Θ_H,k for completeness, though these figures are precisely the same as those appearing in <cit.>. The critical observation from these plots is that each of the Θ_H,k and Θ_H are monotonically increasing and that there exist unique E_0 > E_1 > … > E_∞ such that Θ_H,k(-E_k) = 0 and so the critical values -E_k are the boundaries between regions of exponentially many and `exponentially few' critical points of each respective index. It is interesting to compare the expression (<ref>) to the analogous expression for the model of <cit.>. In that work, when scaled to the unit hypersphere and scaled so that the spin glass term is composed of 𝒪(1) terms, the scale of the deterministic term is 𝒪(N^1/2), while the corresponding scale in (<ref>) is 𝒪(N^-1/2). Based on this, one might well conjecture Theorem <ref> and Theorem <ref>, however one would have no means by which to conjecture Theorem <ref>, and as far we can see no means to prove Theorem <ref> and Theorem <ref>. As mentioned in the introduction, the single fixed distinguished direction in <cit.> is quite a special feature and is not present in (<ref>). § GOE EXPRESSIONS FOR THE COMPLEXITY FROM KAC-RICE FORMULAE In this section we conduct analysis similar to that in <cit.> to obtain expressions for the the expected number of critical points of the function h as defined in (<ref>). We start with an elementary lemma deriving the 2-point covariance function for h. For w⃗∈ S^N-1, h is defined as in (<ref>): h(w⃗) = ∑_i_1, …, i_H = 1^Λ X_i_1, …, i_H∏_k=1^H w_i_k + ∑_ℓ=1^Hρ_ℓ^(N)∑_i_ℓ + 1, …, i_H=1^Λ∏_k=ℓ +1^H w_i_k,      X_i_1, …,, i_Hi.i.d.∼𝒩(0,1). For any w⃗, w⃗'∈ S^N-1 the following holds Cov(h(w⃗), h(w⃗')) = (w⃗·w⃗')^H. Let us begin by writing h(w⃗) = ∑_i_1, …, i_H = 1^N X_i_1, …, i_H∏_k=1^H w_i_k + h^(2)(w⃗) ≡ h^(1)(w⃗) + h^(2)(w⃗) where h^(2) is deterministic. 
Then we have Cov(h(w⃗), h(w⃗')) ≡[h(w⃗)h(w⃗')] - h(w⃗) h(w⃗') = [ h^(1)(w⃗) h^(1)(w⃗') - h^(1)(w⃗)h^(2)(w⃗') - h^(2)(w⃗)h^(1)(w⃗') + h^(2)(w⃗)h^(2)(w⃗')]     - h^(2)(w⃗)h^(2)(w⃗') = [ h^(1)(w⃗)h^(1)(w⃗')] = ∑_i_1,… i_H=1^N ∏_k=1^H w_i_kw_i_k' = ∏_k=1^H ∑_i_k=1^N w_i_kw_i_k' = (w⃗·w⃗')^H where we have used h^(1) = 0 in going from the first to the second and the second to the third lines. The following lemma calculates the full joint and thence conditional distribution of h and its first and second derivatives. The calculations follow closely those of <cit.> and the results are required for later use in a Kac-Rice formula. Pick some Cartesian coordinates on S^N-1 and let w⃗ be the north-pole of the sphere w⃗ = (1,0,0,…). Let h_i = ∂_i h(w⃗) and h_ij = ∂_i∂_j h(w⃗) where {∂_i}_i=1^N-1 are the coordinate basis around w⃗ on the sphere. Then the following results hold. * For all 1 ≤ i,j,k < N, h(w⃗), h_i(w⃗), h_jk(w⃗) are Gaussian random variables whose distributions are given by [h(w⃗)] = ∑_ℓ=1^H ρ_ℓ^(N) Var [h(w⃗)] = 1 h_i(w⃗) = ∑_ℓ=1^H-1ρ_ℓ^(N)[(H-ℓ) + (H - ℓ - 1)δ_i1]≡ v_i [h_ij(w⃗)] = ∑_ℓ=1^H-2ρ_ℓ^(N){[(H-ℓ)(H-ℓ-1) +1] δ_i1δ_j1 + (H-ℓ - 2)(δ_i1 + δ_j1) +1 } Cov(h(w⃗), h_i(w⃗)) = 0 Cov(h_i(w⃗), h_jk(w⃗)) = 0 Cov(h_i(w⃗), h_j(w⃗)) = Hδ_ij Cov(h(w⃗), h_ij(w⃗)) = -Hδ_ij Cov(h_ij(w⃗), h_kl(w⃗)) = H(H-1)(δ_ikδjl + δ_ilδ_kl) + H^2 δ_ijδ_kl. To reiterate, note that we define the vector v⃗ in (<ref>) as v_i = ∑_ℓ=1^H-1ρ_ℓ^(N)[(H-ℓ) + (H - ℓ - 1)δ_i1]. * Make the following definitions: ξ_0 = ∑_ℓ=1^Hρ_ℓ^(N) ξ_1 = ∑_ℓ=1^H-2ρ_ℓ^(N)[(H-ℓ)(H-ℓ -1) +1 ] ξ_2 =∑_ℓ=1^H-2ρ_ℓ^(N)(H-ℓ - 2) ξ_3 = ∑_ℓ=1^H-2ρ_ℓ^(N) Then, conditional on h(w⃗) = x, for x∈ℝ, the random variables h_ij(w⃗) are independent Gaussians satisfying [h_ij(w⃗)  |  h(w⃗)=x] =ξ_3 + ξ_2(δ_i1 + δ_j1) + ξ_1δ_i1δ_j1 - (x-ξ_0)δ_ij Var[h_ij(w⃗)  |  h(w⃗)=x] = H(H-1)(1+δ_ij). Or, equivalently, (h_ij(w⃗)  |  h(w⃗)=x) ∼2(N-1)H(H-1)( M^N-1- 1/2(N-1)H(H-1) H(x- ξ_0)I + S) where M^N-1∼ GOE^N-1 and the matrix S is given by S_ij = 1/2(N-1)H(H-1)(ξ_3 + ξ_2(δ_i1 + δ_j1) + ξ_1δ_i1δ_j1). Clearly all entries of S are of order N^-1, recalling the scale of ρ_ℓ^(N) given in (<ref>). Moreover, S is of rank 2 and has eigenvalues {s_1, N^-1/2s_2} for real s_i=𝒪(1). * Becuase the X_i_1, …, i_H are centred Gaussians and w⃗ = (1,0,0,…, 0), we immediately obtain (<ref>). (<ref>)-(<ref>) can be seen to be true similarly, e.g. (<ref>) by observing that the stochastic term is again zeroed-out by taking the expectation and the only terms that survive in the non-stochastic part are of the form ∂^2/∂ w_i ∂ w_jw_iw_j w_1^H-ℓ-2  (i,j≠ 1),    ∂^2/∂ w_i ∂ w_1 w_i w_1^H-ℓ-1  (i≠ 1),    ∂^2/∂ w_1^2 w_1^H-ℓ. The remaining results (<ref>), (<ref>)-(<ref>) all match those in Lemma 3.2 of <cit.> and follow similarly from Lemma <ref> and the following (<cit.>): Cov(∂^kh̅(x)/∂ x_i_1…∂ x_i_k, ∂^lh̅(y)/∂ y_j_1…∂ y_j_l) = ∂^k+lCov(h̅(x),h̅(y))/∂ x_i_1…∂ x_i_k∂ y_j_1…∂ y_j_l where h̅ h∘Φ^-1 and Φ is a coordinate chart around w⃗. * (<ref>), (<ref>) and the conditional independence result follow from (<ref>), (<ref>), (<ref>), (<ref>) and the standard result for the conditional distribution of one Gaussian under another (see e.g. <cit.> Section 2.5), just as in the proof of Lemma 3.2 in <cit.>. To show (<ref>), recall that a GOE^N matrix is a real symmetric random matrix M and whose entries are independent centred Gaussians with with M_ij^2 = 1+δ_ij/2N. Finally we have to determine the eigenvalues of S. 
With a = ξ_1 + 2ξ_2 + ξ_3, b=ξ_2 + ξ_3 and c=ξ_3, S has entries S = 1/2(N-1)H(H-1)([ a b b … b; b c c … c; b c c … c; ⋮ ⋮ ⋮ ⋮ ⋮; b c c … c; ]), and so has non-null eigenvectors (1, u, u, …, u)^T with eigenvalues (2(N-1)H(H-1))^-1/2λ, where (after some simple manipulation) λ^2 - (a - c(N-1))λ + ca(N-1) - b^2(N-1) = 0,       u = λ - a/(N-1)b. Recalling the scale of ρ_ℓ^(N) = 𝒪(N^-ℓ/2) in (<ref>) and the definitions ξ_1,ξ_2, ξ_3, we see that a, b, c=𝒪(N^-1/2) and so one easily obtains two solutions for λ, one of order N^1/2 and another of order N^-1/2, hence S has two non-zero eigenvalues of order 1 and N^-1/2. Our next lemma establishes for use in this context a Kac-Rice fomula that will provide the first step in the computation of C^h_N and C^h_N, 𝒦. Let F̂ be a real-valued centred Gaussian field on S^N-1 that is almost surely (a.s.) C^2, F̃ be some non-random, real-valued C^2 function on S^N-1 and let F F̂ + F̃. Let 𝒜 = {U_α, Φ_α}_α∈ I be a finite atlas on S^N-1. Let h^α = h∘Φ_α^-1, and let h^α_i, h^α_ij denote derivatives of h in the coordinate basis of the chart (U_α, Φ_α). Assume that the joint distribution (F^α_i(x⃗), F^α_ij(x⃗)) is non-degenerate for all α and for all x⃗∈ S^N-1 and that there exist constants K_α, β >0 such that max_i,j|Var(F̂_ij^α(x⃗)) + Var(F̂_ij^α(y⃗)) - 2Cov(F̂_ij^α(x⃗), F̂_ij^α(y⃗))| ≤ K_α|log|x-y||^-1-β Then the following holds ^F(B) = ∫_S^N-1 p_x⃗(0) 𝒮_N-1(dx⃗) [|^2 F(x⃗)|{F(x⃗)∈ B,  i(^2F(x⃗)) = k} |  F(x⃗)=0] where p_x⃗ is the density of F at x⃗ and 𝒮_N-1 is the usual surface measure on S^N-1. Similarly, ^F(B) = ∫_S^N-1 p_x⃗(0) 𝒮_N-1(dx⃗) [|^2 F(x⃗)|{F(x⃗)∈ B} |  F(x⃗)=0] The proof of Lemma <ref> shall rely heavily on the Kac-Rice result Theorem <ref>. Following the proofs of Theorem 12.4.1 in <cit.> and Lemma 3.1 in <cit.>, we will apply Theorem <ref> to the choices ϕ F ψ (F, _i_jF) A B × A_k ≡ B ×{H∈Sym_N-1× N-1 |  i(H)=k}⊂ℝ×Sym_N-1× N-1, u⃗ = 0 Then, if the conditions of Theorem <ref> hold for these choices, we immediately obtain the result. It remains therefore to check the conditions of Theorem <ref>. Firstly, A is indeed an open subset of of ℝ×Sym_N-1× N-1 (in turn, isomorphic to some ℝ^K) as can be easily deduced from the continuity of a matrix's eigenvalues in its entries. Condition (a) follows from the assumption of F̂ being a.s. C^2 and F̃ being C^2. Conditions (b)-(f) all follow immediately from the Gaussianity of F̂. To establish condition (g), we define ω̂(η) and ω̃(η) in the obvious way and note that ω̃ is non-random. Then, because F̃ is continuous, given ϵ > 0 there exists some η_0 >0 such that for all η < η_0, ω̃(η) ≤ϵ. Let ω̃_0 ω̃(η_0) and choose some η_1 such that for all η < η_1, ω̃(η) < ω̃_0. We have ω(η) ≤ω̂(η) + ω̃(η) and so for η < η_1 ℙ(ω(η) > ϵ) ≤ℙ(ω̂(η) + ω̃(η) > ϵ) =ℙ(ω̂(η) > ϵ - ω̃(η)) ≤ℙ(ω̂(η) > ϵ - ω̃_0) and we note that ϵ - ω̃_0 ≥ 0 by construction. ω̂ is the modulus of continuity for a centred Gaussian field and so the condition (g) follows from (<ref>) and the assumption (<ref>) by the Borell-TIS inequality <cit.>, just as in the proof of Corollary 11.2.2 in <cit.>. (<ref>) is obtained in precisely the same way but simply dropping the i(H) = k condition. § ASYMPTOTIC EVALUATION OF COMPLEXITY In this section we conduct an asymptotic analysis of the GOE expressions for the complexity found in the preceding section. We first consider the case of counting critical points without any condition of the signature of the Hessian, which turns out to be easier. 
We then introduce the exact signature condition on the Hessian and proceed by presenting the necessary modifications to certain parts of our arguments. §.§ Complexity results with no Hessian signature prescription We need to establish a central lemma, which is a key step towards a generalisation of the results presented in <cit.> but established by entirely different means, following the supersymmetric calculations of <cit.>. Before this main lemma, we require a generalisation of a result from <cit.>, whose proof is given at the end of the chapter (Section <ref>). lemmafyodgeneral Given m vectors in ℝ^N x⃗_1, …, x⃗_m, denote by Q(x⃗_1, …, x⃗_m) the m× m matrix whose entries are given by Q_ij = x⃗_i^Tx⃗_j. Let F be any function of an m× m matrix such that the integral ∫_ℝ^N…∫_ℝ^Ndx⃗_1… dx⃗_m |F(Q)| exists, and let S be a real symmetric N× N matrix of fixed rank r and with non-zero eigenvalues {N^αs_i}_i=1^r for some α < 1/2. Define the integral 𝒥_N, m(F; S) ∫_ℝ^N…∫_ℝ^Ndx⃗_1… dx⃗_m F(Q) e^-iN∑_i=1^m x⃗_i^T Sx⃗_i. Then as N→∞ we have 𝒥_N,m(F; S) =( 1 + o(1)) π^m/2(N - m-1/2)/∏_k=0^m-1Γ(N-k/2)∫_Sym_≥ 0(m)dQ̂(Q̂)^N-m-1/2F(Q̂)∏_i=1^N∏_j=1^r( 1+ 2iN^αQ̂_iis_j)^-1/2. Now we state and prove the main lemma. Let S be a rank r N× N symmetric matrix with non-zero eigenvalues {s_j}_j=1^r, where r=𝒪(1) and s_j = 𝒪(1), and suppose S has all entries of order 𝒪(N^-1) in a fixed basis. Let x<0 and let M denote an N× N GOE matrix with respect to whose law expectations are understood to be taken. Then |(M - xI + S)| = K_Nlim_ϵ↘ 0 e^2N(x^2 - ϵ^2)(1 + o(1)) ∭_0^π/2 dθ dθ'dθ̂∬_0^∞dp_1dp_2 ∬_Γ dr_1dr_2 J_1(p_1, p_2, θ'; S, N)J_2(r_1,r_2, p_1, p_2)cos^22θsin2θsin2θ̂ exp{-N(2ψ^(+)_L(r_1; x; ϵcos2θcos2θ̂) +2ψ^(+)_U(r_2; x; ϵcos2θcos2θ̂) +ψ^(-)_L(p_1; x; ϵcos2θ')+ψ^(-)_U(p_2; x; ϵcos2θ'))} where J_1(p_1, p_2, θ'; {s_j}_j=1^r, N) =∏_j=1^r(1 + 2iN^1/2s_j(p_1 + p_2) - Ns_j^2[sin^22θ' (p_1^2 + p_2^2) + (3 + 4cos4θ')p_1p_2 ])^-1/2, J_2(r_1, r_2, p_1, p_2; ϵ) = (r_1 + p_1)(r_2 + p_1)(r_1 + p_2)(r_2 + p_2)|r_1 - r_2|^4 |p_1-p_2| (r_1r_2)^-2 (p_1p_2)^-3/2 and K_N = N^N+3(-i)^N /Γ(N/2)Γ(N-1/2) π^3/2 and the functions ψ^±_L, ψ^(±)_U are given by ψ^(±)_L(z; x,ϵ) = 1/2z^2 ± i(x+iϵ)z - 1/2log z, ψ^(±)_U(z;x,ϵ) = 1/2z^2 ± i(x-iϵ)z - 1/2log z, and Γ is a contour bounded away from zero in ℂ, e.g. that shown in Figure <ref>. We begin with the useful expression for real symmetric matrices A <cit.> | A| = lim_ϵ→ 0 A A/ (A - iϵ) (A +iϵ) where the limit is taken over real ϵ, and WLOG ϵ > 0. We're free to deform the matrices in the numerator for the sake of symmetry in the ensuing calculations, so | A| = lim_ϵ↘ 0 (A - iϵ) (A + iϵ)/ (A - iϵ) (A +iϵ). For convenience of notation we put Δ_ϵ(M; x, S) = (M - xI + S - iϵ) (M - xI + S + iϵ)/ (M - xI + S - iϵ) (M - xI + S +iϵ). Then we express the determinants and half-integer powers of determinants as Gaussian integrals over anti-commuting and commuting variables respectively as in <cit.> and <cit.>: Δ_ϵ(M; x, S) = K^(1)_N∫ dx⃗_1 dx⃗_2 dζ_1 dζ_1^† dζ_2 dζ_2^†exp{-ix⃗_1^T(M-(x + iϵ)I+S)x⃗_1 - ix⃗_2^T(M-(x-iϵ)I + S)x⃗_2} exp{ i ζ_1^†(M-(x+iϵ) I+S)ζ_1 + i ζ_2^†(M-(x - iϵ)I+S)ζ_2} where K^(1)_N = (-i)^N π^-N, which follows from standard facts about commuting Gaussian integrals and Berezin integration. The remainder of the calculation is very similar to that presented in <cit.> but we present it in full to keep track of the slight differences.
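As a quick numerical sanity check of the determinant identity that this proof starts from (an illustration of ours, not part of the argument, with arbitrary small values), one can verify that (A)^2 / √((A-iϵ)(A+iϵ)) approaches |A| as ϵ→0 for a random real symmetric matrix:

import numpy as np

rng = np.random.default_rng(0)

# Sanity check (ours) of |det A| = lim_{eps -> 0} det(A)^2 / sqrt(det(A - i eps I) det(A + i eps I))
# for a random real symmetric matrix A.
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2

for eps in [1e-1, 1e-3, 1e-6]:
    num = np.linalg.det(A) ** 2
    den = np.sqrt(np.linalg.det(A - 1j * eps * np.eye(n)) * np.linalg.det(A + 1j * eps * np.eye(n)))
    print(eps, (num / den).real, abs(np.linalg.det(A)))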
Let A = x⃗_1x⃗_1^T + x⃗_2x⃗_2^T + ζ_1ζ_1^† + ζ_2ζ_2^† and note that, by the cyclicity of the trace, x⃗_j^T(M-(x ± iϵ)I+S)x⃗_j = ((M-(x± iϵ)I+S)x⃗_jx⃗_j^T) ζ_j^†(M-(x ± i ϵ)I+S)ζ_j = -((M-(x± i ϵ)I+S)ζ_jζ_j^†) and so we can rewrite (<ref>) as Δ_ϵ(M; x, S) = K^(1)_N∫ dx⃗_1 dx⃗_2 dζ_1 dζ_1^† dζ_2 dζ_2^† exp{-i MA - i SA + i(x + iϵ)x⃗_1^Tx⃗_1 + i(x - iϵ)x⃗_2^Tx⃗_2 } exp{-i(x + iϵ)ζ_1^†ζ_1 -i (x - iϵ)ζ_2^†ζ_2 }. We then define the Bosonic and Fermionic matrices Q_B = ([ x⃗_1^Tx⃗_1 x⃗_1^Tx⃗_2; x⃗_2^Tx⃗_1 x⃗_2^Tx⃗_2 ]),    Q_F = ([ ζ_1^†ζ_1 ζ_1^†ζ_2; ζ_2^†ζ_1 ζ_2^†ζ_2 ]) and also B = x⃗_1x⃗_1^T + x⃗_2x⃗_2^T. Note that (<ref>) is true for all real symmetric matrices A and so for all real symmetric M,S and real values x we have lim_ϵ↘ 0Δ_ϵ(M; x, S) = |(M - xI + S)| and so with respect to the GOE law for M we certainly have Δ_ϵ(M; x, S) a.s.→ |(M - xI + S)|    as ϵ↘ 0 thus meaning that the ϵ↘ 0 limit can be exchanged with a GOE expectation over M. We therefore proceed with fixed ϵ>0 to compute the GOE expectation of Δ_ϵ. We have the standard Gaussian Fourier transform result for matrices: e^-i MA = exp{-1/8N(A + A^T)^2} and from <cit.>[Note that (4.100) in <cit.> contains a trivial factor of 4 error that has non-trivial consequences in our calculations.] (A+A^T)^2 = 4 Q_B^2 - 2 Q_F^2 + 4ζ_1^Tζ_2ζ_2^†ζ_1^* - 8ζ_1^†Bζ_1 - 8 ζ_2^†Bζ_2 so we can take the GOE average in (<ref>) and obtain Δ_ϵ(M; x, S) = K^(1)_N∫ dx⃗_1 dx⃗_2 dζ_1 dζ_1^† dζ_2 dζ_2^†exp{ -1/2N Q_B^2 - i SB + ix Q_B + ϵ Q_Bσ} exp{1/4N Q_F^2 - 1/2Nζ_1^Tζ_2ζ_2^†ζ_1^* + ∑_j=1^2 ζ_j^†(B/N + iS - i(x + i(-1)^j-1ϵ))ζ_j}. where we have defined σ = ([ -1 0; 0 1 ]). We can then use the transformation exp{1/4N Q_F^2} = N^2/π Vol(U(2))∫ dQ̂_F exp{-NQ̂_F^2 + Q_FQ̂_F} to obtain Δ_ϵ(M; x, S) = K^(2)_N ∫ dx⃗_1 dx⃗_2 dζ_1 dζ_1^† dζ_2 dζ_2^† dQ̂_F exp{ -1/2N Q_B^2 - i SB + ix Q_B + ϵ Q_Bσ} exp{-NQ̂_F^2 + Q̂_FQ_F - 1/2Nζ_1^Tζ_2ζ_2^†ζ_1^* + ∑_j=1^2 ζ_j^†(B/N + iS - i(x + i(-1)^j-1ϵ)ζ_j} where K^(2)_N = K^(1)_N N^2/π Vol(U(2)). The Fermionic cross-term in (<ref>) can be dealt with using (see <cit.> (4.104)) exp(-1/2Nζ_1^Tζ_2ζ_2^†ζ_1^*) = 2N/π∫ d^2u exp(-2Nu̅u - i(uζ_1^†ζ_2^* + u̅ζ_2^†ζ_1)) where d^2u = du  du, and so we obtain Δ_ϵ(M; x, S) = K^(3)_N∫ dx⃗_1 dx⃗_2 dζ_1 dζ_1^† dζ_2 dζ_2^† dQ̂_F d^2u exp{ -1/2N Q_B^2 - i SB + ix Q_B + ϵ Q_B σ} exp{-NQ̂_F^2 - 2N u u̅} exp{Q̂_FQ_F - i(uζ_1^†ζ_2^* + u̅ζ_2^Tζ_1) + ∑_j=1^2 ζ_j^†(B/N + iS - i(x + i(-1)^j-1ϵ)ζ_j} where K_N^(3) = K_N^(2)2N/π. To simplify the Fermionic component of (<ref>) and make apparent its form, we introduce ζ^T=(ζ_1^†, ζ_1^T, ζ_2^†, ζ_2^T) and then (<ref>) reads Δ_ϵ(M; x, S) = K^(3)_N∫ dx⃗_1 dx⃗_2 dζ dQ̂_F d^2u exp{ -1/2N Q_B^2 - i SB + ix Q_B + ϵ Q_B σ} exp{-NQ̂_F^2 - 2N u u̅} exp{1/2ζ^Tℳζ} = K^(3)_N∫ dx⃗_1 dx⃗_2 dQ̂_F d^2u exp{ -1/2N Q_B^2 - i SB + ix Q_B + ϵ Q_Bσ} exp{-NQ̂_F^2 - 2N u u̅} ℳ where the matrix ℳ is given by ℳ = ([ 0 A_1 - iu q_12^*; -A_1 0 - q_12 iu̅; iu q_12 0 A_2; -q_12^* -iu̅ -A_2 0 ]) and, by analogy with (4.107) in <cit.>, A_j = q_jj - i(x + i(-1)^j-1ϵ)+ 1/NB + iS, where q_ij are the entries of Q̂_F. To evaluate ℳ, we make repeated applications of the well-known result for block 2× 2 matrices consisting of N× N blocks: ([ A B; C D ]) = (A - BD^-1C)(D). This process quickly results in ℳ = (A_1A_2 - (uu̅ + q_12q̅_12)) = ([(Q̂_F - ix- ϵσ) - u̅u]I + (Q̂_F - ix- ϵσ)(1/NB + iS) + (1/NB + iS)^2) = ( G_1 + N^-1B + iS)( G_2 + N^-1B + iS) where we have chosen G_1, G_2 to be solutions to G_1G_2 = (Q̂_F - ix- ϵσ) - u̅u G_1 + G_2 = (Q̂_F - ix- ϵσ). 
Recalling the B has rank 2 we let O_B be the N× 2 matrix of the non-null eigenvectors of B and λ^(B)_1,2 be its non-null eigenvalues and use the determinantal identity found in equation (3) of <cit.> to write[Note that we here include explicitly the identity matrix symbols to make plain the dimension of the determinants.] (G_j I_N + N^-1B + iS) = (G_jI_N + iS)(I_2 + N^-1O_B^T(G_jI_N + iS)^-1O_Bdiag(λ^(B)_1, λ^(B)_2)). We would now like to apply the integral formula found in Appendix D of <cit.> to re-write the integrals over the N-dimensional vectors x⃗_1, x⃗_2 as a single integral over a 2× 2 symmetric matrix Q_B. However, the integrand does not only depend on x⃗_1, x⃗_2 through Q_B ≡([ x⃗_1^Tx⃗_1 x⃗_1^Tx⃗_2; x⃗_2^Tx⃗_1 x⃗_2^Tx⃗_2 ]) thanks to the dependence on the eigenvectors of B in (<ref>) and also in the term SB in (<ref>). Before addressing this problem, we will continue to manipulate the Q̂_F and u integrals along the lines of <cit.>. First make the change of variables Q̂_F ←Q̂_F + ix + ϵσ and x⃗_j ←Nx⃗_j in (<ref>) using (<ref>) to obtain Δ_ϵ(M; x, S) = K^(4)_N∫ dx⃗_1 dx⃗_2 dQ̂_F d^2u exp{ -N/2 Q_B^2 - iN SB + ixN Q_B + ϵ N Q_Bσ} exp{-NQ̂_F^2 - 2N (ix + ϵσ)Q̂_F - N(ix + ϵσ)^2 - 2N u u̅} ∏_j=1^2(G_j + B + iS) where K^(4)_N = N^NK_N^(3) and now the terms G_1, G_2 are given by the modified versions of (<ref>)-(<ref>): G_1G_2 = Q̂_F - u̅u G_1 + G_2 = Q̂_F . We now diagonalise the Hermitian matrix Q̂_F = Ûdiag(q_1, q_2)Û^† in (<ref>), but the term σQ̂_F is not unitarily invariant, so we follow <cit.> and introduce an explicit parametrization[<cit.> uses an incorrect parametrization with only two angles. The calculations are are invariant in the extra angles α,β and so this detail only matters if one is tracking the multiplicative constants, as we do here.] of the unitary matrix Û Û =e^iϕ̂/2([ e^iα̂/2 0; 0 e^-iα̂/2 ])([ cosθ̂ sinθ̂; -sinθ̂ cosθ̂ ])([ e^iβ̂/2 0; 0 e^-iβ̂/2 ]) where ϕ̂,α̂, β̂∈ [0,2π), θ̂∈ [0,π/2) and elementary calculations give the Jacobian factor |q_1 - q_2|^2 sin(2θ̂). Further brief elementary calculations give Q̂_Fσ = (q_2 - q_1)cos(2θ̂). and so, integrating out ϕ̂, α̂, β̂, Δ_ϵ(M; x, S) = K^(5)_N e^2N(x^2 - ϵ^2)∫ dx⃗_1 dx⃗_2 ∬_-∞^∞ dq_1dq_2 ∫ d^2u∫_0^π/2dθsin2θ̂ exp{ -N/2 Q_B^2 - iN SB + ixN Q_B + ϵ N Q_Bσ} exp{-N(q_1^2 + q_2^2) - 2Nix (q_1 + q_2) - 2Nϵ(q_2 - q_1)cos2θ̂ - 2N u u̅} ∏_j=1^2(G_j + B + iS)|q_1 - q_2|^2 with K^(5) = (2π)^3 K^(4)_N and now G_1G_2 = q_1q_2 - u̅u G_1 + G_2 = q_1 + q_2 . We form an Hermitian matrix R = ([ q_1 u̅; u q_2 ]) and so (<ref>) is rewritten as Δ_ϵ(M; x, S) = K^(6)_Ne^2N(x^2 - ϵ^2)∫ dx⃗_1 dx⃗_2 ∫ dR|R_11 - R_22|^2∫_0^π/2dθsin2θ̂ exp{ -N/2 Q_B^2 - iN SB + ixN Q_B + ϵ N Q_Bσ} exp{-N R^2 -2Nix R -2ϵ N(R_22 - R_11)cos2θ̂} ∏_j=1^2(G_j + B + iS) with K_N^(6) = 1/16π^2K_N^(5) and G_1G_2 = R G_1 + G_2 = R . The factor of (16π^2)^-1 comes from the change of variables (q_1, q_2, u, u̅) ↦ R. Indeed, clearly dq_1dq_2dudu̅ = Z^-1dR for some constant Jacobian factor Z. We can most easily determine Z by integrating against a test function: 4π Vol(U(2))/Z = 1/Z∫_Herm(2) dR e^-1/2 R^2 = ∬_-∞^∞ dq_1 dq_2 ∬_-∞^∞ du  du e^-1/2(q_1^2 + q_2^2 + 2uu̅)= 2π^2 Z = 2Vol(U(2))/π = 16π^2. We diagonalise R = Udiag(r_1, r_2)U^†, but again the integrand in (<ref>) is not unitarily invariant in R so we repeat the previous procedure using U =e^iϕ/2([ e^iα/2 0; 0 e^-iα/2 ])([ cosθ sinθ; -sinθ cosθ ])([ e^iβ/2 0; 0 e^-iβ/2 ]). 
Overall, integrating out ϕ, α, β, (<ref>) becomes Δ_ϵ(M; x, S) = K^(7)_Ne^2N(x^2 - ϵ^2)∬_0^π/2 dθ dθ̂∫ dx⃗_1 dx⃗_2 ∬_-∞^∞ dr_1dr_2 |r_1 - r_2|^4 sin2θcos^22θsin2θ̂ exp{ -N/2 Q_B^2 - iN SB + ixN Q_B + ϵ N Q_Bσ} exp{-N(r_1^2 + r_2^2) - 2Ni(x-iϵcos2θcos2θ̂)r_1     - 2Nix(x + iϵcos2θcos2θ̂) } ∏_j=1^2(G_j + B + iS) where K^(7) =(2π)^3 K^(6) and now G_1G_2 = r_1r_2, G_1 + G_2 = r_1 + r_2 {G_1, G_2} = {r_1, r_2}. We can now clearly take r_j = G_j without loss of generality. The terms (r_j + B + iS) and e^-iN SB depend on the eigenvectors of B and prevent an application of the integral formula of <cit.> as used by <cit.>. In fact, it is possible the adapt this integral formula for use in the presence of the term e^-iN SB, as seen in Lemma <ref>. Since S has all entries of order N^-1, we can expand the nuisance determinants: (r_j + B + iS) = ∏_i=1^2 (r_j + λ^(B)_i) (1 + o(1)). For this step to be legitimate in the sense of asymptotic expansions, we must have that the error term is uniformly small in the integration variables x⃗_1, x⃗_2, r_1, r_2, θ, θ̂. Note that the integrand in (<ref>) is analytic in r_1, r_2 and so we can deform the contours of integration from (-∞, ∞) to Γ, a contour that, say, runs from -∞ along the real line to -1 and then follows the unit semi-circle in the upper half plane to 1 before continuing to ∞ along the real line. We show an example contour in Figure <ref>. It is now clear that r_1, r_2 are bounded away from 0 and so the error terms in (<ref>) are uniform, so giving Δ_ϵ(M; x, S) = K^(7)_Ne^2N(x^2 - ϵ^2)∬_0^π/2 dθ dθ̂∫ dx⃗_1 dx⃗_2 ∬_-∞^∞ dr_1dr_2 |r_1 - r_2|^4 sin2θcos^22θsin2θ̂ exp{ -N/2 Q_B^2 - iN SB + ixN Q_B + ϵ N Q_Bσ} exp{-N(r_1^2 + r_2^2) - 2Ni(x-iϵcos2θcos2θ̂)r_1       - 2Nix(x + iϵcos2θcos2θ̂) } ∏_i,j=1^2(r_j + λ^(B)_i)(1 + o(1)) Lemma <ref> can now be applied: Δ_ϵ(M; x, S) = K^(8)_N e^2N(x^2 - ϵ^2) (1 + o(1)) ∬_0^π/2 dθ dθ̂∫_Sym_≥ 0(2) dQ_B ∬_Γ dr_1dr_2 cos^22θsin2θsin2θ̂ exp{ -N/2 Q_B^2+ ixN Q_B + ϵ N Q_Bσ} exp{-N(r_1^2 + r_2^2) - 2Ni(x-iϵcos2θcos2θ̂)r_1          - 2Nix(x + iϵcos2θcos2θ̂) } ∏_j=1^r(1 + 2is_j Q_B - 4p_11p_22s_j^2)^-1/2 ∏_i,j=1^2(r_j + λ^(B)_i)|r_1 - r_2|^4(r_1r_2)^N-2( Q_B)^N-3/2, where p_ij are the entries of the matrix Q_B and K^(8) = π^N π^-1/2/Γ(N/2)Γ(N-1/2) K^(7)_N. We now wish to diagonalise Q_B and integrate out its eigenvectors, but as before (around (<ref>)) the integrand is not invariant under the action of the orthogonal group on Q_B and so we instead diagonalise Q_B = Odiag(p_1, p_2)O^T and parametrize O as O = ([ cosθ' sinθ'; -sinθ' cosθ' ]) but we must be careful to choose domain of integration for θ and (p_1, p_2) such that the transformation is a bijection. Consider a general positive semi-definite symmetric matrix Q_B = ([ a c; c b ]). Solving for the eigenvalues gives two choices for (p_1, p_2) because of the arbitrary ordering of the eigenvalues. We want a simple product domain for the (p_1, p_2) integrals and both eigenvalues are non-negative, so we choose (p_1, p_2) ∈ (ℝ_≥ 0)^2. One can easily find that c = p_2 - p_1/2sin2θ a = p_1 + p_2 + (p_1 - p_2)cos2θ/2 b = p_1 + p_2 + (p_2 - p_1)cos2θ/2 and so we see immediately that the domain of integration of θ must be restricted to an interval of length π to obtain a bijection. But further, because of the chosen domain for (p_1, p_2) the quantity (p_1 - p_2) takes all values in ℝ and thus we must in fact restrict θ to, say, [0, π/2) to obtain a bijection. 
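As a quick numerical check of this parametrization (our own sketch, not part of the proof), one can confirm that the formulas for a, b, c above reproduce O diag(p_1, p_2) O^T and that the resulting positive semi-definite matrix has eigenvalues exactly {p_1, p_2}:

import numpy as np

rng = np.random.default_rng(0)

# Check (ours): with a, b, c as above, Q_B = [[a, c], [c, b]] equals O diag(p1, p2) O^T
# for O the rotation by theta in [0, pi/2), and has eigenvalues {p1, p2}.
for _ in range(5):
    p1, p2 = rng.exponential(size=2)            # p1, p2 >= 0
    th = rng.uniform(0, np.pi / 2)
    c = (p2 - p1) * np.sin(2 * th) / 2
    a = (p1 + p2 + (p1 - p2) * np.cos(2 * th)) / 2
    b = (p1 + p2 + (p2 - p1) * np.cos(2 * th)) / 2
    Q = np.array([[a, c], [c, b]])
    O = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
    assert np.allclose(O @ np.diag([p1, p2]) @ O.T, Q)
    assert np.allclose(np.sort(np.linalg.eigvalsh(Q)), np.sort([p1, p2]))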
The Jacobian of this transformation is |p_1 - p_2| and further p_11p_22 = (p_1cos^2θ' + p_2 sin^2θ')(p_2cos^2θ' + p_1sin^2θ') =(p_1^2 + p_2^2)(cosθ'sinθ')^2 + p_1p_2(cos^4θ' + sin^4θ') = 1/4sin^22θ' (p_1^2 + p_2^2) + 1/4(3 + 4cos4θ')p_1p_2 and so we get Δ_ϵ(M; x, S) = K^(8)_N e^2N(x^2 - ϵ^2) (1 + o(1))∭_0^π/2 dθ dθ'dθ̂∬_0^∞dp_1dp_2 ∬_Γ dr_1dr_2 |r_1 - r_2|^4(r_1r_2)^N-2(p_1p_2)^N-3/2cos^22θsin2θsin2θ̂ exp{ -N/2(p_1^2 + p_2^2) + iN(x-iϵcos2θ')p_1 + iN(x+iϵcos2θ')p_2 } exp{-N(r_1^2 + r_2^2) - 2Ni(x-iϵcos2θcos2θ̂)r_1       - 2Nix(x + iϵcos2θcos2θ̂) } ∏_i,j=1^2(r_j + p_i)J_1(p_1, p_2, θ'; {s_j}_j=1^r, N) where J_1(p_1, p_2, θ'; {s_j}_j=1^r, N) =∏_j=1^r(1 + 2is_j(p_1 + p_2) - s_j^2[sin^22θ' (p_1^2 + p_2^2) + (3 + 4cos4θ')p_1p_2 ])^-1/2. Now let us define the functions ψ^(±)_U(z; x; ϵ) = 1/2z^2 ± i(x - iϵ)z - 1/2log z ψ^(±)_L(z; x; ϵ) = 1/2z^2 ± i(x + iϵ)z - 1/2log z and also J_2(r_1, r_2, p_1, p_2) = |r_1 - r_2|^4 |p_1 - p_2| (r_1r_2)^-2 (p_1p_2)^-3/2 (r_1 + p_1)(r_1 + p_2)(r_2 + p_1)(r_2 + p_2) and then we finally rewrite (<ref>) as Δ_ϵ(M; x, S) = K^(8)_N e^2N(x^2 - ϵ^2)(1 + o(1))∭_0^π/2 dθ dθ'dθ̂∬_0^∞dp_1dp_2 ∬_Γ dr_1dr_2 J_1(p_1, p_2, θ'; S, N)J_2(r_1,r_2, p_1, p_2)cos^22θsin2θsin2θ̂ exp{-N(2ψ^(+)_L(r_1; x; ϵcos2θcos2θ̂) +2ψ^(+)_U(r_2; x; ϵcos2θcos2θ̂) +ψ^(-)_L(p_1; x; ϵcos2θ')+ψ^(-)_U(p_2; x; ϵcos2θ'))}. We will need the asymptotic behaviour of the constant K_N defined in Lemma <ref>. As N→∞ K_N∼ (-i)^N N^9/2/42π^5/2(2e)^N. Using Stirling's formula for the Gamma function gives K_N ∼N^N+3(-i)^N/π^3/2 N^-N/2 + 1/2(N-1)^-N/2 + 1 2^N/2 - 1/2 2^N/2 - 1 e^N/2 e^N/2 - 1/2(2π)^-1 = N^N+3(-i)^N/π^3/2 N^-N N^3/2 2^N 2^-5/2 e^N e^-1/2π^-1(N-1/N)^-N/2 + 1 ∼ (-i)^N N^9/2/42π^5/2(2e)^N. Building on Lemma <ref>, we can prove a generalisation of Theorem 2.8 from <cit.>, namely Theorem <ref>. * Combining Lemmata <ref> and <ref> and observing that the integrand in the Kac-Rice formula of Lemma <ref> is spherically symmetric, we obtain C_N^h(Nu) = (2(N-1)(H-1)H)^N-1/2ω_N e^-v⃗^2/2H/(2π H)^(N-1)/2_Ω_N∫_-∞^u_N dx  1/2πt e^-x^2/2t^2𝔼^N-1_GOE |(M - xI + S)| where u_N = uHN/2(N-1)(H-1), the variance t^2 = H/2(N-1)(H-1), ω_N = 2π^N/2/Γ(N/2) is the surface area of the N-1 sphere and S and v⃗ are defined in Lemma <ref>. Note that the first term in Ω_N comes from the expression (<ref>) and the third term from (<ref>) and (<ref>), i.e. this is the density of ∇ h evaluated at 0 as appears in Lemma <ref>. The conditions for Lemma <ref> are shown to be met in Lemma <ref>, so we obtain C_N^h(Nu) = Ω_N K_N-12(N-1)(H-1)/H(1 + o(1)) ∫_-∞^u_Ndx  1/2πlim_ϵ↘ 0∭_0^π/2 dθ dθ̂dθ' ∬_0^∞dp_1dp_2 ∬_Γ dr_1dr_2 J_1(p_1, p_2, θ'; {s_j}_j=1^r, N-1)J_2(r_1,r_2, p_1, p_2)cos^22θsin2θsin2θ̂ exp{-(N-1)(2ψ^(+)_L(r_1; x; ϵcos2θcos2θ̂) +2ψ^(+)_U(r_2; x; ϵcos2θcos2θ̂) +ψ^(-)_L(p_1; x; ϵcos2θ')+ψ^(-)_U(p_2; x; ϵcos2θ') - H+1/Hx^2)} = c_N,H∫_-∞^u_Ndx  lim_ϵ↘ 0∭_0^π/2 dθ dθ̂dθ'∬_0^∞dp_1dp_2 ∬_Γ dr_1dr_2 J_1(p_1, p_2, θ'; {s_j}_j=1^r, N-1)J_2(r_1,r_2, p_1, p_2)cos^22θsin2θsin2θ̂ exp{-(N-1)(2ψ^(+)_L(r_1; x; ϵcos2θcos2θ̂) +2ψ^(+)_U(r_2; x; ϵcos2θcos2θ̂) +ψ^(-)_L(p_1; x; ϵcos2θ')+ψ^(-)_U(p_2; x; ϵcos2θ') - H+1/Hx^2)} where we have defined the constant c_N,H = Ω_N K_N-1(H-1)(N-1)/Hπ(1 + o(1)). We pause now to derive the asymptotic form of c_N,H. The vector v⃗ was defined in Lemma <ref> and has entries of order N^-1/2, so v⃗^2 = 𝒪(1). 
Using Stirling's formula for the Gamma function Ω_N ∼ 2 (N-1)^N-1/2 (H-1)^N-1/2π^1/2 N^-N/2 + 1/2 2^N/2 - 1/2 e^N/2(2π)^-1/2e^-v⃗^2/2H =(H-1)^N-1/2 (2e)^N/2(N-1/N)^N-1/2e^-v⃗^2/2H ∼ (H-1)^N-1/2 (2e)^N/2 e^-1/2e^-v⃗^2/2H Ω_N (H-1)(N-1)/Hπ ∼ (H-1)^N/2 (2e)^N/2 e^-1/2 H^-1/2π^-1/2 (N-1)^1/2e^-v⃗^2/2H and so Lemma <ref> gives c_N,H ∼ (-i)^N-1 (N-1)^9/2/42π^5/2(2e)^N-1(H-1)^N/2 (2e)^N/2 e^-1/2 H^-1/2π^-1/2 (N-1)^1/2e^-v⃗^2/2H ∼(-i)^N-1 N^5/4π^3 H^1/2 (2e)^3/2(N-1) (H-1)^N/2e^-v⃗^2/2H. In the style of <cit.>, the multiple integral in (<ref>) can be written as an expansion over saddle points and saddle points of the integrand restricted to sections of the boundary. Recalling the form of ψ^(±)_U and ψ^(±)_L, we see that the integrand vanishes on the boundary and so we focus on the interior saddle points. Let us define the exponent function Φ(r_1, r_2, p_1, p_2, x; S, ϵ) = 2ψ^(+)_L(r_1; x, ϵ)+ 2ψ^(+)_U(r_2; x, ϵ) + ψ^(-)_L(p_1; x, ϵ) + ψ^(-)_U(p_2; x, ϵ) - (H+1)/H x^2 It is clear that the cosθ, cosθ̂ and cosθ' terms in the exponent of (<ref>) do not affect the saddle point asymptotic analysis, since we take the limit ϵ→ 0, and θ, θ̂, θ'∈ [0,π/2) and it is only the signs of the 𝒪(ϵ) terms that are significant. Therefore, to simplify the exposition, we will suppress these terms. The (r_1,r_2,p_1,p_2) components of Φ are of the form z↦ z ± i(x± iϵ) - 1/2z and so the only saddle in Φ restricted to those components is at r_1 = -i(x+iϵ) + (2-(x+iϵ)^2)^1/2/2 z^(+)_L r_2 = -i(x-iϵ) + (2-(x-iϵ)^2)^1/2/2 z^(+)_U p_1 = i(x+iϵ) + (2-(x+iϵ)^2)^1/2/2 z^(-)_L p_2 = i(x-iϵ) + (2-(x-iϵ)^2)^1/2/2 z^(-)_U. To deform the (r_1,r_2,p_1,p_2) contours through this saddle, we are required to choose a branch of the functions in (<ref> - <ref>). Each has branch points at ±2 + iϵ or ±2 -iϵ. Since the initial contour of x integration lies along the real line, we take the following branch cuts in the complex x plane and respective angle ranges (see Figure <ref>) [2 + iϵ, 2 + i∞],   [π/2, 5π/2] [2 - iϵ, 2 - i∞],   [-π/2, 3π/2] [-2 + iϵ, -2 + i∞],   [π/2, 5π/2] [-2 - iϵ, -2 - i∞],   [-π/2, 3π/2]. It is simple to compute ψ_U^(±)(z^(±)_U) and ψ_L^(±)(z^(±)_L): ψ_L^(+)(z^(+)_L) = 1/4(1+(x+iϵ)^2 + log 2) + 1/4log 2 +1/4i(x+iϵ)(2 - (x+iϵ)^2)^1/2 -1/2log[ -i(x+iϵ) + (2-(x+iϵ)^2)^1/2] ψ_U^(+)(z^(+)_U) = 1/4(1+(x-iϵ)^2 + log 2) + 1/4log 2 +1/4i(x-iϵ)(2 - (x-iϵ)^2)^1/2 - 1/2log[ -i(x-iϵ) + (2-(x-iϵ)^2)^1/2] ψ_L^(-)(z^(-)_L) = 1/4(1+(x+iϵ)^2 + log 2) + 1/4log 2 -1/4i(x+iϵ)(2 - (x+iϵ)^2)^1/2 -1/2log[ i(x+iϵ) + (2-(x+iϵ)^2)^1/2] ψ_U^(-)(z^(-)_U) = 1/4(1+(x-iϵ)^2 + log 2) + 1/4log 2 -1/4i(x-iϵ)(2 - (x-iϵ)^2)^1/2 - 1/2log[ i(x-iϵ) + (2-(x-iϵ)^2)^1/2]. Let us consider x still restricted to the real line. We are free to restrict to ϵ>0 and then x± iϵ lies just above (below) the real line. For x<-2 the angle from all four branch points is π and so we obtain Φ_(4)(x) lim_ϵ→ 0Φ(z_L^(+), z_U^(+), z_L^(-), z_U^(-), x; ϵ) = 3/2(1+x^2+log 2) + 3/2log 2 - 1/2xx^2 -2-2log[-ix + ix^2 - 2] - log[ix + ix^2 - 2] - H+1/Hx^2 =3/2(1+log 2) + H-2/2Hx^2 + 3/2log 2 - 1/2xx^2 -2-log[-ix + ix^2 - 2]    - log 2 =3/2(1+log 2)+ H-2/2Hx^2 + 1/2log 2 - 1/2xx^2 -2-log[-x + x^2 - 2]     - log i = 3/2(1+log 2) +H-2/2Hx^2+ I_1(x; 2) - log i However for -x < x < 2 the angles about the branch points are π, π, 2π, 0 in the order of (<ref>-<ref>). It follows that the square root terms in both of ψ^(±)_L(z^(±)_L) and both of ψ^(±)_U(z^(±)_U) have opposite signs and so Φ_(4)(x) = 3/2( 1+ log2) + H-2/2Hx^2- 3/2log(-2) + 3/2log 2 = 3/2( 1 + log2) + H-2/2Hx^2 - 3/2log(-1). 
Finally, the above reasoning can be trivially extended to x>2 to obtain Φ_(4)(x) = 3/2(1 + log2) + H-2/2Hx^2 + I_1(-x; 2) - logi. It is apparent from (<ref>)[Note that I_1(x;2) is monotonically decreasing on (-∞, -2].], (<ref>) and (<ref>) that the branch choice (<ref>-<ref>) and deforming through each of the saddles of in (r_1, r_2, p_1, p_2) gives a contour of steepest descent in x with the critical point being at x=0. We must also consider the end-point contributions from the p_1, p_2 each near 0, but we obviously need only consider one of them by interchangeability. For some appropriate length scale δ>0 the p_1=0 end-point integral is ∫_0^δ dp_1  exp{ -N( 1/2p_1^2 - ixp_1 - 1/2log p_1)} = ∫_0^δdp_1   p_1^N/2exp{-N(1/2p_1^2 - ixp_1)} where we have neglected the N-independent terms in the integrand and we shall for time-being not be concerned with any branch choices. Setting p = |x|Np_1 gives ∫_0^δ |x|Ndp   p^N/2 |x|^-N/2 - 1 N^-N/2 - 1e^isgn(x)p -1/2p^2 x^-2N^-1 ∼∫_0^∞dp   p^N/2 |x|^-N/2 - 1 N^-N/2 - 1 e^isgn(x)pGiven N^-1≪δ≪ N^-1/2. = (sgn(x)i)^1 - N/2 |x|^-N/2 -1N^-N/2 - 1Γ(N/2 - 1) = (i)^1 - N/2 |x|^-N/2 -1N^-N/2 - 1(N/2e)^N/2Nπ(1 + o(1))By Stirling's approx. = (i)^1 - N/2 |x|^-1 N^-1/2π(1+o(1))exp{-N/2(log|x| + log 2 + 1)} We abuse notation and write (<ref>) as ψ_L^(-)(0) = ψ_U^(-)(0) = 1/2(log|x| + log2 + 1) and therefore for x< - 2 Φ_(3)lim_ϵ→ 0Φ(z^(+)_L, z^(+)_U, 0, z_U^(-), x; ϵ) = lim_ϵ→ 0Φ(z^(+)_L, z^(+)_U, z_L^(-), 0, x; ϵ) = 5/4(1+x^2 + log2) - H+1/Hx^2+5/4log 2- 3/4xx^2 - 2 - 3/2log[-x + x^2 -2]     - 1/2log2 - 5/4logi + 1/2(log|x| + 1 + log2) = 5/4(1+x^2+log2) - H+1/Hx^2+ 1/2(1 + log2 + log|x|) + 3/2I_1(x; 2) - 5/4logi and similiarly Φ_(2)lim_ϵ→ 0Φ(z^(+)_L, z^(+)_U, 0, 0, x; ϵ) = (1+x^2+log2) - H+1/Hx^2+ (1 +log2 + log|x|) + 2I_1(x; 2) - 3/2logi. (<ref>), (<ref>) and (<ref>) make it apparent that our choice of branch gives steepest descent contours in the complex x plane from deforming through all 4 saddles and also from deforming through only 3 or 2 of them provided that the x contour lies within (-∞, -2), i.e. provided that u< - E_∞. Further, numerics easily show that Φ_(4) < Φ_(3) < Φ_(2) We are thus able to write down the leading order asymptotics for (<ref>) for all real u coming either from the end-point x=2u/E_∞ or the critical point x=0. We begin with u< -E_∞ by using (<ref>): 1/Nlog𝔼C^h_N(Nu) ∼ -3/2log2 -3/2 -H-2/2HHu^2/2(H-1) - I_1(u; E_∞) + logi+ 1/Nlog c_N,H ∼1/2log(H-1) - H-2/4(H-1)u^2 - I_1(u; E_∞) since by (<ref>) logc_N,H∼1/2Nlog(H-1) + 3/2(N-1)(1 + log2) + (N-1)log(-i). For -E_∞≤ u < 0 we use (<ref>): 1/Nlog𝔼C^h_N(Nu) ∼ -3/2log2 -3/2 -H-2/2HHu^2/2(H-1) + 3/2log(-1)+ 1/Nlog c_N,H ∼1/2log(H-1) - H-2/4(H-1)u^2 since 3/2log(-1) = log((-1)^1/2) =logi. Finally, for u≥ 0 the leading contribution comes from the critical point, so 1/Nlog𝔼C^h_N(Nu) ∼ -3/2log2 -3/2 + 3/2log(-1)+ 1/Nlog c_N,H ∼1/2log(H-1). We are in-fact able to obtain the exact leading order term in the expansion of 𝔼C^h_N(Nu) in the case u<-E_∞, namely Theorem <ref>. * We begin by deriving an alternative form for h. For v>2 h(v)^2 = |v - 2| + | v + 2| + 2|v^2 - 2|^1/2/|v^2 -2|^1/2 = 2( v + |v^2 - 2|^1/2)|v^2 - 2|^-1/2 h(v) = 2( v + |v^2 - 2|^1/2)^1/2|v^2 - 2|^-1/4 =2|-v + |v^2 - 2|^1/2|^-1/2|v^2 - 2|^-1/4. This proof now proceeds like that of Theorem <ref> except that we are required to keep track of the exact factors in (<ref>) and evaluate the 𝒪(1) integrals arising from the saddle point approximation. 
First note that (using primes to denote z derivatives) ψ^(±)_U,L”(z ;x; ϵ) = 1 + 1/2z^2 and so we abbreviate ψ^(±)_U,L”= ψ”. We get the following useful relation (now letting ϵ→ 0 implicitly for simplicity of exposition) ψ”(z^(±)_U,L) = (z^(±)_U,L)^-2(1 ∓ ix z^(±)_U,L) =1/2(z^(±)_U, L)^-2(2 - x^2 ± xx^2 - 2) = ix^2 - 2(z^(±)_U, L)^-1 where, using our branch choice shown in Figure <ref>, for x<-2 the saddle points are z_U,L^(±) = ∓ ix + ix^2 - 2/2. We recall the central expression (<ref>) from the proof of Theorem <ref>: C_N^h(Nu) = c_N,H∫_-∞^u_Ndx  lim_ϵ↘ 0 ∭_0^π/2 dθ dθ̂dθ'∬_0^∞dp_1dp_2 ∬_Γ dr_1dr_2 J_1(p_1, p_2, θ'; {s_j}_j=1^r, N-1)J_2(r_1,r_2, p_1, p_2)cos^22θsin2θsin2θ̂ exp{-(N-1)(2ψ^(+)_L(r_1; x; ϵcos2θcos2θ̂) +2ψ^(+)_U(r_2; x; ϵcos2θcos2θ̂) +ψ^(-)_L(p_1; x; ϵcos2θ')+ψ^(-)_U(p_2; x; ϵcos2θ') - H+1/Hx^2)} and we recall the expressions for J_1, J_2 from Lemma <ref>: J_1(p_1, p_2, θ'; {s_j}_j=1^r, N) =(1 + iN^-1/2s_2(p_1 + p_2) - N^-1s_2^2[1/4sin^22θ' (p_1^2 + p_2^2) + 1/4(3 + 4cos4θ')p_1p_2 ])^-1/2       ·(1 + is_1(p_1 + p_2) - s_1^2[1/4sin^22θ' (p_1^2 + p_2^2) + 1/4(3 + 4cos4θ')p_1p_2 ])^-1/2, J_2(r_1, r_2, p_1, p_2) = (r_1 + p_1)(r_2 + p_1)(r_1 + p_2)(r_2 + p_2)|r_1 - r_2|^4 |p_1-p_2| (r_1r_2)^-2 (p_1p_2)^-3/2. We begin by evaluating J_1 to leading order at the saddle points: 1/2sin^2 2θ' (z^(-))^2 + 1/4(3+4cos 4θ') (z^(-))^2 ≡ q(θ') (z^(-))^2 J_1(z^(-), z^(-), θ'; {s_j}_j=1^r, N) ∼(1 + 4iz^(-)s_1 - 2q(θ')(z^(-))^2s_1^2)^-1/2. Recalling x + x^2 - 2 = -2/-x + x^2 - 2 = -h(x)^2/2x^2 - 2,       (z^(-))^2 = -1/2x^2 - 2( x + x^2 - 2) we obtain J_1 ∼ 1 + 1/2s_1x^2 - 2h(x)^2 - s_1^2 q(θ')|x^2 - 2|h(x)^2 ≡ j(x, s_1, θ'). We see that J_2(z^(+), z^(+), z^(-), z^(-)) = 0 and so we are required to expand J_2 in the region of (r_1, r_2, p_1, p_2) = (z^(+), z^(+), z^(-), z^(-)). Following standard steepest descents practice, the integration variables r_1, r_2, p_1, p_2 are replaced by scaled variables in the region of the saddle point, i.e. r_i = z^(+) + (N-1)^-1/2|ψ^(+)”(z^(+))|^-1/2ρ_i p_i = z^(-) + (N-1)^-1/2|ψ^(-)”(z^(-))|^-1/2π_i and so J_2(r_1, r_2, p_1, p_2) = (N-1)^-5/2|x^2 - 2|^2 (z^(+))^-4(z^(-))^-3 |ψ^(-)”(z^(-))|^-1/2|ψ^(+)”(z^(+))|^-2|ρ_1 - ρ_2|^4 |π_1 -π_2| + o(N^-5/2). Piecing these components together gives J_2 J_1 dr_1 dr_2 dp_1dp_2 = (N-1)^-9/2j(x, s_1, θ') |x^2 - 2|^2            |ψ^(-)”(z^(-))|^-3/2 |ψ^(+)”(z^(+))|^-3(z^(+))^-4(z^(-))^-3            |ρ_1 - ρ_2|^4 |π_1 -π_2| dρ_1 dρ_2 dπ_1 dπ_2 = (N-1)^-9/2j(x, s_1, θ') |x^2 - 2|^-1/4 (z^(+))^-1(z^(-))^-3/2            |ρ_1 - ρ_2|^4 |π_1 -π_2| dρ_1 dρ_2 dπ_1 dπ_2 = 2(N-1)^-9/2j(x, s_1, θ') |x^2 - 2|^-1/4 (z^(-))^-1/2            |ρ_1 - ρ_2|^4 |π_1 -π_2| dρ_1 dρ_2 dπ_1 dπ_2 = 2^3/2(N-1)^-9/2j(x, s_1, θ') |x^2 - 2|^-1/4(x + x^2 - 2)^-1/2            |ρ_1 - ρ_2|^4 |π_1 -π_2| dρ_1 dρ_2 dπ_1 dπ_2. Recalling the expression (<ref>), we can then write J_2 J_1 dr_1 dr_2 dp_1dp_2 = 2^3/2(N-1)^-9/2j(x, s_1, θ') h(-x) 2^- 1|ρ_1 - ρ_2|^4 |π_1 -π_2| dρ_1 dρ_2 dπ_1 dπ_2 = 2^1/2(N-1)^-9/2j(x, s_1, θ') h(-x) |ρ_1 - ρ_2|^4 |π_1 -π_2| dρ_1 dρ_2 dπ_1 dπ_2 and so using (<ref>), we obtain C_N^h(Nu) ∼2^-3/2N^1/2/π^3He^-v⃗^2/2HY_2^(4)/8 Y_2^(1)∬_0^π/2dθ dθ̂ cos^2 2θsin 2θsin2θ̂    H-1∫_0^π/2 dθ' ∫_-∞^2u/E_∞N/N-1 dx   h(-x)j(x, s_1, θ')e^(N-1)Θ_H(2^-1/2E_∞x) where we have defined the integrals Y_n^(β) = ∫_ℝ^n dy⃗  e^-1/2y⃗^2 |Δ(y⃗)|^β and Δ is the Vandermonde determinant. Recall that, as in Theorem <ref>, the x integration contour in (<ref>) is a steepest descent contour and so the leading order term comes from the end point. 
Now (N-1) Θ_H(N/N-1 u) = (N-1)1/2log(H-1) - NH-2/4(H-1)u^2 - (N-1)I_1(N/N-1u; E_∞) = (N-1)1/2log(H-1) - NH-2/4(H-1)u^2 - (N-1)I_1(u; E_∞) - N-1/2NuI_1'(u; E_∞) + 𝒪(N^-1) = NΘ_H(u) - 1/2log(H-1) + I_1(u; E_∞) - 1/2uI_1'(u; E_∞) + 𝒪(N^-1) and so C_N^h(Nu) ∼2^-3/2N^-1/2/24π^3He^-v⃗^2/2H Y_2^(4) Y_2^(1)(∫_0^π/2 dθ'j(-v, s_1, θ')) h(v) e^NΘ_H(u)e^I_1(u; E_∞) - 1/2u I_1'(u; E_∞)/H-2/2(H-1)u + I_1'(u; E_∞) where we have defined (c.f. <cit.> Theorem 2.17) v = -2uE_∞^-1. It now remains only to evaluate the various constants in (<ref>) where possible. Firstly observe Y_2^(1) = 2π𝔼_X_1, X_2i.i.d.∼𝒩(0,1) |X_1 - X_2| = 2π𝔼_X∼𝒩(0, 2) |X| = 2π∫_0^∞ xe^-x^2/4 = 4π and similarly Y_2^(4) = 2π𝔼_X_1, X_2i.i.d.∼𝒩(0,1) (X_1 - X_2)^4 = 2π𝔼_X∼𝒩(0, 2) X^4 = 24 π. For convenience we define T(v, s_1) = 2/π∫_0^π/2j(-v, s_1, θ')dθ', and then collating our results: C_N^h(Nu) ∼N^-1/2/2π He^-v⃗^2/2HT(v, s_1) h(v) e^NΘ_H(u)e^I_1(u; E_∞) - 1/2u I_1'(u; E_∞)/H-2/2(H-1)u + I_1'(u; E_∞). Having completed the proof of Theorem <ref>, we can now explain why this result generalises only part (a) of the analogous Theorem (2.17) from <cit.>, namely only the case u<-E_∞. Recall that, following standard steepest descent practice, we introduced scaled integration variables in the region of the saddle point (<ref>)-(<ref>) and so arrived at (<ref>) with the constant factors Y_2^(1), Y_2^(4) resulting from the Laplace approximation integrals over the scaled variables. If we take -E_∞ < u < 0, say, then z^(+)_U + z^(-)_L = 0 and z^(+)_L + z^(-)_U = 0 and so it is the terms (r_1 + p_2), (r_2 + p_1) that vanish at the saddle point rather than |r_1 - r_2|^4 and |p_1 - p_2|. It follows that the terms Y_2^(1), Y_2^(4) are replaced by the integrals ∫_ℝ dπ_1 dπ_2dρ_1dρ_2   e^-1/2(π_1^2 + π_2^2 + ρ_1^2 + ρ_2^2) (ρ_1 + π_2)(ρ_2 + π_1) = 0. It is therefore necessary to keep terms to at least the first sub-leading order in the expansion of J_1J_2 around the saddle point, however we cannot do this owing the presence of the o(1) term in the constant c_N,H as defined in (<ref>) which we cannot evaluate. Note that setting all the ρ_ℓ^(N)=0 gives v⃗ = 0, S=0, hence s_1=0 and so T = 1. Consequently (<ref>) recovers the exact spherical H-spin glass expression in part (a) of Theorem 2.17 in <cit.>. The function h(v) shows up in <cit.> in the asymptotic evaluation of Hermite polynomials but arises here by an entirely different route. §.§ Complexity results with prescribed Hessian signature The next theorem again builds on Lemma <ref> to prove a generalisation of Theorem 2.5 from <cit.>. In fact, we will need a modified version of Lemma <ref> which we now prove. Let S be a rank 2 N× N symmetric matrix with non-zero eigenvalues {s_j}_j=1^2, where and s_j = 𝒪(1). Let x<-2 and let M denote an N× N GOE matrix with respect to whose law expectations are understood to be taken. 
Then [|(M - xI + S)| [x (M+S)∈{k-1, k, k+1}]] ≤   υ_U K_N e^2Nx^2(1 + o(1))e^-N(k-1)I_1(x;2)lim_ϵ↘ 0∭_0^π/2 dθ dθ̂dθ'∬_0^∞dp_1dp_2 ∬_Γ dr_1dr_2          J_1(p_1, p_2, θ'; {s_j}_j=1^r, N)J_2(r_1,r_2, p_1, p_2)cos^22θsin2θsin2θ̂         exp{-N(2ψ^(+)_L(r_1; x; ϵcos2θcos2θ̂) +2ψ^(+)_U(r_2; x; ϵcos2θcos2θ̂)               +ψ^(-)_L(p_1; x; ϵcos2θ')+ψ^(-)_U(p_2; x; ϵcos2θ'))} and [|(M - xI + S)| [x (M+S)∈{k-1, k, k+1}]] ≥   υ_L K_N e^2Nx^2(1 + o(1))e^-N(k+1)I_1(x;2)lim_ϵ↘ 0∭_0^π/2 dθ dθ̂dθ'∬_0^∞dp_1dp_2 ∬_Γ dr_1dr_2          J_1(p_1, p_2, θ'; {s_j}_j=1^r, N)J_2(r_1,r_2, p_1, p_2)cos^22θsin2θsin2θ̂         exp{-N(2ψ^(+)_L(r_1; x; ϵcos2θcosθ̂) +2ψ^(+)_U(r_2; x; ϵcos2θcosθ̂)               +ψ^(-)_L(p_1; x; ϵcos2θ')+ψ^(-)_U(p_2; x; ϵcos2θ'))} where the functions J_1, J_2, the constant K_N and the functions ψ^(±)_U,L are defined as in Lemma <ref>, and the υ_L, υ_U are some constants independent of N. A more general version of this lemma holds with S having any fixed rank r. In that case, one considers [|(M - xI + S)| [x (M+S)∈{k-(r-1),…, k, …, k+(r-1)}]] and the statement and proof of the result are immediate extensions of what is given here. We omit this generality, since it is not required here. This proof is largely the same as that of Lemma <ref>. The first difference arises at (<ref>), where we are required to compute [e^-i MA[x(M+S)=k]]. As will become apparent towards the end of this proof, we do not know how to maintain the exact equality constraint[See Remark <ref> below.] on index when S≠ 0, hence the slightly relaxed results that we are proving, however we will proceed by performing the calculation for S=0 and then show that S can be reintroduced one eigendirection at a time. As in the proof of Theorem A.1 in <cit.>, we split this expectation by fixing a bound, R, for the largest eigenvalue, i.e. [e^-i MA[x(M)=k]] = [e^-i MA[x(M)=k, max{|λ_i(M)|}_i=1^N ≤ R]] + [e^-i MA[x(M)=k, max{|λ_i(M)|}_i=1^N > R]] We will focus initially on the first expectation on the RHS of (<ref>) and deal with the second term later. Let us abbreviate the notation using ℐ_R(M) = {max{|λ_i(M)|}_i=1^N ≤ R}. Recall that A has finite rank and note that A is symmetric without loss of generality, since MA + A^T/2 = 1/2( MA + MA^T )= 1/2( MA + AM^T) = MA and hence A=diag(a_1, …, a_r_A, 0 …, 0) without loss of generality. We begin by factorising the symmetric matrix M in the GOE integral: _M[ e^-i MA[x(M)=k, ℐ_R(M)]] = ∫dμ_E(Λ)/Z_N[-R ≤λ_1 …≤λ_k ≤ x≤λ_k+1≤…λ_N≤ R] ∫ d μ_Haar(O) e^-i∑_j=1^r_A a_j o⃗_j^TΛo⃗_j where μ_E is the un-normalised joint density of ordered GOE eigenvalues, μ_Haar is the Haar measure on the orthogonal group O(N), o⃗_j are the rows of the orthogonal matrix O and Z_N is normalisation for the ordered GOE eigenvalues given by the Selberg integral: Z_N = 1/N! (22)^N N^-N(N+1)/4∏_i=1^NΓ(1 + i/2). 
Much like the proof of Theorem A.1 in <cit.>, we proceed by splitting the eigenvalues in (<ref>) to enforce the constraint given by the indicator function: _M[ e^-i MA[x(M)=k,ℐ_R(M)]] = ∫ dμ_Haar(O)1/Z_N∫_[-R, x]^k∏_i=1^k( dλ_i e^-Nλ^2_i/2)Δ({λ_i}_i=1^k)[λ_1 ≤…≤λ_k] ∫_(x,R]^N-k∏_i=k+1^N( dλ_i e^-Nλ^2_i/2)Δ({λ_i}_i=k+1^N)[λ_k+1≤…≤λ_N] e^-i∑_j=1^r_A a_j o⃗_j^TΛo⃗_jexp(∑_j=1^k∑_ℓ=k+1^N log|λ_j - λ_ℓ|) = ∫ dμ_Haar(O)∫_[-R, x]^k∏_i=1^k( dλ_i e^-Nλ^2_i/2)Δ({λ_i}_i=1^k) Z_N-k/k!Z_N 1/Z_N-k(N-k)!∫_(x,R]^N-k∏_i=k+1^N( dλ_i e^-Nλ^2_i/2)Δ({λ_i}_i=k+1^N) e^-i∑_j=1^r_A a_j o⃗_j^TΛo⃗_jexp(∑_j=1^k∑_ℓ=k+1^N log|λ_j - λ_ℓ|) = ∫_[-R_N, x_N]^k∏_i=1^k( dλ_i e^-(N-k)λ^2_i/2)Δ({λ_i}_i=1^k) ∫_(x_N,R_N]^N-k dμ̅_E(Λ_N-k) ∫ dμ_Haar(O) e^-i∑_j=1^r_AN-k/N a_j o⃗_j^TΛo⃗_j exp(∑_j=1^k∑_ℓ=k+1^N log|λ_j - λ_ℓ|) Z_N-k/k!Z_N(N-k/N)^N + N(N+1)/2 where x_NN/N-kx, R_NN/N-kR and μ̅_E is the normalised joint density of un-ordered GOE eigenvalues. We will first need to deal with the Itzykson-Zuber integral in (<ref>) before dealing with the eigenvalue integrals. We follow <cit.>, in particular the proof of Theorem 7 therein. We have the well-known result (Fact 8 in <cit.>) that in the sense of distributions (o⃗_1, …, o⃗_r_A) ∼(g̃⃗̃_1/||g̃⃗̃_1||,…, g̃⃗̃_r_A/||g̃⃗̃_r_A||) where the (g̃⃗̃_j)_j=1^r_A are constructed via the Gram-Schmidt process from (g⃗_j)_j=1^r_Ai.i.d.∼𝒩(0, 1). (<ref>) exactly gives ∫ dμ_Haar(O) e^-i∑_j=1^r_AN-k/N a_j o⃗_j^TΛo⃗_j = ∫∏_j=1^r_Adg⃗_j/2π^N e^-g⃗_j^2/2exp(-iN-k/N∑_j=1^r_A a_j g̃⃗̃_j^TΛg̃⃗̃_j/ ||g̃⃗̃_j||^2) and we will now seek to replace the g̃⃗̃_j with g⃗_j via appropriate approximations. Introduce the event B_N(υ) {| N^-1⟨g⃗_i, g⃗_j⟩ - δ_ij| ≤ N^-υ,     1≤ i, j ≤ r_A} and then from <cit.> we immediately conclude that under the i.i.d Gaussian law of the (g⃗_j)_j=1^r_A the complementary event has low probability: ℙ(B_N(υ)^c) =𝒪( C(υ) e^-α N^1-2υ) where α, C(υ) > 0 and we take 0<υ < 1/2 to make this statement meaningful. This enables us to write ∫ dμ_Haar(O) e^-i∑_j=1^r_AN-k/N a_j o⃗_j^TΛo⃗_j = (1 + 𝒪(e^-α N^1-2υ))∫∏_j=1^r_Adg⃗_j/2π^N e^-g⃗_j^2/2exp(-iN-k/N∑_j=1^r_A a_j g̃⃗̃_j^TΛg̃⃗̃_j/ ||g̃⃗̃_j||^2){B_N(υ)}. Again, directly from <cit.>, given B_N(υ) we have ||g̃⃗̃_j - g⃗_j|| ≤ N^1/2 - υ/2 and therefore ||g̃⃗̃_j||^2 = N[ 1 + N^-1(||g̃⃗̃_j||^2 - ||g⃗_j||^2) + (N^-1||g⃗_j||^2 - 1)] = N( 1 + 𝒪(N^-υ) ) and g̃⃗̃_j^TΛg̃⃗̃_̃j̃ =g⃗^TΛg⃗ + ∑_i=1^N (g̃_i - g_i)^2λ_i + 2∑_i=1^N g_i(g̃_i - g_i)λ_i  |g̃⃗̃_j^TΛg̃⃗̃_j/||g̃⃗̃_j||^2 - g⃗_j^TΛg⃗_j/||g⃗_j||^2| ≲ N^-υ/2||Λ||_∞. We see therefore that, in approximating the {g̃⃗̃_j}_j by {g⃗_j}_j in (<ref>) we introduce an error term in the exponential that is uniformly small in the integration variables {g⃗_j}_j. Combining (<ref>), (<ref>) and (<ref>) and noting that ||Λ||_∞ = R_N ∼ R under the eigenvalue integral in (<ref>) gives ∫ dμ_Haar(O) e^-i∑_j=1^r_AN-k/N a_j o⃗_j^TΛo⃗_j = (1 + 𝒪(N^-υ/2))∫∏_j=1^r_Adg⃗_j/2π^N e^-g⃗_j^2/2exp(-iN-k/N∑_j=1^r_A a_j g⃗_j^TΛg⃗_j/N ( 1 + 𝒪(N^-υ))) = ∏_j=1^r_A∏_i=1^N(1 + 2iN^-1a_jλ_i)^-1/2(1 + 𝒪(N^-υ/2)) = exp{-N-k/2∑_j=1^r_A∫ dμ̂_N-k(z) log(1 + 2iN^-1a_j z)} exp{-1/2∑_j=1^r_A∑_i=1^klog(1+2iN^-1a_jλ_i) }(1 + 𝒪(N^-υ/2)) where we have defined μ̂_N-k = 1/N-k∑_i=k+1^N δ_λ_i. 
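The essential content of this approximation step can be checked numerically; the following Monte Carlo sketch (ours, with arbitrary small parameter values) compares the spherical average of exp(-i a o⃗^TΛo⃗) against the product formula for a single row of the orthogonal matrix:

import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check (ours) of  E_{o uniform on S^{N-1}} exp(-i a o^T Lambda o)
#                            ≈ prod_i (1 + 2 i a lambda_i / N)^{-1/2},
# the single-row version of the Haar-to-Gaussian approximation used above.
N, a, n_samples = 50, 0.7, 200_000
lam = rng.standard_normal(N)                       # a fixed spectrum for Lambda

g = rng.standard_normal((n_samples, N))
o = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform points on the sphere
mc = np.mean(np.exp(-1j * a * (o ** 2 @ lam)))     # o^T Lambda o = sum_i lambda_i o_i^2

approx = np.prod((1 + 2j * a * lam / N) ** -0.5)
print(mc, approx)    # the two complex numbers should agree closely for moderate N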
Following <cit.>, we now introduce the following function Φ(z, μ) = -z^2/2 + ∫ dμ(z') log|z-z'| and so and then (<ref>) and (<ref>) can be rewritten as _M[ e^-i MA[x(M)=k,ℐ_R(M)]] = ∫_[-R_N, x_N]^k∏_i=1^k dλ_i  Δ({λ_j}_j=1^k)exp{-1/2∑_j=1^r_A∑_i=1^klog(1+2iN^-1a_jλ_i) }(1 + 𝒪(N^-υ/2)) ∫_(x_N, R_N]^N-k dμ̅_E(Λ_N-k) exp{-N-k/2∑_j=1^r_A∫ dμ̂_N-k(z) log(1 + 2iN^-1a_j z)} exp((N-k)∑_j=1^k Φ(λ_j, μ̂_N-k)) Z_N-k/k!Z_N(N-k/N)^N + N(N+1)/2. We now appeal to the Coulomb gas method <cit.> and in particular the formulation found in <cit.>. We replace the joint integral of N-k eigenvalues in (<ref>) with a functional integral over the continuum eigenvalues density: ∫_(x_N, R_N]^N-k dμ̅_E(Λ_N-K)exp((N-k)∑_j=1^k Φ(λ_j, μ̂_N-k))exp{-N-k/2∑_j=1^r_A∫ dμ̂_N-k(z)log(1 + 2ia_j z/N)} = ∫𝒟[μ]e ^-N^2 𝒮_x[μ]exp((N-k)∑_j=1^k Φ(λ_j, μ))exp{-N-k/2∑_j=1^r_A∫ dμ(z) log(1 + 2ia_j z/N)} where the action is defined as 𝒮_x[μ] = 1/2∫ dz μ(z) z^2 - ∬_z≠ z dzdz'μ(z)μ(z') log|z-z'| + A_1(∫ dzθ(R_N - z)μ(z) - 1) + A_2(∫ dz μ(z)θ(z-x) - 1) - Ω where θ is the Heaviside step function, Ω is the constant resulting from the normalisation of the eigenvalue joint density and A_1,A_2 are Lagrange multipliers. Owing to the N^2 rate in (<ref>), the integral concentrates around the minimiser of the action. Since x< -2 and we have chosen R>|x|, it is clear following <cit.> that the semi-circle law μ_SC(z) = π^-12 - z^2 minimises this action and further that 𝒮_x[μ_SC] = 0, so we have ∫𝒟[μ]e ^-N^2 𝒮_x[μ]exp((N-k)∑_j=1^k Φ(λ_j, μ))exp{-N-k/2∑_j=1^r_A∫ dμ(z) log(1 + 2iN^-1a_j z)} = ∫_B_δ(μ_SC)𝒟[μ]e ^-N^2 𝒮_x[μ]exp((N-k)∑_j=1^k Φ(λ_j, μ))exp{-N-k/2∑_j=1^r_A∫ dμ(z) log(1 + 2iN^-1a_j z)}      + e^-N^2 c_δ𝒪(1) where δ=𝒪(N^-1) and c_δ>0 is some constant. Performing the usual Laplace method expansion of the action in (<ref>) and re-scaling the first non-vanishing derivative to be 𝒪(1), it is clear that the action only contributes a real factor of 𝒪(1) that is independent of the dummy integration variables x⃗_1, x⃗_2, ζ_1, ζ_1^†, ζ_2, ζ_2^† and the other eigenvalues λ_1,…, λ_k and can therefore be safely summarised as 𝒪(1). Whence ∫𝒟[μ]e ^-N^2 𝒮_x[μ]exp((N-k)∑_j=1^k Φ(λ_j, μ))exp{-N-k/2∑_j=1^r_A∫ dμ(z) log(1 + 2iN^-1a_j z)} = 𝒪(1)exp((N-k)∑_j=1^k Φ(λ_j, μ_SC))exp{-N-k/2∑_j=1^r_A∫ dμ_SC(z) log(1 + 2iN^-1a_j z)}      + e^-N^2 c_δ𝒪(1). Now elementary calculations give, noting that the integrand is uniformly convergent in N owing to the compact support of μ_SC, ∫ dμ_SC(z) log(1 + 2iN^-1a_j z) = -2ia_j/N∫ dμ_SC(z) z + 2a^2_j/N^2∫ dμ_SC(z) z^2 + 𝒪(a_j^3N^-3) = a^2_j/N^2 ( 1 + 𝒪(a_jN^-1)) N-k/2∑_j=1^r_A∫ dμ_SC(z) log(1 + 2iN^-1a_j z) = A^2/2N ( 1 + ||A||_∞𝒪(N^-1)) where we have implicitly assumed that the spectral radius ||A||_∞≪ N. This constraint can be introduced by restricting the domains of integration for x⃗_1 and x⃗_2 in the anaologue of (<ref>) from all of ℝ^N to balls of radius o(N). It is a standard result for Gaussian integrals that this can be achieved at the cost of an exponentially smaller term. 
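For illustration only (ours, not needed for the argument), the concentration of the GOE spectrum on the semicircle law invoked here is easy to see numerically in the normalisation E[M_ij^2] = (1+δ_ij)/(2N) used throughout:

import numpy as np

rng = np.random.default_rng(2)

# Illustration (ours): with E[M_ij^2] = (1 + delta_ij)/(2N), the GOE eigenvalue
# histogram matches the semicircle density sqrt(2 - z^2)/pi on [-sqrt(2), sqrt(2)].
N = 2000
A = rng.standard_normal((N, N))
M = (A + A.T) / (2 * np.sqrt(N))          # off-diagonal variance 1/(2N), diagonal 1/N

evals = np.linalg.eigvalsh(M)
hist, edges = np.histogram(evals, bins=60, density=True)
centres = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.clip(2 - centres ** 2, 0, None)) / np.pi
print(np.max(np.abs(hist - semicircle)))   # small for large N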
Summarising (<ref>), (<ref>), (<ref>) and (<ref>): _M[ e^-i MA[x(M)=k,ℐ_R(M)]] = ∫_[-R_N, x_N]^k∏_i=1^k dλ_i  Δ({λ_j}_j=1^k)exp{-1/2∑_j=1^r_A∑_i=1^klog(1+2iN^-1a_jλ_i) }exp((N-k)∑_j=1^k Φ(λ_j, μ_SC)) e^- A^2/2N(𝒪(1) + 𝒪(N^-υ/2) + 𝒪(N^-1)||A||_∞) Z_N-k/k!Z_N(N-k/N)^N + N(N+1)/2 = ∫_[-R_N, x_N]^k∏_i=1^k dλ_i  Δ({λ_j}_j=1^k)exp((N-k)∑_j=1^k Φ(λ_j, μ_SC)) e^- A^2/2N(𝒪(1) + 𝒪(N^-υ/2) + 𝒪(N^-1)||A||_∞) Z_N-k/k!Z_N(N-k/N)^N + N(N+1)/2 = ∫_[-R_N, x_N]^k∏_i=1^k dλ_i  Δ({λ_j}_j=1^k)exp((N-k)∑_j=1^k Φ(λ_j, μ_SC)) e^- A^2/2N𝒪(1) Z_N-k/k!Z_N(N-k/N)^N + N(N+1)/2 where in the second equality we have Taylor expanded the remaining logarithm and summarised the result with another factor of (1 + 𝒪(N^-1)||A||_∞). We now wish to follow the proof of Theorem A.1 in <cit.> and use Δ({λ_j}_j=1^k) ≤ (2R_N)^k ≤ (3R)^k for λ_j ∈ [-R_N, R_N] with bound (<ref>), however the expectation on the left hand side of (<ref>) is not necessarily real. We do however know that the 𝒪(1) term in (<ref>) is real to leading order and so we can write _M[ e^-i MA[x(M)=k,ℐ_R(M)]] = _M[ e^-i MA[x(M)=k,ℐ_R(M)]] ( 1 + io(1)) and thence focus on bounding the real part of the expectation to obtain _M[ e^-i MA[x(M)=k, ℐ_R(M)]] ≤ K(3R)^k Z_N-k/k!Z_N(N-k/N)^N + N(N+1)/2 e^- A^2/2N(∫_-R_N^x_N dz e^(N-k)Φ(z, μ))^k where we have exchanged 𝒪(1) terms for some appropriate constant K. Continuing to bound (<ref>): _M[ e^-i MA[x(M)=k, ℐ_R(M)]] ≤ K(3R)^2kZ_N-k/k!Z_N(N-k/N)^N + N(N+1)/2 e^- A^2/2Nexp(k(N-k)sup_z∈[-2R, x] ν∈ B_δ(μ_SC)Φ(z, ν)) ≤ K(3R)^2kZ_N-k/k!Z_N(N-k/N)^N + N(N+1)/2 e^- A^2/2N e^-k(N-k)(1/2 + I_1(x; 2) where we have used the same result as used around (A.18) in <cit.> to take the supremum. Recalling (<ref>), we can now use (<ref>) and the GOE large deviations principle <cit.> as in <cit.> to obtain _M[ e^-i MA[x(M)=k]] ≤ K”(3R)^k Z_N-k/k!Z_N(N-k/N)^N + N(N+1)/2 e^-k(N-k)(1/2 + I_1(x; 2)) e^-1/2N A^2 + e^-NR^2 We now seek to obtain a complementary lower bound and again follow <cit.> in choosing some y and R' such that y < x < R' < -2. We then, following a similar procedure as above, find _M[ e^-i MA[x(M)=k]] ≥ K̃Z_N-k/k!Z_N(N-k/N)^N + N(N+1)/2 e^-1/2N A^2exp(k(N-k)sup_z∈[y, x] ν∈ B_δ(μ_SC)Φ(z, ν)) and taking y↗ x we obtain the complement to (<ref>): _M[ e^-i MA[x(M)=k]] ≥ K̃Z_N-k/k!Z_N(N-k/N)^N + N(N+1)/2 e^-k(N-k)(1/2 + I_1(x; 2)) e^-1/2N A^2. Next we need the asymptotic beahviour of the Selberg term in (<ref>) and (<ref>) T_N,kZ_N-k/k!Z_N(N-k/N)^N + N(N+1)/2 = Z_N-k(N-k)!/Z_N N!(N-k/N)^(N-k)(N-k+1)/4_T_N,k' N!/(N-k)!k!(N-k/N)^N/2 + N(N+1)- (N-k)(N-k+1)/4. The term T_N,k' appears in <cit.> (defined in A.13) and it is shown there that lim_N→∞ N^-1log T_N,k' = k/2. Clearly lim_N→∞N^-1logN!/(N-k)!k! = 0 and it is simple to show that lim_N→∞(N-k/N)^N/2 + N(N+1)- (N-k)(N-k+1)/4 = e^-k(k+1)/2 and so we have overall lim_N→∞N^-1log T_N,k = k/2. So absorbing any 𝒪(1) terms into constants K_L and K_U we have K_L e^-kN(1 + o(1)) I_1(x; 2) e^-1/2N A^2≤_M[ e^-i MA[x(M)=k]] ≤ K_U e^-kN(1+o(1)) I_1(x; 2) e^-1/2N A^2 Set S=s_1e⃗_1e⃗_1^T + s_2e⃗_2e⃗_2^T and S_1 = s_1e⃗_1e⃗_1^T. Suppose s_1>0 and s_2>0. 
By the interlacing property of eigenvalues, we have λ^(M)_1 ≤λ^(M+S_1)_1 ≤λ^(M)_2 ≤…≤λ^(M)_k ≤λ^(M+S_1)_k≤λ^(M)_k+1≤λ^(M+S_1)_k+1≤…≤λ^(M)_N≤λ^(M+S_1)_N Therefore we have {x(M) =k }⊂{x(M+S_1) ∈{k-1, k}}⊂{x(M) ∈{k-1,k, k+1}}=_j=-1^1{x(M) = k + j }  for k>0, {x(M) =k }⊂{x(M+S_1) =k }⊂{x(M) ∈{k, k+1}}=_j=0^1{x(M) = k + j } for k=0, and so (<ref>) gives K_L e^-kN (1+o(1))I_1(x; 2) e^-1/2N A^2≤ _M[ e^-i MA[x(M+S_1)∈{k-1,k}]] ≤ 3K_U e^-(k-1)N (1+o(1))I_1(x; 2) e^-1/2N A^2, e^-1/2N A^2≤ _M[ e^-i MA[x(M+S_1)=0]] ≤ 2K_U e^-1/2N A^2. We can then extend to S likewise by observing that interlacing gives {x(M+S_1) ∈{k, k+1}} ⊂{x(M+S) ∈{k-1, k, k+1}}⊂{x(M+S_1) ∈{k-1,k, k+1, k+2}} and iterating using (<ref>) yields {x(M) = k+1 }⊂{x(M+S) ∈{k-1, k, k+1}}⊂_j=-1^3{x(M) = k + j },   for k>0 {x(M) = k+1 }⊂{x(M+S) ∈{ k, k+1}}⊂_j=0^3{x(M) = k + j }, for k=0 and (<ref>) then gives K_L e^-(k+1)N(1+o(1)) I_1(x; 2) e^-1/2N A^2 ≤_M[ e^-i MA[x(M+S)∈{k-1,k, k+1}]] ≤ 5K_U e^-(k-1)N(1+o(1)) I_1(x; 2) e^-1/2N A^2 K_L e^-N(1+o(1)) I_1(x; 2) e^-1/2N A^2 ≤_M[ e^-i MA[x(M+S)∈{0, 1}]] ≤ 4K_U e^-1/2N A^2. If instead the signs of s_1, s_2 are different, then the interlacing will be in the reverse orders, but the conclusion of (<ref>) will be unchanged. Finally using (<ref>) in the analogue of (<ref>) _M[|(M - xI + S)|[x(M+S)∈{k-1,k, k+1}] = _M[|(M - xI + S)|[x(M+S)∈{k-1,k, k+1}] = { K^(1)_Nlim_ϵ↘ 0∫ dx⃗_1 dx⃗_2 dζ_1 dζ_1^† dζ_2 dζ_2^†exp{-ix⃗_1^T(M-(x + iϵ)I+S)x⃗_1 - ix⃗_2^T(M-(x-iϵ)I + S)x⃗_2}       exp{ i ζ_1^†(M-(x+iϵ) I+S)ζ_1 + i ζ_2^†(M-(x - iϵ)I+S)ζ_2}       _M[e^-i MA[x(M+S)∈{k-1,k, k+1}]]} = { K^(1)_Nlim_ϵ↘ 0∫ dx⃗_1 dx⃗_2 dζ_1 dζ_1^† dζ_2 dζ_2^†exp{-ix⃗_1^T(M-(x + iϵ)I+S)x⃗_1 - ix⃗_2^T(M-(x-iϵ)I + S)x⃗_2}       exp{ i ζ_1^†(M-(x+iϵ) I+S)ζ_1 + i ζ_2^†(M-(x - iϵ)I+S)ζ_2}       _M[e^-i MA[x(M+S)∈{k-1,k, k+1}]](1+io(1))} = { K^(1)_Nlim_ϵ↘ 0∫ dx⃗_1 dx⃗_2 dζ_1 dζ_1^† dζ_2 dζ_2^†exp{-ix⃗_1^T(M-(x + iϵ)I+S)x⃗_1 - ix⃗_2^T(M-(x-iϵ)I + S)x⃗_2}       exp{ i ζ_1^†(M-(x+iϵ) I+S)ζ_1 + i ζ_2^†(M-(x - iϵ)I+S)ζ_2}       _M[e^-i MA[x(M+S)∈{k-1,k, k+1}]]}(1+io(1)) From this point on, the proof proceeds, mutatis mutandis, as that for Lemma <ref> but applied to the upper and lower bounds on (<ref>) obtained from (<ref>). The final range of integration for p_1 and p_2 will be some intervals (0, o(1)) owing to the change of variables used around (<ref>), but this does not affect the ensuing asymptotics in which the p_1,p_2 integration contours are deformed through the saddle point at z^(-)_U,L. We note that if an appropriate generating function for [x(M + S)=k] could be found, that would allow for a straightforward taking of the expectation in (<ref>), then the calculations of Lemma <ref> could be modified to include this extra term and then the desired expectation [|(M - xI + S)| [x (M+S)=k]] could be read-off in comparison with the result of Lemma <ref>. We have established all we need to prove Theorem <ref>. * First consider u < -E_∞. The proof proceeds just as that of Theorem <ref> but applying Lemma <ref> instead of Lemma <ref> and working identically on the upper and lower bounds from Lemma <ref>. Now consider u> -E_∞. By the interlacing property as used around (<ref>), x(M) and x(M+S) differ by no more than 2. Hence x(M+S) ∈𝒦x(M) = 𝒪(1) but for 0 > x > -2, and M∼ GOE_N, the large deviations principle for the GOE <cit.> gives ℙ(x(M) = 𝒪(1)) ≤ e^-cN^2 for some constant c, hence the x integral analogous to (<ref>) is exponentially suppressed with quadratic speed in N for x>-2. 
But we have already seen that the integral is only suppressed with linear speed in N for x < -2, and further that Θ_H,k(u) is increasing on (-∞, -E_∞) and so, by the Laplace principle, the leading order contribution is from around x=-2 and so lim_N→∞1/Nlog C_N,𝒦^h(Nu) = lim_N→∞1/Nlog C_N,𝒦^h(-NE_∞) for u > - E_∞, which completes the proof. We are clearly unable to provide an exact leading term for C^h_N, 𝒦(Nu) for any value of u as we did for C^h_N(Nu) for u< -E_∞ in Theorem <ref> because the presence of S in x(M+S) has forced us in Lemma <ref> to resort to upper and lower bounds on the leading order term. We note that in <cit.> the authors are also not able to obtain the exact leading term in this case by their rather different methods. Recalling Remark <ref>, we conjecture that this term could be obtained by variants of our methods if only a suitable (perhaps approximate) generating function for [x(M+S) = k] could be discovered. § LOW RANK PERTURBATION OF A MATRIX IDENTITY In this section we establish a modified version of Theorem I from <cit.> required in the proof in Lemma <ref>. In that Lemma, we are faced with an integral of the form ℐ_N(F; S) = ∬_ℝ^N dx⃗_1 dx⃗_2 F(Q_B) e^-iN SB where the N× N matrix B is defined as B=x⃗_1x⃗_1^T + x⃗_2x⃗_2^T, the 2× 2 matrix Q_B is given by Q_B = ([ x⃗_1^Tx⃗_1 x⃗_1^Tx⃗_2; x⃗_2^Tx⃗_1 x⃗_2^Tx⃗_2 ]), F is some suitably nice function and S is some real symmetric matrix of rank r=𝒪(1) as N→∞ and with non-zero eigenvalues {N^-1/2s_i}_i=1^r for s_i=𝒪(1). It is sufficient to be able to evaluate a leading order term of ℐ_N in an expansion for large N. <cit.> proves the following related result: Given m vectors in ℝ^N x⃗_1, …, x⃗_m, denote by Q(x⃗_1, …, x⃗_m) the m× m matrix whose entries are given by Q_ij = x⃗_i^Tx⃗_j. Let F be any function of an m× m matrix such that the integral ∫_ℝ^N…∫_ℝ^Ndx⃗_1… dx⃗_m |F(Q)| exists and define the integral 𝒥_N, m(F) ∫_ℝ^N…∫_ℝ^Ndx⃗_1… dx⃗_m F(Q). Then we have 𝒥_N,m(F) = π^m/2(N - m-1/2)/∏_k=0^m-1Γ(N-k/2)∫_Sym_≥ 0(m)dQ̂(Q̂)^N-m-1/2F(Q̂). We will prove the following perturbed version of this result and in greater generality than is required in the present work. * The proof of Lemma <ref> presented in Appendix D of <cit.> proceeds by induction on m and relies on writing the integration vector x⃗_m as x⃗_m = ρ_m O_me⃗_N where e⃗_N is the N-th basis vector in the chosen orthonormal basis, ρ_m>0 is a scalar variable and O_m is an orthogonal matrix. The proof proceeds by making a change of variables for the first m-1 integration vectors and then finding that the integrand does not depend on O_m and so the integral over O_m with respect to the Haar measure just contributes a volume factor of 2π^N/2/Γ(N/2). It is at this point where the e^-iN SB term in (<ref>) causes problems because a dependence on O_m remains. Indeed, we have x⃗_m^T Sx⃗_m = ρ_m e⃗_N^TO^T_m S O_m e⃗_N. Since S is real symmetric we may take, WLOG, S = N^αdiag(s_1, …, s_r, 0, …, 0). Then e^-iNx⃗_m^T Sx⃗_m = e^-iN^1 + αρ_m ∑_j=1^r s_j (o_Nj)^2 where o_Nj is the j-th component of the N-th column of O. Proceeding with an evaluation of an integral like (<ref>) then requires the evaluation of the integral ∫_O(N) dμ_Haar(O_m) e^-iN^1 + αρ_m ∑_j=1^r s_j (o_Nj)^2. We can now follow <cit.>, in particular the proof of Theorem 7 therein. 
We have the well-known result (Fact 8 in <cit.>) that in the sense of distributions (o⃗_1, …, o⃗_p) ∼(g̃⃗̃_1/||g̃⃗̃_1||,…, g̃⃗̃_p/||g̃⃗̃_p||) for any p=𝒪(1) and where the (g̃⃗̃_j)_j=1^p are constructed via the Gram-Schmidt process from (g⃗_j)_j=1^r_Ai.i.d.∼𝒩(0, 1). So in particular o⃗_N ∼g⃗/||g⃗||,   g⃗∼𝒩(0,1). (<ref>) then exactly gives ∫_O(N) dμ_Haar(O_m) e^-iN^1 + αρ_m ∑_j=1^r s_j (o_Nj)^2 = ∫_ℝ^Ndg⃗/(2π)^N/2 e^-g⃗^2/2exp(-iN^1+αρ_m ∑_j=1^r s_j g_j^2/||g⃗||^2) Introduce the event B_N(υ) {| N^-1⟨g⃗, g⃗⟩ - 1| ≤ N^-υ} and then from <cit.> we immediately conclude that under the i.i.d Gaussian law of g⃗ the complementary event has low probability: ℙ(B_N(υ)^c) =𝒪( C(υ) e^-β N^1-2υ) where β, C(υ) > 0 and we take 0<υ < 1/2 to make this statement meaningful. This enables us to write ∫_O(N) dμ_Haar(O_m) e^-iN^1 + αρ_m ∑_j=1^r s_j (o_Nj)^2 = (1 + 𝒪(e^-β N^1-2υ))∫_ℝ^N dg⃗/(2π)^N/2 e^-g⃗^2/2exp(-iN^1+αρ_m ∑_j=1^r s_j g_j^2/||g⃗||^2){B_N(υ)} = (1 + 𝒪(e^-β N^1-2υ))∫_ℝ^N dg⃗/(2π)^N/2{B_N(υ)} e^-g⃗^2/2exp(-iN^α(1+𝒪(N^-υ))ρ_m ∑_j=1^r s_j g_j^2) but given B_N(υ) we have g_j^2 ≲ N for all j=1,…, N and so we do not, as it stands, have uniformly small error terms. We can circumvent this by introducing the following event for 0<η<1/2: E_N^(r)(η) = {|g_j|≤ N^1/2 - η  for j=1,…, r}. Let us use ĝ⃗̂ to denote the N-r dimensional vector with components (g_r+1, …, g_N). Then we have | |N^-1||ĝ⃗̂||^2 - 1 | - N^-1∑_i=1^r g_j^2 | ≤ |N^-1||g⃗||^2 - 1 | ≤ |N^-1||ĝ⃗̂||^2 - 1 | + N^-1∑_i=1^r g_j^2 so if η > υ/2 then it follows that B_N(υ)  |  E_N^(r)(η) = B_N-r(υ). But we also have (e.g. <cit.> Appendix C) ℙ( E_N^(r)(η)) = [erf(N^1/2-η)]^r = [1 - 𝒪(N^1/2 - η e^-N^1-2η)]^r = 1 - 𝒪(N^1/2 - η e^-N^1-2η) and so (taking η> υ, say) ℙ(B_N(υ)∩ E^(r)_N(η)) = ℙ(B_N(υ)  |  E^(r)_N(η)) ℙ(E^(r)_N(η)) =1 - 𝒪(e^-α N^1-2υ) and thus we can replace (<ref>) with ∫_O(N) dμ_Haar(O_m) e^-iN^1 + αρ_m ∑_j=1^r s_j (o_Nj)^2 = (1 + 𝒪(e^-β N^1-2υ))∫_ℝ^N dg⃗/(2π)^N/2{B_N(υ)∩ E^(r)_N(η)} e^-g⃗^2/2exp(-iN^α(1+𝒪(N^-υ))ρ_m ∑_j=1^r s_j g_j^2) but now N^α-υ g_j^2 ≤ N^α + 1 -υ - 2η≤ N^α + 1 -3υ→ 0 as N→∞ so long as we choose υ > α + 1/3. Given that α< 1/2, this choice is always possible for 0< υ < 1/2. Thus the error term in the exponent of (<ref>) is in fact uniformly small in g⃗ and so we obtain ∫_O(N) dμ_Haar(O_m) e^-iN^1 + αρ_m ∑_j=1^r s_j (o_Nj)^2 = (1 + o(1))∫_ℝ^Ndg⃗/(2π)^N/2{B_N(υ)∩ E^(r)_N(η)}exp(-g⃗^2/2-iN^αρ_m ∑_j=1^r s_j g_j^2) = (1 + o(1))∫_ℝ^rdg_1… dg_r/(2π)^r/2exp(-1/2∑_j=1^r{1+2iN^αρ_m s_j} g_j^2) = (1 + o(1)) ∏_j=1^r ( 1+ 2iN^αρ_m s_j)^-1/2. In the induction step in the proof of <cit.>, ρ_m becomes the new diagonal entry of the expanded Q̂ matrix. Combining (<ref>) with that proof gives the result ℐ_N(F; S) =(1+ o(1))π^N - 1/2(1+ o(1))/Γ(N/2)Γ(N-1/2)∫_Sym_≥ 0(m)dQ̂(Q̂)^N-3/2F(Q̂) ∏_j=1^r∏_i=1^N ( 1+ 2iN^αQ̂_iis_j)^-1/2. We note a comparison between Lemma <ref> and the theorem in Appendix C of <cit.>. That result is exact and holds for general functions of projections x⃗_i^Ts⃗ onto some arbitrary fixed vector s⃗, so it is a generalisation of our Lemma <ref> for r=1, however it only applies to r=1. In <cit.>, the function F(Q_B) (in our notation) is replaced by the more general ℱ(Q_B; s⃗_B) where the vector s⃗_B has entries (s⃗_B)_i = s⃗^Tx⃗_i and s⃗ is an arbitrary vector. The result analogous Lemma <ref> is 𝒥_N,m(ℱ; s⃗) ∝∫_Sym_≥ 0(m)dQ̂∫_^mdt⃗(Q̂)^N-m-2/2ℱ(Q̂ + t⃗t⃗^T; s⃗t⃗), where we omit the constant multiplicative factor since we are content to verify that the functional form agrees with Lemma <ref>. 
To use this theorem in the case of Lemma <ref>, the vector s⃗ is chosen to have norm s⃗_2 = N^α/2s_1^1/2, where s_1 is the single non-zero eigenvalue of the rank 1 matrix S and ℱ(Q_B; s⃗_B) = F(Q_B)e^-iN∑_j=1^m (x⃗_j^Ts⃗)^2. With these choices 𝒥_N,m(ℱ; s⃗) ∝∫_Sym_≥ 0(m)dQ̂∫_^mdt⃗(Q̂)^N-m-2/2F(Q̂ + t⃗t⃗^T) e^-iN^1+αs_1 t⃗^2 = ∫_^mdt⃗∫_Sym_≥ 0(m)dQ̂{t⃗^TQ̂^-1t⃗ < 1}(Q̂- t⃗t⃗^T)^N-m-2/2F(Q̂) e^-iN^1+αs_1t⃗^2 = ∫_^mdt⃗∫_Sym_≥ 0(m)dQ̂{t⃗^TQ̂^-1t⃗ < 1}Q̂^N-m-2/2 (1 - t⃗^TQ̂^-1t⃗)^N-m-2/2F(Q̂) e^-iN^1+αs_1 t⃗^2 = ∫_t⃗_2<1dt⃗∫_Sym_≥ 0(m)dQ̂Q̂^N-m-1/2 (1 - t⃗^2)^N-m-2/2F(Q̂) e^-iN^1+αs_1t⃗^TQ̂t⃗. Now ∫_t⃗_2<1dt⃗ (1 - t⃗^2)^N-m-2/2e^-iN^1+αs_1t⃗^TQ̂t⃗ = ∫_t⃗_2<1dt⃗exp{-N(iN^αs_1 t⃗^TQ̂t⃗ - N-m-2/2Nlog(1 - t⃗^2))}, and so we can evaluate the integral over t⃗ asymptotically. The saddle point is clearly at t⃗ = 0, so the leading order contribution as N→∞ is from around this point. We proceed by expanding the logarithm and evaluating the integral one coordinate at a time. Also note that N-m-2/2N∼1/2 for large N. Thus, writing t⃗ = (t⃗', t_m), ∫_t⃗_2<1dt⃗ (1 - t⃗^2)^N-m-2/2e^-iN^1+αs_1t⃗^TQ̂t⃗ ∼∫_t⃗_2<1dt⃗exp{-N(iN^αs_1 t⃗^TQ̂t⃗ + 1/2t⃗^2 )} ∼∫_t⃗'_2<1-ϵ^2dt⃗'∫_-ϵ^ϵdt_m exp{-N( 1/2 t_m^2 (2Q̂_mmiN^αs_1 + 1) + 2t_m ∑_j≠ mQ̂_mjt_j' + t⃗'^TQ̂'t⃗' + 1/2t⃗'^2 )} where Q̂' is the m-1 × m-1 top left block of Q̂ and ϵ≪ 1. Completing the square and applying Laplace's method to the t_m integral gives ∫_t⃗_2<1dt⃗ (1 - t⃗^2)^N-m-2/2e^-iN^1+αs_1t⃗^TQ̂t⃗ ∼ N^-1/2∫_t⃗'_2<1dt⃗' (2Q̂_mmiN^αs_1 + 1)^-1/2exp{-N( t⃗'^TQ̂'t⃗' + 1/2t⃗'^2 )} and so one can clearly iterate to obtain ∫_t⃗_2<1dt⃗ (1 - t⃗^2)^N-m-2/2e^-iN^1+αs_1t⃗^TQ̂t⃗ ∼ N^-m/2∏_j=1^m(2Q̂_jjiN^αs_1 + 1)^-1/2 and so, recalling (<ref>), we obtain the same expression as Lemma <ref> (up to untracked constants). § CONCLUSION The interpretation of the results we have presented in this chapter is largely the same as that first given in <cit.>. Under the chosen modeling assumptions, the local optima of the neural network loss surface are arranged so that, above a critical value -NE_∞, it is overwhelmingly likely that gradient descent will encounter high-index optima and so `escape' and descend to lower loss. Below -NE_∞, the low-index optima are arranged in a `banded' structure; however, due to the imprecision of Theorem <ref>, the bands are slightly blurred when compared with <cit.>. We display the differences in Table <ref>. Our results have plugged a gap in the analysis of <cit.> by demonstrating that the specific activation function required by the technicalities of their derivation is not, in fact, a requirement of the results themselves, which we have shown to hold for any reasonable choice of activation function. At the same time, experimental results imply that a sufficiently precise model for deep neural network loss surfaces should display some non-trivial dependence on the choice of activation function; we have shown that no dependence at all is seen at the level of logarithmic asymptotic complexity, though a dependence is visible in the sharp leading order complexity. In defense of <cit.>, we have reduced the scope for their results to be some spurious apparition of an intersection of several unrealistic simplifications. However, with the same result, we have demonstrated an important aspect of neural network architectural design to which the multi-spin glass correspondence is entirely insensitive, so limiting the precision of any statements about real neural networks that can be made using this analysis.
In the pursuit of our aims, we have been forced to approximately reproduce the work of <cit.> by means of the supersymmetric method of Random Matrix Theory, which we believe is quite novel and have also demonstrated how various steps in these supersymmetric calculations can be adapted to the setting of a GOE matrix deformed by some low-rank fixed matrix including utilising Gaussian approximations to orthogonal matrices in ways we have not previously seen in the literature. We believe some of our intermediate results and methods may be of use in other contexts in Random Matrix Theory. As highlighted in the main text, there are a few areas for future work that stem immediately from our calculations. We list them here along with other possibilities. * Constructing an appropriate indicator function (or approximate indicator function) for the index of a matrix so that Theorem <ref> can be precised and to obtain exact leading order terms for C^h_N,k that could not be obtained in <cit.> (see Remark <ref>). * The `path-independence' assumption (Section <ref>, assumption <ref>) is the weakest link in this work (and that of <cit.>) and we have shed further light on its validity through experimentation (Section <ref>). The supersymmetric calculations used here have shown themselves to be powerful and quite adaptable. We therefore suggest that it may be possible to somehow encapsulate the failure of assumption <ref> as a first-order correlation term and repeat the presented analysis in an expansion when this term is small. * Further, this work and others mentioned in the introduction have shown that studying spin glass like objects in this context is a fruitful area of research and so we would like to study more exotic glassy objects inspired by different neural network architectures and applications and hope to be able to adapt the calculations presented here to such new scenarios. = CHAPTER: A SPIN GLASS MODEL FOR GENERATIVE ADVERSARIAL NETWORKS The content of this chapter was published first as a pre-print in January 2021 (<https://arxiv.org/abs/2101.02524>) and later as a journal article: “A spin glass model for the loss surfaces of generative adversarial networks”. Nicholas P Baskerville, Jonathan P Keating, Francesco Mezzadri and Joseph Najnudel. Journal of Statistical Physics, 186(2):1-45 2022. NPB suggested the topic, performed most of the calculations and experiments and wrote the paper. The other authors contributed ideas for possible approaches, provided feedback on results throughout and made small revisions to the drafts. NPB and JN collaborated on the proof of Lemma 4. Jonathan Hodgson helped considerably with the design of Figure 6. Anonymous reviewers spotted some minor errors, advised on changes of presentation and extra experiments and provided useful references. § AN INTERACTING SPIN GLASS MODEL We use multi-spin glasses in high dimensions as a toy model for neural network loss surfaces without any further justification, beyond that found in <cit.> and Chapter <ref>. GANs are composed of two networks: generator (G) and discriminator (D). G is a map ℝ^m→ℝ^d and D is a map ℝ^d→ℝ. G's purpose is to generate synthetic data samples by transforming random input noise, while D's is to distinguish between real data samples and those generated by G. 
Given some probability distribution ℙ_data on some ℝ^d, GANs have the following minimax training objective min_Θ_Gmax_Θ_D{𝔼_x⃗∼ℙ_datalog D(x⃗) + 𝔼_z⃗∼𝒩(0, σ_z^2)log(1 - D(G(z⃗)))}, where Θ_D, Θ_G are the parameters of the discriminator and generator respectively. With z⃗∼𝒩(0, σ_z^2), G(z⃗) has some probability distribution ℙ_gen. When successfully trained, the initially unstructured ℙ_gen examples are easily distinguished by D, this in turn drives improvements in G, bring ℙ_gen closer to ℙ_data. Ultimately, the process successfully terminates when ℙ_gen is very close to ℙ_data and D performs little better than random at the distinguishing task. To construct our model, we introduce two spin glasses: () = ∑_i_1,…, i_p=1^N_D X_i_1,…, i_p∏_k=1^p _i_k (, ) = ∑_i_1,…, i_p+q=1^N_D+N_G Z_i_1,…, i_p+q∏_k=1^p+q w_k where w⃗^T = (^T, ^T), all the X_i_1,…, i_p are i.i.d. 𝒩(0,1) and Z_j_1,…, j_p+q are similarly i.i.d. 𝒩(0,1). We then define the models for the discriminator and generator losses: (, ) = () - σ_z(, ), (, ) = σ_z (,). plays the role of the loss of the discriminator network when trying to classify genuine examples as such. plays the role of loss of the discriminator when applied to samples produced by the generator, hence the sign difference between and . are the weights of the discriminator, and the weights of the generator. The X_i⃗ are surrogates for the training data (i.e. samples from ℙ_data) and the Z_j⃗ are surrogates for the noise distribution of the generator. For convenience, we have chosen to pull the σ_z scale outside of the Z_j⃗ and include it as a constant multiplier in (<ref>)-(<ref>). In reality, we should like to keep Z_j⃗ as i.i.d. 𝒩(0,1) but take X_i⃗ to have some other more interesting distribution, e.g. normally or uniformly distributed on some manifold. Using [x] to denote the integer part of x, we take N_D = [κ N], N_G = [κ' N] for fixed κ∈(0,1), κ'=1-κ, and study the regime N→∞. Note that there is no need to distinguish between [κ N] and κ N in the N→∞ limit. Our model is not supposed to have any direct relationship to GANs. Rather, we have used two spin glasses as models for high-dimensional random surfaces. The spin glasses are related by sharing some of their variables, namely the , just as the two training objectives in GANs share the discriminator weights. In prior work modeling neural network loss surfaces as spin glasses, the number of spins corresponds to the number of layers in the network, therefore we have chosen p spins for and p+q for , corresponding to p layers in the discriminator and q layers in the generator, but the generator is only ever seen in the losses composed with the discriminator. One could make other choices of and to couple the two glasses and we consider one such example in the appendix Section <ref>. § KAC-RICE FORMULAE FOR COMPLEXITY Training GANs involves jointly minimising the losses of the discriminator and the generator. Therefore, rather than being interested simply in upper-bounding a single spin-glass and counting its stationary points, the complexity of interest comes from jointly upper bounding both L^(D) and L^(G) and counting points where both are stationary. 
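As a concrete illustration of the model (<ref>)-(<ref>), the following NumPy sketch draws the Gaussian coupling tensors X and Z and evaluates L^(D) and L^(G) at a point of the product of hyperspheres. The particular values of N, κ, p, q and σ_z, and the choice of sphere radii √(N_D) and √(N_G) (the usual spherical spin glass convention), are assumptions made for the example only and play no role in the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, kappa, p, q, sigma_z = 10, 0.5, 3, 3, 1.0     # illustrative values only
N_D = int(kappa * N)
N_G = N - N_D

# i.i.d. N(0,1) coupling tensors: X has order p over the discriminator weights,
# Z has order p+q over the concatenated weights w = (w_D, w_G)
X = rng.standard_normal((N_D,) * p)
Z = rng.standard_normal((N_D + N_G,) * (p + q))

def spin_glass(J, w):
    """Evaluate sum_{i_1,...,i_k} J_{i_1...i_k} w_{i_1} ... w_{i_k}."""
    out = J
    for _ in range(J.ndim):
        out = np.tensordot(out, w, axes=([0], [0]))
    return float(out)

# points on spheres of radius sqrt(N_D) and sqrt(N_G) (normalisation immaterial here)
w_D = rng.standard_normal(N_D); w_D *= np.sqrt(N_D) / np.linalg.norm(w_D)
w_G = rng.standard_normal(N_G); w_G *= np.sqrt(N_G) / np.linalg.norm(w_G)
w = np.concatenate([w_D, w_G])

l_D = spin_glass(X, w_D)            # discriminator-only spin glass
l_DG = spin_glass(Z, w)             # coupled spin glass on (w_D, w_G)
L_D = l_D - sigma_z * l_DG          # discriminator loss surrogate
L_G = sigma_z * l_DG                # generator loss surrogate
print(L_D, L_G)
```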
Using S^M to denote the M-sphere[We use the convention of the M-sphere being the sphere embedded in ℝ^M.], we define the complexity C_N =|{∈ S^N_D, ∈ S^N_G  : ∇_D = 0, ∇_G = 0, ∈ B_D, ∈ B_G}| for some Borel sets B_D, B_G⊂ℝ and where ∇_D, ∇_G denote the Riemannian covariate derivatives on the hyperspheres with respect to the discriminator and generator weights respectively. Note: * We have chosen to treat the parameters of each network as somewhat separate by placing them on their own hyper-spheres. This reflects the minimax nature of GAN training, where there really are 2 networks being optimised in an adversarial manner rather than one network with some peculiar structure. * We could have taken ∇ = (∇_D, ∇_G) and required ∇ = ∇ = 0 but, as in the previous comment, our choice is more in keeping with the adversarial set-up, with each network seeking to optimize separately its own parameters in spite of the other. * We will only be interested in the case B_D = (-∞, N u_D) and B_G= (-∞, N u_G), for u_D, u_G∈ℝ. So that the finer structure of local minima and saddle points can be probed, we also define the corresponding complexity with Hessian index prescription C_N, k_D, k_G =|{∈ S^N_D , ∈ S^N_G  :  ∇_D = 0, ∇_G = 0, ∈ B_D, ∈ B_G i(∇_D^2 L^(D)) = k_D,  i(∇_G^2 L^(G)) = k_G }|, where i(M) is the index of M (i.e. the number of negative eigenvalues of M). We have chosen to consider the indices of the Hessians ∇_D^2 and ∇_G^2 separately, just as we chose to consider separately vanishing derivatives ∇_D and ∇_G. We believe this choice best reflects the standard training loop of GANs, where each iteration updates the discriminator and generator parameters in separate steps. To calculate the complexities, we follow the well-trodden route of Kac-Rice formulae as pioneered by <cit.>. For a fully rigorous treatment, we proceed as in <cit.> and Chapter <ref>. C_N = ∫_S^N_D× S^N_G d d  φ_(∇_D , ∇_G )(0) [ |([ ∇_D^2 ∇_GD; ∇_DG ∇^2_G ])|  | ∇_G =0, ∇_D = 0]1{∈ B_D, ∈ B_G} and therefore C_N = ∫_S^N_D× S^N_G d d  φ_(∇_D , ∇_G )(0)∫_B_Ddx_D ∫_B_G dx_G  φ_(x_D)φ_(x_G) [ |([ ∇_D^2 ∇_GD; ∇_DG ∇^2_G ])|  | ∇_G =0, ∇_D = 0, = x_D, = x_G]. where φ_(∇_D ,∇_G ) is the joint density of (∇_D ,∇_G )^T, φ_ the density of , and φ_ the density of , all implicitly evaluated at (, ). In the notation of Theorem <ref>, we make the following choices: ϕ = ([ ∇_D; ∇_G ]),    ψ = ([ ; ]) and so A = B_D× B_G,    u⃗ = 0. and the manifold ℳ is taken to be S^N_D× S^N_G with the product topology. It is sufficient to check the conditions of Theorem <ref> with the above choices. Conditions (a)-(f) are satisfied due to Gaussianity and the manifestly smooth definition of L^(D), L^(G). The moduli of continuity conditions as in (g) are satisfied separately for L^(D) and its derivatives on S^N_D and for L^(G) and its derivatives on S^N_G, as seen in the proof of the analogous result for a single spin glass in <cit.>. But since ℳ is just a direct product with product topology, it immediately follows that (g) is satisfied, so Theorem <ref> applies and we obtain (<ref>). (<ref>) follows simply, using the rules of conditional expectation. With Lemma <ref> in place, we can now establish the following Kac-Rice expression specialised to our model: For (N-2)× (N-2) GOE matrix M and independent (N_D - 1)× (N_D-1) GOE matrix M_1, define H(x, x_1) d= bM + b_1([ M_1 0; 0 0 ]) - x - x_1([ I_N_D 0; 0 0 ]). For u_G, u_D∈ℝ, define B = {(x, x_1)∈ℝ^2  :  x≤1/2(p+q)2^p+q u_G,    x_1 ≥ -(p+q)^-1 2^-(p+q)p x - p/2u_D}. 
Define the constant K_N =ω_κ Nω_κ'N (2(N-2))^N-2/2 (2π)^-N-2/2(p + σ_z^22^p+1(p+q))^-κ N-1/2(σ_z^2 2^p+q (p+q))^-κ' N-1/2 where the variances are s^2 = 1/2σ_z^2(p+q)^2 2^3(p+q),     s_1^2 = p^2/2. and ω_N = 2π^N/2/Γ(N/2) is the surface area of the N sphere. The expected complexity C_N is then 𝔼 C_N = K_N ∫_B N/2π s^2e^-N/2s^2x^2dx  N/2π s_1^2 e^ -N/2s_1^2 x_1^2dx_1  𝔼| H(x, x_1)|. Define the matrix H̃ = ([ ∇_D^2 ∇_GD; ∇_DG ∇^2_G ]) appearing in the expression for C_N in Lemma <ref>. Note that H̃ takes the place of a Hessian (though it is not symmetric). We begin with the distribution of H̃ | {(, ) = (x_D, x_G),   (∇_D, ∇) = (0, 0)}. Note that the integrand in (<ref>) is jointly spherically symmetric in both and . It is therefore sufficient to consider H̃ in the region of a single point on each sphere. We choose the north poles and coordinate bases on both spheres in the region of their north poles. The remaining calculations are routine Gaussian manipulations, very similar in character to those in the previous chapter, so they are given at the end of this chapter (section <ref>). One finds H̃d=2p(p-1)([ N_D -1M^(D)_2 0; 0 0 ]) + σ_z2^p+q+1(p+q)(p+q-1)([ N_D -1M^(D)_1 -2^-1/2G; 2^-1/2 G^T N_G - 1M^(G) ]) - σ_z(p+q)x_G2^p+q([ -I_N_D 0; 0 I_N_G ]) - px_D([ I_N_D 0; 0 0 ]) where M^(D)_1, M^(D)_2 are independent GOE^N_D - 1 matrices, M^(G) is an independent GOE^N_G - 1 matrix and G is an independent (N_D - 1)×(N_G - 1) Ginibre matrix. Note that the dimensions are N_D - 1 and N_G - 1 rather N_D and N_G. This is simply because the hypersphere S^N_D is an N_D - 1 dimensional manifold, and similarly S^N_G. We can simplify by summing independent Gaussians to obtain H̃ = ([ σ_DN_D -1M^(D) -2^-1/2σ_GG,; 2^-1/2σ_G G^T σ_GN_G - 1M^(G) ]) -σ_z(p+q)x_G2^p+q([ -I_N_D 0; 0 I_N_G ]) - px_D([ I_N_D 0; 0 0 ]) where σ_G = σ_z2^p+q+1(p+q)(p+q-1) σ_D = σ_G^2 + 2p(p-1) and M^(D)∼ GOE^N_D - 1 is a GOE matrix independent of M^(G) and G. There is an alternative reformulation of H̃ that will also be useful. Indeed, because M^(D)_1,2d= -M^(D)_1,2, let us write H̃ as H̃ = σ_zJ(2^p+q+1(p+q)(p+q-1)(N_D + N_G - 2)M_1 - (p+q)x_G2^p+qI) + (2p(p-1)(N_D - 1)([ M_2 0; 0 0 ]) - px_D([ I_N_D 0; 0 0 ])) d= J[ σ_z2^p+q+1(p+q)(p+q-1)(N_D + N_G - 2)M_1 - σ_z(p+q)x_G2^p+qI + 2p(p-1)(N_D - 1)([ M_2 0; 0 0 ]) + px_D([ I_N_D 0; 0 0 ])] where M_1∼ GOE^N_D + N_G - 2 is a GOE matrix of size N_D + N_G-2, M_2∼ GOE^N_D - 1 is an independent GOE matrix of size N_D-1 and J = ([ -I_N_D 0; 0 I_N_G ]). If follows that |H̃| d= |[ σ_z2^p+q+1(p+q)(p+q-1)(N_D + N_G - 2)M_1 - σ_z(p+q)x_G2^p+qI + 2p(p-1)(N_D - 1)([ M_2 0; 0 0 ]) + px_D([ I_N_D 0; 0 0 ])]|. Now define the constants b = 2^p+q(p+q)(p+q-1)σ_z,       b_1= p(p-1)κ x = σ_z(p+q)2^p+q/Nx_G,       x_1= -p/Nx_D, and then we arrive at |H̃| d=(2(N-2))^N-2/2 | H(x, x_1)|. The variances of and are derived from those of , computed in Section <ref> (see (<ref>), (<ref>)): Var() = 1,    Var() = 2^p+q. Similarly the density φ_(∇_D ,∇_G ) is found in (<ref>): φ_(∇_D L^(D), ∇_G L^(G))(0) = (2π)^-N-2/2(p + σ_z^22^p+1(p+q))^-N_D - 1/2(σ_z^2 2^p+q (p+q))^-N_G-1/2. We have now collected all the inputs required for Lemma <ref>. The domain of integration B arises from the constraints L^(D)∈ (-∞, N u_D) and L^(G)∈ (-∞, N u_G) and the re-scaled variables (<ref>). This completes the proof. We will need the asymptotic behaviour of the constant K_N, which we now record in a small lemma. 
As N→∞, K_N ∼ 2^N/2π^N/2(κ^κκ'^κ')^-N/2κκ'(p + σ_z^22^p+1(p+q))^-κ N-1/2(σ_z^2 2^p+q (p+q))^-κ' N-1/2 By Stirling's formula K_N ∼ 4 π^N(4π/κ N)^-1/2(4π/κ' N)^-1/2(κ N/2 e)^-κ N/2(κ' N/2 e)^-κ' N/2(2(N-2))^N-2/2(2π)^-N-2/2       (p + σ_z^22^p+1(p+q))^-κ N-1/2(σ_z^2 2^p+q (p+q))^-κ' N-1/2 ∼ 2^N/2π^N/2(κ^κκ'^κ')^-N/2κκ'(p + σ_z^22^p+1(p+q))^-κ N-1/2(σ_z^2 2^p+q (p+q))^-κ' N-1/2 where we have used (N-2)^N-2/2 = N^N-2/2(1- 2/N)^N-2/2∼ N^N-2/2 e^-N/2. § LIMITING SPECTRAL DENSITY OF THE HESSIAN Our intention now is to compute the the expected complexity C_N via the Coulomb gas method. The first step in this calculation is to obtain the limiting spectral density of the random matrix H' = bM + b_1([ M_1 0; 0 0 ]) - x_1([ I 0; 0 0 ]), where, note, H' = H + xI is just a shifted version of H as defined in Lemma <ref>. Here the upper-left block is of dimension κ N, and the overall dimension is N. Let μ_eq be the limiting spectral measure of H' and ρ_eq its density. The supersymmetric method provides a way of calculating the expected Stieltjes transforms of ρ_eq <cit.>: ⟨ G(z) ⟩ = 1/N∂/∂ J|_J=0 Z(J) Z(J) 𝔼_H'(z - H' + J)/(z - H'). Recall that a density and its Stieltjes transform are related by the Stieltjes inversion formula ρ_eq(z) = 1/πlim_ϵ→ 0⟨ G(z + iϵ)⟩. The function Z(J) can be computed using a supersymmetric representation of the ratio of determinants. Firstly, we recall an elementary result from multivariate calculus, where M is a real matrix: ∫∏_i=1^N dϕ_i dϕ_i^*/2π e ^-iϕ^†Mϕ = 1/ M. By introducing Grassmann varibables χ_i, χ_i* and a Berezin integral, we obtain a complimentary expression: ∫1/-i∏_i=1^N dχ_i dχ_i^* e^-iχ^†Mχ = M, Using the integral results (<ref>), (<ref>) we can then write (z - H' + J)/(z - H') = ∫ dΨexp{ -iϕ^†(z-H') ϕ - i χ^†(z+J - H')χ} where the measure is dΨ = 1/-i (2π)^N∏_t=1^2 dϕ[t]dϕ^*[t] dχ[t] dχ^*[t], ϕ is a vector of N complex commuting variables, χ and χ^* are vectors of N Grassmann variables, and we use the [t] notation to denote the splitting of each of the vectors into the first κ N and last (1-κ)N components, as seen in <cit.>: ϕ = ([ ϕ[1]; ϕ[2] ]). We then split the quadratic form expressions in (<ref>) -ϕ^†(z-H') ϕ - χ^†(z+J - H')χ = -ϕ[1]^†(x_1 - b_1M_1) ϕ[1] -ϕ^†(z - bM) ϕ - χ[1]^†(x_1 - b_1 M_1)χ[1] - χ^†(z + J - b M)χ. Taking the GOE averages is now simple <cit.>: 𝔼_M exp{-ibϕ^†M ϕ - ibχ^†Mχ} = exp{- b^2/4N Q^2}, 𝔼_M exp{-ib_1ϕ[1]^†M_1 ϕ[1] - ib_1χ[1]^†M_1χ[1]} = exp{- b_1^2/4κ N Q[1]^2}, where the supersymmetric matrices are given by Q = ([ ϕ^†ϕ ϕ^†χ; χ^†ϕ χ^†χ ]),     Q[1] = ([ ϕ[1]^†ϕ[1] ϕ[1]^†χ[1]; χ[1]^†ϕ[1] χ[1]^†χ[1] ]). Introducing the tensor notation ψ = ϕ⊗([ 1; 0 ]) + χ⊗([ 0; 1 ]),   ψ[1] = ϕ[1] ⊗([ 1; 0 ]) + χ[1] ⊗([ 0; 1 ]) and ζ = ([ z 0; 0 z + J ]) we can compactly write Z(J) = ∫ dΨexp{ - b^2/4N Q^2 - b_1^2/4κ N Q[1]^2 - i ψ[1]^†ψ[1]x_1 - i ψ^†ζψ}. We now perform two Hubbard-Stratonovich transformations <cit.> Z(J) = ∫ dΨ dσ dσ[1]exp{ - N/b^2σ^2 - κ N/b_1^2σ[1]^2 - i ψ[1]^†(x_1 + σ[1])ψ[1] - i ψ^†(σ + ζ )ψ}, where σ and σ[1] inherit their form from Q, Q[1] σ = ( [ σ_BB σ_BF; σ_FB iσ_FF ]),    σ[1] = ( [ σ_BB[1] σ_BF[1]; σ_FB[1] iσ_FF[1] ]) with σ_BB, σ_FF, σ_BB[1], σ_FF[1] real commuting variables, and σ_BF, σ_FB, σ_BF[1], σ_FB[1] Grassmanns; the factor i is introduced to ensure convergence. 
Integrating out over dΨ is now a straightforward Gaussian integral in superspace, giving Z(J) = ∫ dΨ dσ dσ[1] exp{ - N/b^2σ^2 - κ N/b_1^2σ[1]^2 - i ψ[1]^†(x_1 + ζ + σ + σ[1])ψ[1] - i ψ[2]^†(σ + ζ )ψ[2]} = ∫ dσ dσ[1] exp{ - N/b^2σ^2 - κ N/b_1^2σ[1]^2 - κ Nlog(x_1 + ζ + σ + σ[1]) - κ' N log (σ + ζ )} =∫ dσ dσ[1] exp{ - N/b^2 (σ - ζ)^2 - κ N/b_1^2σ[1]^2 - κ Nlog(x_1 + σ + σ[1]) - κ' N logσ}. Recalling the definition of ζ, we have (σ - ζ)^2 = (σ_BB - z)^2 - (iσ_FF - z- J)^2 and so one immediately obtains 1/N∂/∂ J|_J=0 Z(J) = 2/b^2∫ dσ dσ[1] (z - iσ_FF) exp{ -N/b^2 (σ - z)^2 - κ N/b_1^2σ[1]^2 -κ Nlog(x_1 + σ + σ[1]) - κ' N logσ} = 2/b^2∫ dσ dσ[1] (z - iσ_FF) exp{ -N/b^2σ ^2 - κ N/b_1^2σ[1]^2 -κ Nlog(x_1 + z + σ + σ[1]) - κ' N log ( z+σ)} To obtain the limiting spectral density (LSD), or rather its Stieltjes transform, one must find the leading order term in the N→∞ expansion for (<ref>). This can be done by using the saddle point method on the σ,σ[1] manifolds. We know that the contents of the exponential must vanish at the saddle point, since the LSD is 𝒪(1), so we in fact need only compute σ_FF at the saddle point. We can diagonalise σ within the integrand of (<ref>) and absorb the diagonalising graded U(1/1) matrix into σ[1]. The resulting saddle point equations for the off-diagonal entries of the new (rotated) σ[1] dummy variable are trivial and immediately give that σ[1] is also diagonal at the saddle point. The saddle point equations are then 2/b_1^2σ_BB[1] + 1/σ_BB[1] + σ_BB + x_1 + z = 0 2/b^2σ_BB + κ/σ_BB[1] + σ_BB + x_1 + z + κ'/σ_BB + x = 0 2/b_1^2σ_FF[1] - 1/σ_FF[1] + σ_FF - ix_1 - iz = 0 2/b^2σ_FF - κ/σ_FF[1] + σ_FF - ix_1 -iz - κ'/σ_FF - iz = 0. (<ref>) and (<ref>) combine to give an explicit expression for σ_FF[1]: σ_FF[1] = b_1^2/2κ(2/b^2σ_FF - κ'(σ_FF - iz)^-1). With a view to simplifying the numerical solution of the coming quartic, we define t = i(σ_FF - iz) and then a line of manipulation with (<ref>) and (<ref>) gives (t^2 - zt - κ' b^2)((1 + κ^-1b^-2 b_1^2)t^2 - (κ^-1b_1^2 b^-2z - x_1)t - κ'κ^-1b_1^2) + b^2κ t^2= 0. By solving (<ref>) numerically for fixed values of κ, b, b_1, x_1, we can obtain the four solutions t_1(z), t_2(z), t_3(z), t_4(z). These four solution functions arise from choices of branch for (z, x_1)∈ℂ^2 and determining the correct branch directly is highly non-trivial. However, for any z∈ℝ, at most one of the t_i will lead to a positive LSD, which gives a simple way to compute ρ_eq numerically using (<ref>) and (<ref>): ρ_eq(z) = max_i{-2/b^2π t_i(z)}. Plots generated using (<ref>) and eigendecompositions of matrices sampled from the distribution of H' are given in Figure <ref> and show good agreement between the two. Note the three different forms: single component support, two component support and the transition point between the two, according to the various parameters. In these plots, the larger lobes on the left correspond to the upper left block, which is much larger than the lower-right block (since κ=0.9 here). One can see this by considering large x_1, for which there must be a body of eigenvalues in the region of -x_1 owing to the upper left block. Since x_1 only features in the upper-left block, not all of the eigenvalues can be located around -x_1, and the remainder are found in the other lobe of the density which is around 0 in Figure <ref>. 
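The density (<ref>) is easy to evaluate in practice: for each z one builds the quartic (<ref>), finds its roots numerically, and applies (<ref>). The sketch below does exactly this; the parameter values are arbitrary choices for illustration, the grid is assumed to cover the support, and the normalisation check at the end is only a sanity check on the branch selection. One can additionally overlay a histogram of eigenvalues of sampled H', as in Figure <ref>, provided the GOE normalisation convention of the text is matched.

```python
import numpy as np

b, b1, kappa, x1 = 1.0, 0.5, 0.9, 1.0    # illustrative parameter values
kp = 1.0 - kappa                          # kappa'

def rho_eq(z):
    """Limiting spectral density of H' at z, via the roots of the quartic in t."""
    # (t^2 - z t - kp*b^2) * ((1 + b1^2/(kappa*b^2)) t^2
    #     - (b1^2 z/(kappa*b^2) - x1) t - kp*b1^2/kappa) + b^2*kappa*t^2 = 0
    p1 = [1.0, -z, -kp * b**2]
    p2 = [1.0 + b1**2 / (kappa * b**2),
          -(b1**2 * z / (kappa * b**2) - x1),
          -kp * b1**2 / kappa]
    quartic = np.polyadd(np.polymul(p1, p2), [b**2 * kappa, 0.0, 0.0])
    t_roots = np.roots(quartic)
    # at most one branch gives a positive density; real roots contribute zero
    return max(0.0, float(np.max(-2.0 / (b**2 * np.pi) * t_roots.imag)))

zs = np.linspace(-4.0, 4.0, 801)
dens = np.array([rho_eq(z) for z in zs])
print("total mass:", np.trapz(dens, zs))  # should be close to 1 if the grid covers the support
```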
§ THE ASYMPTOTIC COMPLEXITY In the previous section, we have found the equilibrium measure, μ_eq, of the ensemble of random matrices H' = bM + b_1([ M_1 0; 0 0 ]) - x_1([ I 0; 0 0 ]),    M∼ GOE^N,   M_1∼ GOE^κ N. The Coulomb gas approximation gives us a method of computing 𝔼 |(H'-x)|: 𝔼 |(H'-x)| ≈exp{N∫log|z - x| dμ_eq(z)}. We have access to the density of μ_eq pointwise (in x and x_1) numerically, and so (<ref>) is a matter of one-dimensional quadrature. Recalling (<ref>), we then have 𝔼 C_N ≈ K_N'∬_B dxdx_1  exp{-(N-2)( 1/2s^2x^2 + 1/2s_1^2 (x_1)^2 - ∫log|z - x| dμ_eq(z) )}≡ K_N'∬_B dxdx_1   e^-(N-2)Φ(x, x_1) where K_N' = K_N N-2/2π s_1^2N-2/2π s^2. Due to Lemma <ref>, the constant term has asymptotic form 1/Nlog K_N' ∼ 1/2log2 + 1/2logπ - κ/2log(p + σ_z^22^p+q(p+q)) - κ'/2log(σ_z^2(p+q) 2^p+q) - κ/2logκ - κ'/2logκ' ≡ K We then define the desired Θ(u_D, u_G) as lim1/Nlog𝔼C_N = Θ(u_D, u_G) and we have Θ(u_D, u_G) = K - min_B Φ. Using these numerical methods, we obtain the plot of Φ in B and a plot of Θ for some example p,q,σ_z, κ values, shown in Figures <ref>, <ref>. Numerically obtaining the maximum of Φ on B is not as onerous as it may appear, since -Φ grows quadratically in |x|, |x_1| at moderate distances from the origin. We numerically verify the legitimacy of this Coulomb point approximation with Monte Carlo integration 𝔼|(H'-x)| ≈1/n∑_i=1^n ∏_j=1^N |λ_j^(i) - x|, where λ^(i)_j is the j-th eigenvalues of the i-th i.i.d. sample from the distribution of H'. The results, comparing N^-1log𝔼|(H'-x)| at N=50 for a variety of x,x_1 are show in Figure <ref>. Note the strong agreement even at such modest N, however to rigorously substantiate the Coulomb gas approximation in (<ref>), we must prove a concentration result. Let (H_N)_N=1^∞ be a sequence of random matrices, where for each N H_N d= bM + b_1([ M_1 0; 0 0 ]) - x_1([ I 0; 0 0 ]) and M∼ GOE^N, M_1∼ GOE^κ N. Let μ_N be the empirical spectral measure of H_N and say μ_N→μ_eq weakly almost surely. Then for any (x, x_1)∈ℝ^2 𝔼 |(H_N-xI)| = exp{N (1 + o(1))∫log|z - x| dμ_eq(z)} as N→∞. We begin by establishing an upper bound. Take any β>0, then ∫log|z-x| dμ_N(z) = ∫log|z-x| {|x-z|≥ e^β} dμ_N(z) +∫log|z-x| {log|x-z| < β} dμ_N(z) ≤ ∫log|z-x| {|x-z|≥ e^β} dμ_N(z) +∫min(log|x-z|, β) dμ_N(z). Take also any α>0, then trivially ∫min(log|x-z|, β) dμ_N(z) ≤∫max(-α, min(log|x-z|, β)) dμ_N(z). Overall we have, for any α, β > 0, exp{N∫log|z-x| dμ_N(z) } ≤ exp{ N∫log|z-x| {|x-z|≥ e^β} dμ_N(z) } exp{ N ∫max(-α, min(log|x-z|, β)) dμ_N(z) }. Thence an application of Hölder's inequality gives | (H_N-xI)| = [exp{N∫log|z-x| dμ_N(z) }] ≤([exp{2N∫max(-α, min(log|x-z|, β)) dμ_N(z)}])^1/2_A_N        ([exp{2N∫log|x-z|{|x-z|≥ e^β}dμ_N(z)}])^1/2_B_N. Considering B_N, we have log|x-z| {|x-z| ≥ e^β}≤ |x-z|^1/2{|x-z| ≥ e^β}≤ e^-β/2|x-z| and so [exp{2N∫log|x-z|{|x-z|≥ e^β}}] ≤[exp{2N e^-β/2 |H_N - xI|/N}] = [exp{2e^-β/2 |H_N - xI|}]. The entries of H_N are Gaussians with variance 1/Nb^2, 1/2Nb^2, 1/N(b^2 + b_1^2) or 1/2N(b^2+b_1^2) and all the diagonal and upper diagonal entries are independent. All of these variances are 𝒪(N^-1), so |H_N - x|_ij≤ |x| + |x_1| + 𝒪(N^-1/2)|X_ij| where the X_ij are i.i.d. standard Gaussians for i≤ j. It follows that [exp{2e^-β/2 |H_N- xI|}] ≤ e^2e^-β/2N(|x| + |x_1|)_X∼𝒩(0,1) e^2e^-β/2𝒪(N^1/2)|X|. Elementary calculations give _X∼𝒩(0,1) e^c |X|≤1/2(e^-c^2 + e^c^2) ≤ e^c^2 and so [exp{2e^-β/2 |H_N- xI|}] ≤ e^2e^-β/2N(|x| + |x_1|) e^4e^-β𝒪(N) = exp{2N(e^-β/2(|x| + |x_1|) + e^-β𝒪(1))} thus when we take β→∞, we have B_N ≤ e^o(N). 
Considering A_N, it is sufficient now to show [exp{ 2N ∫ f(z) dμ_N(z)}] = exp{2N(∫ f(z) dμ_eq(z) + o(1))} where f(z) = 2max(min(log|x-z|, β), -α), a continuous and bounded function. For any ϵ>0, we have [exp{2N∫ f(z) dμ_N(z)}] ≤ exp{2N(∫ f(z) dμ_eq(z) + ϵ)} + e^2N||f||_∞ℙ(∫ f(z) dμ_N(z) ≥∫ f(z)dμ_eq(z) + ϵ). The entries of H_N are Gaussian with 𝒪(N^-1) variance and so obey a log-Sobolev inequality as required by Theorem 1.5 from <cit.>. The constant, c, in the inequality is independent of N, x, x_1, so we need not compute it exactly. The theorem from <cit.> then gives ℙ(∫ f(z) dμ_N(z) ≥∫ f(z)dμ_eq(z) + ϵ) ≤exp{-N^2/8cϵ^2}. We have shown |(H_N - xI)| ≤ A_NB_N ≤exp{N(1 + o(1))(∫ f(z)dμ_eq(z))} ≤exp{N(1 + o(1))(∫log|x-z|dμ_eq(z))}. We now need to establish a complimentary lower bound to complete the proof. By Jensen's inequality |(H_N-x)| ≥exp(N𝔼[∫log|z-x| dμ_N(z)]) ≥exp(N𝔼[∫max(-α, log|z-x|) dμ_N(z)]) exp(N[∫log|z-x| {|z-x| ≤ e^-α}dμ_N(z)]) ≥exp(N𝔼[∫min(β, max(-α, log|z-x|)) dμ_N(z)])      exp(N[∫log|z-x| {|z-x| ≤ e^-α}dμ_N(z)]) for any α, β >0. Convergence in law of μ_N to μ_eq and the dominated convergence theorem give exp(N𝔼[∫min(β, max(-α, log|z-x|)) dμ_N(z)]) ≥exp{N (∫log|x-z| dμ_eq(z) + o(1))} for large enough β, because μ_eq has compact support. It remains to show that the expectation inside the exponent in the second term of (<ref>) converges to zero uniformly in N in the limit α→∞. By (<ref>), it is sufficient to consider ⟨ G_N(z)⟩, which is computed via (<ref>). Let us define the function Ψ so that ⟨ G_N(z) ⟩ = 2/b^2∫ dσ dσ[1] (z-iσ_FF) e^-NΨ(σ, σ[1]). Henceforth, σ_FF^*, σ_FF[1]^*, σ_BB^*, σ_BB[1]^* are the solution to the saddle point equations (<ref>-<ref>) and σ̃_FF, σ̃_FF[1], σ̃_BB, σ̃_BB[1] are integration variables. Around the saddle point z - iσ_FF = z - iσ_FF^* - iN^-1/rσ̃_FF for some r≥ 2. We use the notation σ⃗ for (σ_BB, σ_BB[1], σ_FF, σ_FF[1]) and similarly σ⃗_BB, σ⃗_FF. A superscript asterisk on Ψ or any of its derivatives is short hand for evaluation at the saddle point. While the Hessian of Ψ may not in general vanish at the saddle point, ∫ dσ̃dσ̃[1] σ̃_FF e^-N σ̃⃗̃^T ∇^2 Ψ^* σ̃⃗̃ = 0 and so we must go to at least the cubic term in the expansion of Ψ around the saddle point, i.e. ⟨ G_N(z) ⟩ = G(z) - 2i/b^2 N^5/3∫_-∞^∞ dσ̃⃗̃_BB dσ̃⃗̃_FFσ̃_FF e^-1/6σ̃^i σ̃^j σ̃^k ∂_ijkΨ^*_E(z; x_1) + exponentially smaller terms. The bosonic (BB) and fermionic (FF) coordinates do not interact, so we can consider derivatives of Φ as block tensors. Simple differentiation gives (∇Ψ)_B = ([ 2/b^2σ_BB - κ(σ_BB + σ_BB[1] + z + x_1)^-1 - κ'(σ_BB + z)^-1; 2/b_1^2σ_BB[1] - (σ_BB + σ_BB[1] + z + x_1)^-1 ]) (∇^2Ψ)_B = ([ κ(σ_BB + σ_BB[1] + z + x_1)^-2 + κ'(σ_BB + z)^-2 κ(σ_BB + σ_BB[1] + z + x_1)^-2; (σ_BB + σ_BB[1] + z + x_1)^-2 (σ_BB + σ_BB[1] + z + x_1)^-2 ]) (∇^3Ψ)_B^* = (([ A_Bκ + B_Bκ' A_Bκ; A_B A_B ]), A_B([ κ κ; 1 1 ]) ), where A_B = -2/(σ_BB^* + σ_BB^*[1] + z + x_1)^3,     B_B= -2/(σ_BB^* + z)^3. (∇^3Ψ)_F^* follows similarly with A_F = -2/(σ_FF^* + σ_FF^*[1] - iz - ix_1)^3,     B_F= -2/(σ_FF^* - iz)^3. By the saddle point equations (<ref>)-(<ref>) we have A_B = 2(σ_BB[1]^*)^3,    B_B = 2/(κ')^3(2κ/b_1^2σ_BB[1]^* - 2/b^2σ_BB^*)^3 A_F = 2(σ_FF[1]^*)^3,    B_F = 2/(κ')^3(2κ/b_1^2σ_FF[1]^* - 2/b^2σ_FF^*)^3. Let ξ_1= σ̃_BB, ξ_2 =σ̃_BB[1]. Then ( σ̃^i σ̃^j σ̃^k ∂_ijkΦ^*)_B = (A_Bκ + B_Bκ')ξ_1^3 + A_B(2κ +1 ) ξ_1^2ξ_2[1] + A_B(κ +2 ) ξ_1ξ_2^2 + A_Bξ_2^3 = A_B[ξ_2^3 + (2κ +1 )ξ_2ξ_1^2 +(2+ κ)ξ_1ξ_2^2 + Cξ_1^3] + (B_Bκ' + A_Bκ - CA_B)ξ_1^3 for any C. 
Let ξ_1 = a_1ξ_1' and then choose C = a_1^-3 and a_1 = (2+κ)(2κ + 1)^-1 to give ( σ̃^i σ̃^j σ̃^k ∂_ijkΦ^*)_B = A_B(ξ_1' + ξ_2)^3 + (B_Bκ' + A_Bκ - CA_B)a_1^3(ξ_1')^3 ≡ A_Bη^3 + D_Bξ^3 with η = ξ_1' + ξ_2, ξ=ξ_1', D_B=B_Bκ' + A_Bκ - a_1^-3A_B. The expressions for ( σ̃^i σ̃^j σ̃^k ∂_ijkΦ^*)_F follow identically. We thus have E(z;x_1) ∝(∫_0^∞ dξ ξ∫_ξ^∞dη  e^A_Fη^3 + D_Fξ^3)(∫_0^∞ dξ ∫_ξ^∞dη  e^A_Bη^3 + D_Bξ^3) or perhaps with the the integration ranges reversed depending on the signs of A_F, A_B, D_F, D_B. We have |E(z; x_1)| ≤|∫_0^∞ dξ ξ∫_ξ^∞dη  e^A_Fη^3 + D_Fξ^3|·|∫_0^∞ dξ ∫_ξ^∞dη  e^A_Bη^3 + D_Bξ^3| ≤∫_0^∞ dξ ξ∫_ξ^∞dη | e^A_Fη^3 + D_Fξ^3|·∫_0^∞ dξ ∫_ξ^∞dη | e^A_Bη^3 + D_Bξ^3| ≤∫_0^∞ dξ ξ∫_0^∞dη | e^A_Fη^3 + D_Fξ^3|·∫_0^∞ dξ ∫_0^∞dη | e^A_Bη^3 + D_Bξ^3| ≤(|𝔐 D_F|)^-2/3(|𝔐 A_F|)^-1/3(|𝔐 D_B|)^-1/3(|𝔐 A_B|)^-1/3(∫_0^∞ e^-ξ^3dξ)^3 (∫_0^∞ ξ e^-ξ^3dξ) where we have defined 𝔐 y = y    if y ≠ 0, y    if y = 0. This last bound follows from a standard Cauchy rotation of integration contour if any of D_F, A_F, D_B, A_B has vanishing real part. (<ref>) is valid for D_B, A_B, D_F, A_F ≠ 0, but if D_B=0 and A_B≠ 0, then the preceding calculations are simplified and we still obtain an upper bound but proportional to (|𝔐 A_B|)^-1/3. Similarly with A_B=0 and D_B≠ 0 and similarly for A_F, D_F. The only remaining cases are A_B = D_B =0 or A_F = D_F =0. But recall (<ref>) and (<ref>)-(<ref>). We immediately see that A_F=D_F if and only if σ_FF=σ_FF[1]=0, which occurs for no finite z, x_1. Therefore, for fixed (x, x_1)∈ℝ^2, α > 0 and any z∈ (x-e^-α, x + e^-α) |𝔼μ_N(z) - μ_eq(z; x_1) | ≲ N^-5/3 C(x_1, |x| + e^-α) where C(|x_1|, |x| + e^-α) is positive and is decreasing in α. Since μ_eq is bounded, it follows that 𝔼μ_N is bounded, and therefore 𝔼∫log|z-x| {|z-x| ≤ e^-α} dμ_N(z) → 0 as α→∞ uniformly in N, and so the lower bound is completed. Equipped with this result, we can now prove the legitimacy of the Coulomb gas approximation in our complexity calculation. The proof will require an elementary intermediate result which has undoubtedly appeared in various places before, but we prove it here anyway for the avoidance of doubt. Let M_N be a random N× N symmetric real matrix with independent centred Gaussian upper-diagonal and diagonal entries. Suppose that the variances of the entries are bounded above by cN^-1 for some constant c>0. Then there exists some constant c_e such that ||M_N||_max^N ≲ e^c_eN. Let σ_ij^2 denote the variance of M_ij. Then 𝔼||M||_max^N ≤∑_i,j|M_i,j|^N = ∑_i,j |𝒩(0, σ_ij^2)|^N = ∑_i,jσ_ij^N 𝔼|𝒩(0,1)|^N ≤ N^2c^N/2 N^-N/2 |𝒩(0,1)|^N. Simple integration with a change of variables gives |𝒩(0,1)|^N = 2^N+1/2Γ(N+1/2) and then, for large enough N, Stirling's formula gives |𝒩(0,1)|^N ∼ 2^N+1/2π(N+1)(N+1/2e)^N-1/2 ∼ 2π e^-N-1/2 N^N/2(N+1/N)^N/2 ∼ 2π e N^N/2. So finally 𝔼||M||_max^N ≲ N^2c^N/2 = e^1/2Nlogc+ 2logN≤ e^(1/2logc + 2)N, so defining c_e = 1/2log2 + 2 gives the result. For any x_1∈ℝ, let H_N be a random N× N matrix distributed as in the statement of Lemma <ref>. Then as N→∞ ∬_B dxdx_1  exp{-N(1/2s^2x^2 + 1/2s_1^2 (x_1)^2)}𝔼|(H_N(x_1) - x)| = ∬_B dxdx_1  exp{-N(1/2s^2x^2 + 1/2s_1^2 (x_1)^2 - ∫log|z - x| dμ_eq(z) + o(1) )} +o(1). Let R > 0 be some constant, independent of N. Introduce the notation B_≤ R = B∩{z⃗∈ℝ^2 | |z|≤ R}, and then |∬_B dxdx_1  exp{-N(1/2s^2x^2 + 1/2s_1^2 (x_1)^2)}𝔼|(H_N(x_1) - x)| - ∬_B_≤ R dxdx_1  exp{-N(1/2s^2x^2 + 1/2s_1^2 (x_1)^2)}𝔼|(H_N(x_1) - x)|| ≤ ∬_||x⃗||≥ R dxdx_1  exp{-N(1/2s^2x^2 + 1/2s_1^2 (x_1)^2)}𝔼|(H_N(x_1) - x)|. 
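Before stating the lemma that transfers these estimates to the complexity integral, we note that the content of Lemma <ref> — replacing 𝔼|det(H_N - xI)| by the exponential of N times a log-potential integral — can be checked numerically at modest N, in the spirit of Figure <ref>. The sketch below compares a Monte Carlo estimate of N^{-1}log𝔼|det(H_N - xI)| with the sample average of ∫log|z-x| dμ_N; the GOE sampling scale used (off-diagonal variance 1/(2N)) is an assumption of the illustration, but since both sides are computed from the same samples the self-consistency check does not depend on that choice.

```python
import numpy as np

rng = np.random.default_rng(2)
b, b1, kappa, x1, x = 1.0, 0.5, 0.9, 1.0, 0.3   # illustrative values
N, n_samples = 50, 200

def goe(n):
    # GOE sample; the 1/sqrt(2n) scale is an assumed convention for this check
    A = rng.standard_normal((n, n)) / np.sqrt(2.0 * n)
    return (A + A.T) / np.sqrt(2.0)

def sample_H():
    ND = int(kappa * N)
    H = b * goe(N)
    H[:ND, :ND] += b1 * goe(ND) - x1 * np.eye(ND)
    return H

spectra = np.array([np.linalg.eigvalsh(sample_H()) for _ in range(n_samples)])
log_dets = np.sum(np.log(np.abs(spectra - x)), axis=1)   # = N * integral of log|z-x| d mu_N

# N^{-1} log of the Monte Carlo average of |det(H - xI)| (computed stably)
lhs = (np.log(np.mean(np.exp(log_dets - log_dets.max()))) + log_dets.max()) / N
# sample average of the log-potential integral against the empirical measure
rhs = np.mean(log_dets) / N

print(lhs, rhs)   # these agree up to o(1) as N grows, which is the content of the lemma
```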
We have the upper bound (<ref>) of Lemma <ref> but this cannot be directly applied to (<ref>) since the bound relies on uniformity in x, x_1 which can only be established for bounded x, x_1. We use a much cruder bound instead. First, let J_N = H_N + x_1 ([ I 0; 0 0 ]) and then |(H_N - xI)| ≤ ||J_N||_max^N max{|x|, |x_1|}^N = ||J_N||_max^N exp(Nmax{log|x|, log|x_1|}). J_N has centred Gaussian entries with variance 𝒪(N^-1), so Lemma <ref> applies, and we find |(H_N - xI)| ≲exp(Nmax{log|x|, log|x_1|}) e^c_e N for some constant c_e>0 which is independent of x, x_1 and N, but we need not compute it. Now we have |∬_B dxdx_1  exp{-N(1/2s^2x^2 + 1/2s_1^2 (x_1)^2)}𝔼|(H_N(x_1) - x)| - ∬_B_≤ R dxdx_1  exp{-N(1/2s^2x^2 + 1/2s_1^2 (x_1)^2)}𝔼|(H_N(x_1) - x)|| ≲ ∬_||x⃗||≥ R dxdx_1  exp{-N(1/2s^2x^2 + 1/2s_1^2 (x_1)^2 - max{log|x|, log|x_1|} - c_e)}. But, since μ_eq is bounded and has compact support, we can choose R large enough (independent of N) so that 1/2s^2x^2 + 1/2s_1^2 (x_1)^2 - max{log|x|, log|x_1|} - c_e> L > 0 for all (x, x_1) with x^2 + x_1^2 > R and for some fixed L independent of N. Whence |∬_B dxdx_1  exp{-N(1/2s^2x^2 + 1/2s_1^2 (x_1)^2)}𝔼|(H_N(x_1) - x)| - ∬_B_≤ R dxdx_1  exp{-N(1/2s^2x^2 + 1/2s_1^2 (x_1)^2)}𝔼|(H_N(x_1) - x)|| ≲ N^-1e^-NL→ 0 as N→∞. Finally, for x, x_1 in B_≤ R, the result of the Lemma <ref> holds uniformly in x, x_1, so ∬_B_≤ R dxdx_1  exp{-N(1/2s^2x^2 + 1/2s_1^2 (x_1)^2)}𝔼|(H_N(x_1) - x)| = ∬_B_≤ R dxdx_1  exp{-N(1/2s^2x^2 + 1/2s_1^2 (x_1)^2 - ∫log|z-x| dμ_eq(z; x_1)+ o(1))}. The result follows from (<ref>), (<ref>) and the triangle inequality. §.§ Asymptotic complexity with prescribed Hessian index Recall the complexity defined in (<ref>): C_N, k_D, k_G =|{∈ S^N_D , ∈ S^N_G  :  ∇_D = 0, ∇_G = 0, ∈ B_D, ∈ B_G i(∇_D^2 L^(D)) = k_D,  i(∇_G^2 L^(G)) = k_G }|.<ref> The extra Hessian signature conditions in (<ref>) enforce that both generator and discriminator are at low-index saddle points. Our method for computing the complexity C_N in the previous subsection relies on the Coulomb gas approximation applied to the spectrum of H'. However, the Hessian index constraints are formulated in the natural Hessian matrix (<ref>), but our spectral calculations proceed from the rewritten form (<ref>). We find however that we can indeed proceed much as in Chapter <ref>. Recall the key Hessian matrix H̃ given in (<ref>) by H̃= ([ 2(N_D - 1)b^2 + b_1^2M^(D) -bG; b G^T 2(N_G-1)bM^(G) ]) -N-2x ([ -I_N_D 0; 0 I_N_G ]) + N-2x_1([ I_N_D 0; 0 0 ]) where M^(D)∼ GOE^N_D -1, M^(G)∼ GOE^N_G-1, G is N_D - 1 × N_G - 1 Ginibre, and all are independent. Note that we have used (<ref>) to slightly rewrite (<ref>). We must address the problem of computing 𝔼|H̃|1{ i(κ(1 + 𝒪(N^-1))b^2 + b_1^2M_D + x+x_1/2) = k_D,   i(κ'(1 + 𝒪(N^-1))bM_G - x/2) = k_G}. Indeed, we introduce integration variables y⃗_1, y⃗_2, ζ_1, ζ_1^*, ζ_2,ζ_2^*, being (N-2)-vectors of commuting and anti-commuting variables respectively. Use [t] notation to split all vectors into the first κ N -1 and last κ'N-1 components. Let A[t] = y⃗_1y⃗_1^T + y⃗_2y⃗_2^T + ζ_1ζ_1^† + ζ_2ζ_2^†. With these definitions, we have (recalling Chapter <ref>) |H̃| = (2(N-2))^N-2/2lim_ϵ↘ 0∫ dΞ exp{-iκ(1 + 𝒪(N^-1)) b^2 + b_1^2 M^(D) A[1] -iκ'(1 + 𝒪(N^-1)) b M^(G) A[2] } exp{𝒪(ϵ)}exp{…} where dΞ is the normalised measure of the y⃗_1, y⃗_2, ζ_1, ζ_1^*, ζ_2,ζ_2^* and the ellipsis represents terms with no dependence on M^(D) or M^(G), which we need not write down. 
The crux of the matter is that we must compute 𝔼_M^(D)e^-iκb^2 + b_1^2 M^(D) A[1]1{i(M_D + x + x_1/κb^2 + b_1^2(1 + 𝒪(N^-1))) = k_D}, 𝔼_M^(G)e^-iκ' b M^(G) A[2]1{i(M_G - x/κ'b(1 + 𝒪(N^-1))) = k_G}, but in Chapter <ref> we performed exactly these calculations (see around (<ref>)) and so there exist constants K^(D)_U,K^(D)_L, K^(G)_U,K^(G)_L such that K^(D)_L e^-Nk_Dκ(1 + o(1)) I_1(x̂_D; 2) e^-1/2N(b^2 + b_1^2) A[1]^2 ≤ 𝔼_M^(D)e^-iκb^2 + b_1^2 M^(D) A[1]1{i(M_D + x + x_1/κb^2 + b_1^2(1 + 𝒪(N^-1))) = k_D} ≤ K^(D)_U e^-Nk_Dκ(1 + o(1)) I_1(x̂_D; 2) e^-1/2N(b^2 + b_1^2) A[1]^2 and K^(G)_L e^-Nk_Gκ'(1 + o(1)) I_1(x̂_G; 2) e^-1/2Nb^2 A[2]^2 ≤ 𝔼_M^(G)e^-iκ' b M^(G) A[2]1{i(M_G - x/κ'b(1 + 𝒪(N^-1))) = k_G} ≤ K^(G)_U e^-Nk_Gκ'(1 + o(1)) I_1(x̂_G; 2) e^-1/2Nb^2 A[2]^2 where x̂_D = -x + x_1/κb^2 + b_1^2,   x̂_G = x/κ'b. Here I_1 is the rate function of the largest eigenvalue of the GOE as obtained in <cit.> and used in <cit.> and Chapter <ref>: I_1(u; E) = 2/E^2∫_u^-Ez^2 - E^2dz   for u < -E, 2/E^2∫_E^u z^2 - E^2dz   for u > E, ∞ for |u| < E. Note that for u< -E I_1(u; E) = -u/Eu^2 - E^2 - log(-u + u^2 - E^2) + logE and for u>E we simply have I_1(u; E) = I_1(-u; E). Note also that I_1(ru; E) = I_1(u, E/r). We have successfully dealt with the Hessian index indicators inside the expectation, however we need some way of returning to the form of H̃ in (<ref>) so the complexity calculations using the Coulomb gas approach can proceed as before. We can achieve this with inverse Fourier transforms: e^-1/2N(b^2 + b_1^2) A[1]^2 = 𝔼_M_De^-iκb^2 + b_1^2 M_DA[1] e^-1/2Nb^2 A[2]^2 = 𝔼_M_Ge^-iκ'b M_GA[2] from which we obtain K_Le^-Nk_Dκ(1 + o(1)) I_1(x̂_D; 2) e^-Nk_Gκ'(1 + o(1)) I_1(x̂_G; 2)𝔼|H̃| ≤ 𝔼|H̃|1{ i(κ(1 + 𝒪(N^-1))b^2 + b_1^2M_D + x+x_1/2) = k_D,   i(κ'(1 + 𝒪(N^-1))bM_G - x/2) = k_G} ≤ K_Ue^-Nk_Dκ(1 + o(1)) I_1(x̂_D; 2) e^-Nk_Gκ'(1 + o(1)) I_1(x̂_G; 2)𝔼|H̃|. It follows that K_N'∬_B dx dx_1 e^-(N-2)[ Φ(x, x_1) + k_Gκ' I_1(x; 2κ'b) + k_Dκ I_1(( - (x + x_1); 2κ(b^2 + b_1^2))](1 + o(1)) ≲ C_N, k_D, k_G ≲ K_N' ∬_B dx dx_1 e^-(N-2)[ Φ(x, x_1) + k_Gκ' I_1(x; 2κ'b) + k_Dκ I_1(( - (x + x_1); 2κ(b^2 + b_1^2))](1 + o(1)). So we see that the relevant exponent in this case is the same as for C_N but with additional GOE eigenvalue large deviation terms, giving the complexity limit lim1/Nlog𝔼C_N, k_D, k_G = Θ_k_D, k_G(u_D, u_G) = K - min_B {Φ + k_Gκ' I_1(x; 2κ'b) + k_Dκ I_1( - (x + x_1); 2κ(b^2 + b_1^2))}. Plots of Θ_k_D, k_G for a few values of k_D, k_G are shown in Figure <ref>. Recall that the limiting spectral measure of the Hessian displays a transition as the support splits from one component to two, as shown in Figure <ref>. Let us comment on the relevance of this feature to the complexity. The spectral measure appears in one place in the above complexity calculations: the Coulomb gas integral ∫ dμ_eq(z) log|z - x|. The effect of integrating against the measure μ_eq is to smooth out the transition point. In other words, if μ_eq has two components or is at the transition point, one expects to be able to construct another measure ν supported on a single component such that ∫ dν(z) log|z - x| = ∫ dμ_eq(z) log|z - x|. We interpret this to mean that the Coulomb gas integral term does not display any features that can be unambiguously attributed to the transition behaviour of the spectral measure. § IMPLICATIONS §.§ Structure of low-index critical points We examine the fine structure of the low-index critical points for both spin glasses. 
<cit.> used the `banded structure' of low-index critical points to explain the effectiveness of gradient descent in large multi-layer perceptron neural networks. We undertake to uncover the analogous structure in our dual spin-glass model and thence offer explanations for GAN training dynamics with gradient descent. For a range of (k_D, k_G) values, starting at (0, 0), we compute Θ_k_D, k_G on an appropriate domain. In the (u_D, u_G) plane, we then find the maximum k_D, and separately k_G, such that Θ_k_D, k_G(u_D, u_G)>0. In the large N limit, this procedure reveals the regions in the (u_D, u_G) plane where critical points of each index of the two spin glasses are found. Figure <ref> plots these maximum k_D, k_G values as contours on a shared (u_D, u_G) plane. The grey region in the plot clearly shows the `ground state' boundary beyond which no critical points exist. We use some fixed values of the various parameters: p=q=3, σ_z=1, κ=0.9. These plots reveal, unsurprisingly perhaps, that something resembling the banded structure of <cit.> is present, with the higher index critical points being limited to higher loss values for each network. The 2-dimensional analogues of the E_∞ boundary of <cit.> are evident in the bunching of the k_D, k_G contours at higher values. There is, however further structure not present in the single spin-glass multi-layer perceptron model. Consider the contour of k_D = 0 at the bottom of the full contour plot in Figure <ref>. Imagine traversing a path near this contour from right to left (decreasing u_D values); an example path is approximately indicated by a black arrow on the figure. At all points along such a path, the only critical points present are exact local minima for both networks, however the losses range over * low generator loss, high discriminator loss; * some balance between generator and discriminator loss; * high generator loss, low discriminator loss. These three states correspond qualitatively to known GAN phenomena: * discriminator collapses to predicting `real' for all items; * successfully trained model; * generator collapses to producing garbage samples which the discriminator trivially identifies. Overall, the analysis of our model reveals a loss surface that favours convergence to states of low loss for at least one of the networks, but not necessarily both. Moreover, our plots of Θ and Θ_k_D, k_G in Figures <ref>, <ref> demonstrate clearly the competition between the two networks, with the minimum attainable discriminator loss increasing as the generator loss decreases and vice-versa. We thus have a qualitative similarity between the minimax dynamics of real GANs and our model, but also a new two-dimensional banded critical points structure. We can further illuminate the structure by plotting, for each (u_D, u_G), the approximate proportion of minima with both L_D ≤ u_D and L_G≤ u_G out of all points where at at least one of those conditions holds. The expression is Θ(u_D, u_G) - max{Θ(u_D, ∞), Θ(∞, u_G)} which gives the log of the ratio in units of N. We show the plot in Figure <ref>. Note that, for large N, any region of the plot away from a value of zero contains exponentially more bad minima – where one of the networks has collapsed – than good minima, with equilibrium between the networks. The model therefore predicts the existence of good local minima (in the bottom left of Figure <ref>) that are effectively inaccessible due to their being exponentially outnumbered by bad local minima. 
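The contour maps and the ratio (<ref>) are straightforward to produce once Θ and Θ_{k_D,k_G} can be evaluated numerically on a grid. The sketch below assumes a callable Theta(u_D, u_G, k_D, k_G) built from the quadrature of the preceding sections; that callable, the grid ranges, and the convention of holding the other index at zero while scanning one of them are all assumptions of the illustration.

```python
import numpy as np

def max_positive_index(Theta, u_D_grid, u_G_grid, k_max=20):
    """For each (u_D, u_G), the largest k_D (resp. k_G) with Theta_{k_D,0} > 0
    (resp. Theta_{0,k_G} > 0); -1 marks points where no critical points exist."""
    kD_map = np.full((len(u_D_grid), len(u_G_grid)), -1, dtype=int)
    kG_map = np.full_like(kD_map, -1)
    for i, uD in enumerate(u_D_grid):
        for j, uG in enumerate(u_G_grid):
            for k in range(k_max + 1):
                if Theta(uD, uG, k_D=k, k_G=0) > 0:
                    kD_map[i, j] = k
                if Theta(uD, uG, k_D=0, k_G=k) > 0:
                    kG_map[i, j] = k
    return kD_map, kG_map
```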
The structure revealed by our analysis offers the following explanation of large GAN training dynamics with gradient descent: * As with single feed-forward networks, the loss surface geometry encourages convergence to globally low values of at least one of the network losses. * The same favourable geometry encourages convergence to successful states, where both networks achieve reasonably low loss, but also encourages convergence to failure states, where the generator's samples are too easily distinguished by the discriminator, or the discriminator has entirely failed thus providing no useful training signal to the generator. A natural question in the context of our analysis of low-index critical points is: do such points reflect the points typically reached by gradient descent algorithms used to train real GANs? There has been much discussion in the literature of the analogous question for single networks and spin glasses <cit.>. It is not clear how to settle this question in our case, but we believe our model and its low-index critical points give a description of the baseline properties to be expected of high-dimensional adversarial optimisation problems late in the optimisation procedure. In addition, the unstructured random noise present in spin glasses may be more appropriate in our model for GANs than it is for single spin-glass models of single networks, as GAN generators do genuinely contain unstructured latent noise, rather than just the highly-structured data distributions seen on real data. The issue of meta-stability is also worth mentioning. In single spin glasses, the boundary E_∞ between fixed index and unbounded index critical points is meta-stable <cit.>. From the random matrix theory perspective, the E_∞ boundary corresponds to the left edge of the Wigner semi-circle <cit.>. There are O(N) eigenvalues in any finite interval at the left of the Wigner semi-circle, corresponding to O(N) Hessian eigenvalues in any neighbourhood around zero. The 2D analogue of the E_∞ boundary in our double spin-glass model is expected to possess the same meta-stability: the Wigner semi-circle is replaced by the measure studied in Section <ref>, to which the preceding arguments apply. In the context of deep neural networks, there is a related discussion concerning “wide and flat local optima” of the loss surface, i.e. local optima for which many of the Hessian eigenvalues are close to zero. There are strong indications that deep neural networks converge under gradient-based optimisation to such optima <cit.> and that they are perhaps better for generalisation (i.e. test set loss) than other local optima, however some authors have challenged this view <cit.>. It is beyond the scope of the present work to analyse the role of meta-stability further, however we note that the indications from machine learning are that it is most significant when considering generalisation, however our work simplifies to the case of a single loss rather than separately considering training and test loss. §.§ Hyperparameter effects Our proposed model for GANs includes a few fixed hyperparameters that we expect to control features of the model, namely σ_z and κ. Based on the results of <cit.> and Chapter <ref>, and the form of our analytical results above, we do not expect p and q (the number of layers in the discriminator and generator) to have any interesting effect beyond p, q ≥ 3; this is clearly a limitation of the model. 
We would expect there to exist an optimal value of σ_z that would result in minimum loss, in some sense. The effect of κ is less clear, though we guess that, in the studied N→∞ limit, all κ∈(0, 1) are effectively equivalent. Intuitively, choosing κ=0, 1 corresponds to one network having a negligible number of parameters when compared with the other and we would expect the much larger network to prevail in the minimax game, however our theoretical results above are valid strictly for κ∈ (0,1). In the following two subsections we examine effect of σ_z and κ in our theoretical and in real experiments with a DCGAN <cit.>. Additional supporting plots are given in the appendix. §.§.§ Effect of variance ratio In the definition of complexity, u_D and u_G are upper bounds on the loss of the discriminator and generator, respectively. We are interested in the region of the u_D,u_G plane such that Θ(u_D, u_G)>0, this being the region where gradient descent algorithms are expected to become trapped. We therefore investigate the minimum loss such that Θ > 0, this being, for a given σ_z, the theoretical minimum loss attainable by the GAN. We consider two natural notions of loss: * ϑ_D = min{u_D∈ℝ|∃ u_G∈ℝ : Θ(u_D, u_G) > 0 }; * ϑ_G = min{u_G∈ℝ|∃ u_D∈ℝ : Θ(u_D, u_G) > 0 }. We vary σ_z over a range of values in (10^-5, 10^2) and compute ϑ_D, ϑ_G. To compare the theoretical predictions of the effect of σ_z to real GANs, we perform a simple set of experiments. We use a DCGAN architecture <cit.> with 5 layers in each network, using the reference PyTorch implementation from <cit.>, however we introduce the generator noise scale σ_z. That is, the latent input noise vector z⃗ for the generator is sampled from 𝒩(0, σ_z^2I). For a given σ_z, we train the GANs for 10 epochs on CIFAR10 <cit.> and record the generator and discriminator losses. For each σ_z, we repeat the experiment 30 times and average the minimum attained generator and discriminator losses to account for random variations between runs with the same σ_z. We note that the sample variances of the loss were typically very high, despite the PyTorch random seed being fixed across all runs. We plot the sample means, smoothed with rolling averaging over a short window, in the interest of clearly visualising whatever trends are present. The results are shown in Figure <ref>. There is a striking similarity between the generator plots, with a sharp decline between σ_z=10^-5 and around 10^-3, after which the minimum loss is approximately constant. The picture for the discriminator is less clear. Focusing on the sections σ_z > 10^-3, both plots show a clear minimum, at around σ_z=10^-1 in experiments and σ_z=10^-2 in theory. Note that the scales on the y-axes of these plots should not be considered meaningful. Though there is not precise correspondence between the discriminator curves, we claim that both theory and experiment tell the same qualitative story: increasing σ_z to at least around 10^-3 gives the lowest theoretical generator loss, and then further increasing to, tentatively, some value in (10^-2, 10^-1) gives the lowest possible discriminator loss at no detriment to the generator. We are not aware of σ_z tuning being widely used in practice for real GANs, rather it is typically taken to be unity. We have chosen this parameter, as it can be directly paralleled in our spin glass model, therefore allowing for the above experimental comparison. 
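For reference, ϑ_D and ϑ_G are extracted from a grid of Θ values by a direct scan; a minimal sketch, assuming a precomputed array Theta_grid over ascending grids u_D_grid and u_G_grid, is given below. On the experimental side, the only change to the reference DCGAN code is that the latent vector is drawn as σ_z times a standard normal.

```python
import numpy as np

def loss_thresholds(Theta_grid, u_D_grid, u_G_grid):
    """theta_D = min{u_D : exists u_G with Theta(u_D, u_G) > 0}, and similarly theta_G.
    Theta_grid has shape (len(u_D_grid), len(u_G_grid)); both grids are ascending."""
    pos = Theta_grid > 0
    rows = np.where(pos.any(axis=1))[0]      # u_D indices with some positive complexity
    cols = np.where(pos.any(axis=0))[0]      # u_G indices with some positive complexity
    theta_D = u_D_grid[rows.min()] if rows.size else np.nan
    theta_G = u_G_grid[cols.min()] if cols.size else np.nan
    return theta_D, theta_G

# sweep sigma_z, rebuild Theta_grid for each value, and record (theta_D, theta_G)
```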
Naturally there are other parameters of real GANs that one might wish to study (such as learning rates and batch sizes) however these are much less readily mirrored in the spin glass model and complexity analysis, precluding comparisons between theory and experiment. Nevertheless, the experimental results in Figure <ref> do demonstrate that tuning σ_z in real GANs could be of benefit, as σ_z=1 does not appear to be the optimal value. §.§.§ Effect of size ratio Similarly to the previous section, we can investigate the effect of κ using ϑ_D, ϑ_G while varying κ over (0,1). To achieve this variation in the DCGAN, we vary the number of convolutional filters in each network. The generator and discriminator are essentially mirror images of each other and the number of filters in each intermediate layer are defined as increasing functions[Number of filters in a layer is either proportional to n_D or n_D^2 depending on the layer (and similarly with n_G).] of some positive integers n_G, n_D. We fix n_D + n_G=128 and vary n_D to obtain a range of κ values, with κ = n_d/n_d + n_g. The results are shown in Figure <ref>. The theoretical model predicts a a broad range of equivalently optimal κ values centred on κ=0.5 from the perspective of the discriminator loss, and no effect of κ on the generator loss. The experimental results similarly show a broad range of equivalently optimal κ centred around κ=0.5, however there appear to be deficiencies in our model, particularly for higher κ values. The results of the experiments are intuitively sensible: the generator loss deteriorates for κ closer to 1, i.e. when the discriminator has very many more parameters than the generator, and vice-versa for small κ. § GAUSSIAN HESSIAN CALCULATIONS In this section we give the full details of the Gaussian calculations for the distribution of the Hessian: ([ ∇_D^2 ∇_GD; ∇_DG ∇^2_G ])  | ∇_G =0, ∇_D = 0, ∈ B_D, ∈ B_G. These calculations are routine and consist of repeated application of standard results for conditioning multivariate Gaussians, but the details are nevertheless intricate. Recall the definitions (, ) = () - σ_z(, ) (, ) = σ_z (,) and () = ∑_i_1,…, i_p=1^N_D X_i_1,…, i_p∏_k=1^p _i_k (, ) = ∑_i_1,…, i_p+q=1^N_D + N_G Z_i_1,…, i_p+q∏_k=1^p+q w_i_k for i.i.d. Gaussian X and Z, where w⃗^T = (^T, ^T). As mentioned in the main text, we have spherical symmetry in both and , so it sufficient to consider the distribution (<ref>) around some fixed specific points on the spheres S^N_D and S^N_G. Following <cit.>, we choose the north poles. We can select a coordinate basis around both poles, e.g. with = (1 - u⃗^2, u⃗),     = (1 - v⃗^2, v⃗), for u⃗∈ℝ^N_D-1, v⃗∈ℝ^N_G - 1 with u⃗^2 ≤ 1, v⃗^2 ≤ 1. We need the joint distributions (, _i , _jk),   (, _i , _jk, _l , _mn) where the two groups are independent from of each other. The derivatives , are now Euclidean derivatives with respect to the coordinates u⃗, v⃗. behaves just like a single spin glass, and so we have <cit.>: Var() = 1, Cov(_i , _jk) = 0, _ij | {=x_D} ∼(N_D-1)p(p-1)GOE^N_D - 1 - x_DpI. To find the joint and thence conditional distributions for , we first note that is simply a spin glass on a partitioned vector w⃗^T = (^T, ^T), so Cov((, ), (', ')) = (·' + ·')^p+q from which, by comparing with <cit.>, one can obtain the necessary expressions, at the north poles in a coordinate basis. Practically, one writes ^T = (1 - ∑_ju_j^2, u_1, …, u_N_D -1), and similarly for . Then one takes derivatives of (<ref>) with respect to these new variables around the north poles. 
Finally, one sets =' and takes u_j=0  ∀ j, and similarly for . The resulting expressions are largely familiar from the standard spin glass in <cit.>, except there are extra cross terms between and : Var() = 2^p+q, Cov(_ij, ) = - (p+q)2^p+qδ_ij, Cov(_ij, ) = - (p+q)2^p+qδ_ij, Cov(_ij, _kl) = 2^p+q[ (p+q)(p+q-1)(δ_ikδ_jl + δ_ilδ_jk) + (p+q)^2 δ_ijδ_kl], Cov(_ij, _kl) = 2^p+q (p+q)^2 δ_ijδ_kl, Cov(_i_j, _k_l) = 2^p+q (p+q)(p+q-1) δ_ikδ_jl, Cov(_ij, _k_l) = 0 Cov(_ij, _k_l) = 0, Cov(_i_j , ) = 0. Also, all first derivatives of are clearly independent of and its second derivatives by the same reasoning as in <cit.>. Note that Cov(∂^(D)_i L^(D), ∂^(D)_j L^(D)) = (p + σ_z^2 2^p+q(p+q))δ_ij Cov(∂^(G)_iL^(G), ∂^(G)_j L^(G)) = σ^2_z 2^p+q(p+q)δ_ij Cov(∂^(D)_iL^(D), ∂^(G)_j L^(G)) = 0 and so φ_(∇_D L^(D), ∇_G L^(G))(0) = (2π)^-N-2/2(p + σ_z^22^p+1(p+q))^-N_D - 1/2(σ_z^2 2^p+q (p+q))^-N_G-1/2. We need now to calculate the joint distribution of (_ij, _kl) conditional on { = x_G}. Denote the covariance matrix for (_ij, _kl, ) by Σ = ([ Σ_11 Σ_12; Σ_21 Σ_22 ]) where Σ_11 = 2^p+q([ (p+1)(p+q-1)(1 + δ_ij) + (p+q)^2δ_ij (p+q)^2 δ_ijδ_kl; (p+q)^2 δ_ijδ_kl (p+1)(p+q-1)(1 + δ_kl) + (p+q)^2δ_kl ]), Σ_12 = -2^p+q (p+q)([ δ_ij; δ_kl ]), Σ_21 = -2^p+q (p+q)([ δ_ij δ_kl ]), Σ_22 = 2^p+q. The conditional covariance is then Σ̅ = Σ_11 - Σ_12Σ_22^-1Σ_21 = 2^p+q(p+1)(p+q-1)([ 1 + δ_ij 0; 0 1 + δ_kl ]). Identical reasoning applied to (_ij, _kl, ) and (_ij, _kl, ) shows that, conditional on { = x_G}, ∇_G^2 and ∇_D^2 have independent entries up-to symmetry, so <ref> demonstrates they are independent GOEs and we have: ([ -∇_D^2 -∇_G∇_D; ∇_D∇_G ∇^2 ])  | { = x_G} d=2^p+q+1(p+q)(p+q-1)([ N_D -1M^(D)_1 -2^-1/2G; 2^-1/2G^T N_G - 1M^(G) ])       - (p+q)x_G2^p+1([ -I_N_D 0; 0 I_N_G ]) where M^(D)_1∼ GOE^N_D - 1 and M^(G)∼ GOE^N_G - 1 are independent GOEs and G is an independent N_D - 1 × N_G - 1 Ginibre matrix with entries of unit variance. § CONCLUSION We have contributed a novel model for the study of large neural network gradient descent dynamics with statistical physics techniques, namely an interacting spin-glass model for generative adversarial neural networks. We believe this is the first attempt in the literature to incorporate advanced architectural features of modern neural networks, beyond basic single network multi-layer perceptrons, into such statistical physics style models. We have conducted an asymptotic complexity analysis via Kac-Rice formulae and Random Matrix Theory calculations of the energy surface of this model, acting as a proxy for GAN training loss surfaces of large networks. Our analysis has revealed a banded critical point structure as seen previously for simpler models, explaining the surprising success of gradient descent in such complicated loss surfaces, but with added structural features that offer explanations for the greater difficulty of training GANs compared to single networks. We have used our model to study the effect of some elementary GAN hyper-parameters and compared with experiments training real GANs on a standard computer vision dataset. We believe that the interesting features of our model, and their correspondence with real GANs, are yet further compelling evidence for the role of statistical physics effects in deep learning and the value of studying such models as proxies for real deep learning models, and in particular the value of concocting more sophisticated models that reflect aspects of modern neural network design and practice. 
Our analysis has focused on the annealed complexity of our spin glass model (i.e. taking the logarithm after the expectation) rather than the quenched complexity (i.e. taking the expectation after the logarithm). Ideally one would compute both, as the quenched complexity is often considered to reflect the typical number of stationary points and is bounded above by the annealed complexity. Computing the quenched complexity is typically more challenging than the annealed, and such a calculation for our model could be the subject of further work requiring considerable technical innovations. Even the elegant and very general methods presented recently in <cit.> are restricted to the annealed case. Agreement between annealed and quenched is known only in a few special cases closely related to spherical spin glasses <cit.> and is not expected in general <cit.>. It is conceivable that quenched and annealed complexity agree in the case of our model, as it is closely related to spin glasses and possesses no distinguished directions (i.e. spikes) such as are present in <cit.>. Establishing agreement by existing methods requires analysis of pairs of correlated GOE-like matrices. Such an approach for our model may well require analysis of at least 4 correlated matrices (2 per diagonal block), and quite possibly more, including correlations between blocks. We leave this considerable challenge for future work.

From a mathematical perspective, we have extensively studied the limiting spectral density of a novel random matrix ensemble using supersymmetric methods. During the initial explorations for this work, we made considerable efforts to complete the average absolute value determinant calculations directly using a supersymmetric representation, as seen in Chapter <ref>; however, this turned out to be not only analytically intractable (as expected) but also extremely troublesome numerically (essentially owing to an analytically intractable and highly complicated Riemann sheet structure in ℂ^2). We were able to sidestep these issues by instead using a Coulomb gas approximation, whose validity we have rigorously proved using a novel combination of concentration arguments and supersymmetric asymptotic expansions. We have verified with numerical simulations our derived mean spectral density for the relevant Random Matrix Theory ensemble and also the accuracy of the Coulomb gas approximation.

We hope that future work will be inspired to study further models of neural networks such as those we have considered here. Practically, it would be exciting to explore the possibility of using our insights into GAN loss surfaces to devise algorithmic methods of avoiding training failure. Mathematically, the local spectral statistics of our random matrix ensemble may be interesting to study, particularly around the cusp where the two disjoint components of the limiting spectral density merge.

CHAPTER: GENERALISED LOSS SURFACE MODELS AND IMPLICATIONS

The content of this chapter was published first as a pre-print in July 2021 (<https://arxiv.org/abs/2003.01247v5>) and was accepted in January 2023 as an article in Journal of Machine Learning Research: “Iterate Averaging in the quest for best test error”, Diego Granziol, Nicholas P. Baskerville, Xingchen Wan, Samuel Albanie and Stephen Roberts. The experimental ideas behind this paper were conceived and explored by the other authors before NPB joined the project. NPB developed much of the mathematical theory, including constructing all the proofs.
In this chapter, we include only the mathematical sections of direct relevance to this thesis, all of which are overwhelmingly NPB's work. § INTRODUCTION The iterate average <cit.> is the arithmetic mean of the model parameters over the optimisation trajectory _avg = 1/n∑_i^n_i. It is a classical variance reducing technique in optimisation and offers optimal asymptotic convergence rates and greater robustness to the choice of learning rate <cit.>. Indeed, popular regret bounds that form the basis of gradient-based convergence proofs <cit.> often consider convergence for the iterate average <cit.>. Further, theoretical extensions have shown that the rate of convergence can be improved by a factor of log T (where T is the iteration number) by suffix averaging <cit.>, which considers a fraction of the last iterates, polynomial decay averaging <cit.> which decays the influence of the previous iterates, or weighted averaging <cit.> which weights the iterate by its iteration number. That the final iterate of SGD is sub-optimal in terms of its convergence rate, by this logarithmic factor, has been proved by <cit.>. For networks with batch normalisation <cit.>, a naïve application of IA (in which we simply average the batch normalisation statistics) is known to lead to poor results <cit.>. However, by computing the batch normalisation statistics for the iterate average using a forward pass of the data at the IA point, <cit.> show that the performance of small-scale image experiments such as CIFAR-10/100 and pretrained ImageNet can be significantly improved. Even for small experiments this computation is expensive, so they further approximate IA by taking the average at the end of each epoch instead of each iteration, referred to as stochastic weight averaging (SWA). In this chapter we examine the variance reducing effect of IA in the context of a quadratic approximation to the true loss combined with additive perturbation models for the batch training loss. The theory we present is high-dimensional (i.e. large number of parameters, P) and considers the small batch size (small B) regime, which we term the “deep learning limit”. Intuitively, any given example from the training set j ∈𝒟, will contain general features, which hold over the data generating distribution and instance specific features (which are relevant only to the training sample in question). For example, for a training image of a dog, we may have that: ∇ L_sample()_training set example^dog j =∇ L_true()_general features^4 legs, snout + ()._instance-specific features^black pixel in top corner, green grass Under a quadratic approximation to the true loss[The loss under the expectation of the data generating distribution, rather than the loss over the dataset L_emp(_k).] L_true()=^T, where = ∇^2 L is the Hessian of the true loss with respect to the weights and we sample a mini-batch gradient of size B at point ∈ℝ^P× 1. The observed gradient is perturbed by () from the true loss gradient (due to instance specific features). Under this model the component of the _t'th iterate along the j'th eigenvector _j of the true loss when running SGD with learning rate α can be written: _t^T_j = (1-αλ_j)^t_0^T_j - α(1-αλ_j)^t-1(_1)^T_j⋯ , in which λ_j are the eigenvalues of . The simplest tractable model for the gradient noise (_t) is to assume samples from i.i.d. an isotropic, multivariate Normal. In particular, this assumption removes any dependence on _t and precludes the existence of any distinguished directions in the gradient noise. 
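The contrast between the final iterate and the iterate average under this perturbation model is easy to see in a simulation of the update rule above. The sketch below uses a diagonal quadratic with i.i.d. isotropic Gaussian gradient perturbations; the spectrum, learning rate, noise scale and dimensions are arbitrary illustrative choices, and the simulation simply compares the distances of the final iterate and the iterate average from the true minimum at zero.

```python
import numpy as np

rng = np.random.default_rng(3)
P, n, alpha, sigma2, B = 1000, 5000, 0.05, 1.0, 32   # illustrative choices
lam = np.linspace(0.1, 1.0, P)        # eigenvalues of the true Hessian (all positive, alpha*lam << 1)
w = rng.standard_normal(P)            # w_0
w_sum = np.zeros(P)

for t in range(n):
    eps = np.sqrt(sigma2 / B) * rng.standard_normal(P)   # isotropic gradient perturbation
    grad = lam * w + eps                                  # quadratic-model gradient, matching the update rule
    w = w - alpha * grad
    w_sum += w

w_avg = w_sum / n
# the final iterate sits on an O(1) variance floor, while the iterate average
# concentrates near the optimum with variance shrinking in n
print("||w_n||   =", np.linalg.norm(w))
print("||w_avg|| =", np.linalg.norm(w_avg))
```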
Using this assumption, we obtain Theorem <ref> below, which relies on an intermediate result, found in <cit.>. Let R be an m × n matrix, and let X=(X_1, …, X_n)∈ℝ^n be a random vector with independent mean-zero unit-variance sub-Gaussian coordinates. Then ℙ( |RX_2 - R_F| > t) ≤ 2exp(-ct^2/K^4 R^2) where K=max_iX_i_ψ_2 and c>0 is a constant. Assume the quadratic loss model L_true()=^T, where has eigenvalues {λ_i}_i=1^P and assume the {ϵ_t}_t=0^n are all i.i.d. Gaussian vectors in ℝ^P with distribution 𝒩(0, σ^2 B^-1I) where B is the batch size. Assume the weights are updated according to the rule from (<ref>) _t^T_j = (1-αλ_j)^t_0^T_j - α(1-αλ_j)^t-1(_1)^T_j. Assume further that αλ_i ≪ 1 for all i and λ_i >0 for all i. Then there exists a constant c>0 such that for all ξ>0, as n→∞ ℙ(|∑_i^P(w_n,i - w_0,ie^-nαλ_i ( 1 + o(1)))^2-√(Pασ^2/B⟨1/λ(2-αλ) ⟩)|≥ξ) ≤ν(ξ), ℙ(|∑_i^P(w_avg,i - w_0,i/λ_inα ( 1 + o(1)))^2-√(Pσ^2/Bn⟨1/λ⟩)| ≥ξ) ≤ν(ξ), where ν(ξ) = 2exp(-cξ^2). Let Y = (Y_1, …, Y_P) be a random sub-Gaussian vector with independent components. Let X_i = Y_i - 𝔼Y_i/ Y_i,   R = diag( Y_1, …, Y_P). Lemma <ref> then applies, to give ℙ( |Y - 𝔼Y_2 - ∑_i=1^P Y_i| > ξ) ≤ 2exp(-cξ^2/K^4 R^2). We have K≤ C max_i Y_i for some constant C>0 (<cit.>, exercise 2.5.8), and R^2 = (max_i Y_i)^2 = max_i Y_i. Hence we obtain ℙ( |Y - 𝔼Y_2 - ∑_i=1^P Y_i| > ξ) ≤ 2exp(-cξ^2/(max_i Y_i)^2) for some new constant c>0. The proof is then completed if we compute the means and variances of _n and _avg. To that end, with =diag(λ_1, …, λ_P), the update rule (<ref>) gives _t = (1-α)^n _0 + α∑_i=0^t-1 (1-α)^t-i-1_i, for any 1 ≤ t≥ n. Since is diagonal, each component of w⃗_n can be treated independently when we sum to obtain w⃗_avg, so for any vector v⃗ ∑_t=1^n (1 - α)^tv⃗ = 1 - (1 - α)^t/α^-1 (1-α)v⃗ So averaging (<ref>) over t gives _avg = 1 - (1 - α)^n/α n^-1(1-α)_0 + ∑_t=0^n-11 - (1-α)^n-t/n^-1_t . Since the _i are all i.i.d. centred Gaussians, obtaining the distributions of _n and _avg amounts to computing the covariances (α∑_i=0^n-1 (1-α)^n-i-1_i ) = σ^2 B^-1 I ∑_i=1^n-1α^2(1-α)^2(n-i-1) = σ^2 B^-1 I α^2(1 - (1 - α)^2n)(1 - (1-α)^2)^-1 and similarly (∑_t=0^n-11 - (1-α)^n-t/n^-1_t ) = ∑_t=0^n-1(1 - (1-α)^n-t/n^-1)^2 = ^-2/ n^2(n - 2(1 - (1-α)^n)/α^-1 + (1 - (1-α)^2n)(1 - (1-α)^2)^-1). Now using αλ_i < 1 for all i=1,2…, P, and taking n→∞, (<ref>) and (<ref>) give (_n) ∼σ^2α^2B^-1(1 - (1-α)^2)^-1 = σ^2α B^-1(2 - α^2)^-1 and similarly (<ref>) and (<ref>) give (_avg) ∼1/ n^-2. Thus it follows from (<ref>) and (<ref>) that 𝔼w_n, i = (1-αλ_i)^n w_0, i∼ e^-nαλ_iw_0, i,   (w_n, i) ∼σ^2/Bα/2λ_i(1 - αλ_i) and from (<ref>) and (<ref>) it follows 𝔼w_avg, i∼w_0,i/λ_iα n,    (w_avg, i) = σ^2/B1/nλ_i^2 where in both cases we have used αλ_i ≪ 1 to simplify the expected values for large n. To complete the proof for _n, we apply (<ref>) using (<ref>) and noting that ∑_i=1^P Var(w_n,i)∼σ^2 Pα/2B⟨1/λ(1 - αλ)⟩ and 0 < max_i w_n,i < ∞ since λ_i >0 and αλ_i < 1. The results for _avg follows similarly by using (<ref>) with (<ref>). This produces two different constants c>0 in the statement of (<ref>), but we can simply take the smaller of the two constants to produce the desired statement. The final iterate attains exponential convergence in the mean of _n, but does not control the variance term. Whereas for _avg, although the convergence in the mean is worse (linear), the variance vanishes asymptotically – this motivates tail averaging, to get the best of both worlds. Another key implication of Theorem <ref> lies in its dependence on P. 
P is a gauge of the model size and appears as a simple linear multiplier of the variances of _n and _avg, so increasing over-parametrisation implies increasing variance of the final iterate and the IA, however IA provides a counterbalancing variance reduction effect that is entirely absent from the final iterate. This implies that in more complex, over-parameterised models, we expect the benefit of IA over the final iterate to be greater, as IA provides a mechanism to control the weight variance even as it grows with P. § A DEPENDENT MODEL FOR THE PERTURBATION We proceed now to propose a relaxation of the gradient perturbation independence assumption. (<ref>) can be written equivalently as L_batch() = L_true() + η() where η is a scalar field with ∇η =. Note that we have neglected an irrelevant arbitrary constant in Equation (<ref>) and also that we have L_batch rather than L_sample, but this amounts to scaling the per-sample noise variance σ^2 by the inverse batch size B^-1. We model η as a Gaussian process 𝒢𝒫(m, k), where k is some kernel function ℝ^P×ℝ^P→ℝ and m is some mean function[It is natural to take m=0 in a model for the sample perturbation, however retaining fully general m does not affect our arguments.] ℝ^P→ℝ. As an example, taking k(, ') ∝ (^T ')^p and restricting to a hypersphere results in taking the exact form of a spherical p-spin glass, studied previously for DNNs <cit.> and in Chapters <ref> and <ref> <cit.>. We are not proposing to model the loss surface (batch or true) as a spin glass (or more generally, a Gaussian process), rather we are modelling the perturbation between the loss surfaces in this way. We emphasise that this model is a strict generalisation of the i.i.d. assumption above, and presents a rich, but tractable, model of isotropic Gaussian gradient perturbations in which the noise for different iterates is neither independent nor identically distributed. Following from our Gaussian process definition, the covariance of gradient perturbations can be computed using a well-known result (see <cit.> equation 5.5.4): (ϵ_i(), ϵ_j(') ) = ∂_w_i∂_w'_j k(, '). Further assuming a stationary kernel k(, ') = k(-1/2|| - '||_2^2) (ϵ_i(), ϵ_j(') ) = (w_i - w'_i)(w'_j - w_j) k”(-1/2|| - '||_2^2) + δ_ijk'(-1/2|| - '||_2^2). Thus we have a non-trivial covariance between gradient perturbation at different points in weight-space. This covariance structure can be used to prove the upcoming variance reduction result, but first we require some intermediate lemmas. §.§ Intermediate results In this section we establish some intermediate lemmas that will be required later in the chapter. Define the function r(a; x) = γ(a; x)/Γ(a), where γ is the lower incomplete gamma function. Assume that x≪ a, where x may or may not diverge with a, then as a→∞, r(a; x)→ 0, and more precisely r(a; x) ∼1/2πexp(-x + alogx - a - aloga - 1/2loga). We have γ(a; x) = a^-1x^a _1F_1(a; 1+a; -x), where _1F_1 is the confluent hypergeometric function of the first kind <cit.>. Then r(a; x) = a^-1 x^a _1F_1(a; 1+a; -x)/Γ(a)=a^-1 x^a Γ(a+1)/Γ(a)^2∫_0^1 e^xt t^a-1 dt where we have used a result of <cit.>. The integral in (<ref>) can be evaluated asymptotically in the limit x→∞ with x ≪ a. Writing the integrand as e^xt + (a-1)logt it is plainly seen to have no saddle points in [0, 1] given the condition x≪ a. The leading order term therefore originates at the right edge t=1. 
A simple application of Laplace's method leads to r(a; x) ∼a^-1 x^a Γ(a+1) e^-x/Γ(a)^2(a- 1 -x) ∼ x^a e^-x/aΓ(a) ∼ x^a e^-x/a 2π a^-1 (ae^-1)^a = 1/2πexp(-x + alogx - a - aloga - 1/2loga) where the penultimate line makes uses of Stirling's approximation <cit.>. Since a≫ x, -x + alogx - a - aloga - 1/2loga∼ -aloga→-∞ which completes the proof. Take any _0,…, _n-1∈ℝ^P let ∼𝒩(μ, Σ), for any μ∈ℝ^P and Σ such that Σ≥ Aσ^2P for some constants A, σ>0. Consider P→∞ with P≫logn and let δ >0 be o(P^1/2) (note that δ and n need not diverge with P, but they can). Define B_i = {∈ℝ^P | ||-_i|| < δ}, then as P →∞ ℙ(∈⋃_i B_i) → 0 and moreover as P, n→∞ n^lℙ(∈⋃_i B_i) → 0, for any fixed l>0. With the Euclidean volume measure, we have Vol(⋃_i B_i) ≤ n V_P δ^P = V_P (δ n^1/P)^P where V_P is the volume of the unit sphere in P dimensions. Therefore a sphere of radius δ n^1/P is large enough to enclose all of the B_i and so the probability that lies in any of the B_i is bounded above by the probability that it lies inside the sphere of radius δ n^1/P centred on its mean μ. Note that with σ̂^2 = (Σ)^1/P, changing variables = σ̂^-1Σ^1/2 gives ∫_ℝ^P d e^-^T Σ^-1/2 = ∫_ℝ^Pd e^-^2/2 σ̂^2 since the Jacobian is 1. Thus we can reduce to a single dimensional Gaussian integral ℙ(∈⋃_i B_i) ≤1/(2πσ̂^2)^P/22π^P/2/Γ(P/2)∫_0^δ n^1/Pdr   e^-r^2/2σ̂^2 r^P-1 = 2/Γ(P/2)∫_0^δ n^1/P/√(2)σ̂ dr   e^-r^2 r^P-1 = 1/Γ(P/2)∫_0^δ n^2/P/2σ̂^2 dr   e^-r r^P/2 - 1 ≤1/Γ(P/2)∫_0^δ n^2/P/2A^1/Pσ^2 dr   e^-r r^P/2 - 1using σ̂^2 ≥ A^1/Pσ^2 ≤1/Γ(P/2)∫_0^δ n^2/P/2ασ^2 dr   e^-r r^P/2 - 1with α≡inf_P A^1/P >0 ≡1/Γ(P/2)γ(P/2; n^2/Pδ^2/2σ^2α) where γ is the lower incomplete gamma function. Since P≫logn and δ = o(P^1/2), it follows that x ≡n^2/Pδ^2/2σ^2α = o(P) and so Lemma <ref> can be applied to yield the result. Indeed, recalling that n≪ e^P, we have n^l ℙ(∈⋃_i B_i) ≤ e^lP r(P/2, x) ∼1/2πexp(lP - x + P/2logx - P/2 - P/2logP/2 - 1/2logP/2) for any l>0. But x = o(P) so for P large enough, the term inside the exponential is negative and diverging with P, as required. The previous two lemmas are required to prove the next lemma, which will form the foundation of our argument in the next section. Let _1,…, _n be a sequence of jointly multivariate Gaussian random variables in ℝ^P such that _i  | {_1,…, _i-1}∼𝒩(μ_i, Σ_i) where there exists a σ>0 and a constant A>0 such that Σ_i ≥ Aσ^P for all P and i. Let also _0 be any deterministic element of ℝ^P. For 1≤ m ≤ n, define the events A_m(δ) = {||_i - _j||_2 > δ| 0≤ i < j ≤ m}. Consider P→∞ with P≫logn and let δ >0 be o(P^1/2) (note that δ and n need not diverge with P, but they can). Then ℙ(A_n(δ))→ 1 as P→∞. Let us use the definitions of B_i from Lemma <ref>, i.e. let B_i = {∈ℝ^P | ||-_i|| < δ} for 0 < j < n. Since A_i(δ) ⊂ A_i-1(δ) for any i, the chain rule of probability gives ℙ(A_n(δ)) = ℙ(⋂_i≤ nA_i(δ)) = ℙ(A_1(δ))∏_i=2^n-1ℙ(A_i | A_i-1) but ℙ(A_i(δ) | A_i-1(δ))= 1 - ℙ(_i∈⋃_j< i B_j) and so ℙ(A_n(δ)) = ℙ(A_1(δ))∏_i=2^n-1(1 - ℙ(_i∈⋃_j< i B_j)) = ℙ(_1 ∈ B_0) ∏_i=2^n-1(1 - ℙ(_i∈⋃_j< i B_j)). For fixed n, the result is now immediate from (<ref>) in Lemma <ref>, since all the probabilities in (<ref>) converge to 1 as P→∞ and there are only a finite number of terms. Now consider the case that n also diverges. For any n define s_n = sup_2≤ i ≤ nℙ( _i ∈⋃_j < i B_j), and then ℙ(A_n(δ)) ≥ℙ(_1 ∈ B_0) ∏_i=2^n-1(1 - s_i-2). 
But, by Lemma <ref> we can write s_n = (n+1)^-2 f_n, P where f_n, P→ 0 as P→∞, say, hence ℙ(A_n(δ)) ≥ℙ(_1 ∈ B_0) ∏_i=2^n-1(1 - (i-1)^-2 f_i-2, P) ≥ℙ(_1 ∈ B_0) ∏_i=2^∞(1 - (i-1)^-2f_i-2, P) for large n, since |f_n-2,P|<1 and all the extra terms added are strictly between 0 and 1. But log∏_i=2^∞(1 - (i-1)^-2f_i-2, P) ≥ -∑_i=2^∞ (i-1)^-2f_i-2, P≥ -sup_j f_j-2, P∑_i=2^∞ (i-1)^-2 = -π^2/6sup_j f_j-2, P and so ℙ(A_n(δ)) ≥ e^-sup_j f_j-2, Pπ^2/6ℙ(_1 ∈ B_0) but f_j-2, P→ 0 for any j, so as P→∞, ℙ(A_n(δ)) is lower bounded by a term converging to ℙ(_1 ∈ B_0) which, in turn, converges to 1 by Lemma <ref>. Recall the Gaussian process covariance structure from above (<ref>): (ϵ_i(), ϵ_j(') ) = (w_i - w'_i)(w'_j - w_j) k”(-1/2|| - '||_2^2) + δ_ijk'(-1/2|| - '||_2^2)<ref> Assume the covariance structure (<ref>). Take any a_i∈ℝ and define = ∑_i=1^n a_i _i. Then () = k'(0)P∑_i=1^na_i^2 + 2P∑_1≤ i<j≤ na_ia_j[k'(-d_ij^2/2)+ P^-1k”(-d_ij^2/2)d_ij^2] where we define d_ij = ||_i - _j||_2. Each of the _i is Gaussian distributed with covariance matrix (_i) given by (<ref>) and the covariance between different gradients (_i, _j) is similarly given by (<ref>). By standard multivariate Gaussian properties () = ∑_i=1^na_i^2 (_i)+ ∑_i≠ ja_ia_j (_i, _j), then taking the trace () = ∑_i=1^na_i^2((_i))+ 2∑_1≤ i<j≤ na_ia_j ((_i, _j)). Using the covariance structure from (<ref>) gives () = k'(0)∑_i=1^na_i^2 I + 2∑_1≤ i<j≤ na_ia_j[ k'(-d_ij^2/2)I + k”(-d_ij^2/2)(_i - _j)(_j - _i)^T] from which the result follows. §.§ Main results for dependent noise models Let _n and _avg be defined as in Theorem <ref> and let the gradient perturbation be given by the covariance structure in (<ref>). Assume that the kernel function k is such that k(-x) and its derivatives decay at least as fast as |x|^M e^-x, for some M>0, as x→∞ and define σ^2B^-1 = k'(0). Assume further that P^1-θ≫log n for some θ∈(0,1). Let δ=o(P^1/2). Then _n and _avg are multivariate Gaussian random variables and, with probability which approaches unity as P, n→∞ the iterates _t are all mutually at least δ apart and 𝔼w_n,i ∼e^-αλ_i nw_0,i , 1/P(_n) ∼ασ^2/B⟨1/λ(2-αλ)⟩, 𝔼w_avg, i ∼1-αλ_i/αλ_i n w_0,i, 1/P(_avg) ≤σ^2/Bn⟨1/ λ⟩+ 𝒪(1)(k'(-δ^2/2) + P^-1δ^2k”(-δ^2/2)). We will prove the result in the case λ_i = λ ∀ i for the sake of clarity. The same reasoning can be repeated in the more general case; where one gets P^-1 f(λ) I below, one need only replace it with ⟨ f(λ)⟩, exploiting linearity of the trace. We will also vacuously replace σ^2B^-1 with σ^2 to save on notation. For weight iterates _i, we have the recurrence _i = (1-αλ)_i-1 + α(_i-1) which leads to _n = (1-αλ)^n _0 + α∑_i=0^n-1 (1-αλ_i)^n-i-1(_i) and then _avg = 1 - (1 - αλ)^n/αλ n (1-αλ)_0 + ∑_i=0^n-1(_i) 1 - (1-αλ)^n-i/λ n. As above, define _i = (_i), for convenience. Now define a_i = α(1-αλ)^n-1-i,    a̅_i = 1 - (1-αλ)^n-i/λ n. Next we will apply Lemma <ref> and utilise Lemma <ref> to bound the variance of _avg and _n. We first gather the following facts, which were also computed and used in the proof of Theorem 1: ∑_i=1^n-1 a_i^2 = α^2(1 - (1 - αλ)^2n)/1 - (1-αλ)^2 ∑_i<ja_ia_j = α/λ(1 - (1-αλ)^n/αλ - 1 - (1-αλ)^2n/1 - (1-αλ)^2). The sum of squares for the a̅_i is simple to obtain similarly ∑_i=0^n-1a̅_i^2 = 1/λ^2 n^2(n - 2(1 - (1-αλ)^n)/αλ + 1 - (1-αλ)^2n/1 - (1-αλ)^2). 
We now use the assumption that 0 < αλ < 1 (required for the convergence of gradient descent) which gives, as n→∞, ∑_i=1^n-1 a_i^2 ∼α^2/1 - (1 -αλ)^2 ∑_i<ja_ia_j ∼α/λ(1/αλ - 1/1 - (1-αλ)^2) ∑_i=1^n-1a̅_i^2 ∼1/λ^2 n Summing ∑_i< ja̅_ia̅_j explicitly is possible but unhelpfully complicated. Instead, some elementary bounds give ∑_i< ja̅_ia̅_j ≤(∑_i=0^n-1a̅_i)^2 = 1/λ^2n^2(n - 1 - (1-αλ)^n/αλ)^2 ∼1/λ^2 and ∑_i< ja̅_ia̅_j ≥∑_i<j(1 - (1-αλ)^n-1/λ n)^2 ∼1/2λ^2 so in particular ∑_i < ja̅_ia̅_j = 𝒪(1). Now define the events A_n(δ) as in Lemma <ref> using _i in place of _i. Further, choose δ large enough so that k'(-x^2/2) and x^2k”(-x^2/2) are decreasing for x>δ. Define k'(0) = σ^2. Lemma <ref> gives 1/P(_n) | A_n(δ) ≤σ^2∑_i=1^n a_i^2 + 2∑_i<j a_ia_j(k'(-δ^2/2) + P^-1δ^2k”(-δ^2/2)) where we note that we have only upper-bounded the second term in (<ref>), so using (<ref>) and (<ref>) and taking δ large enough we obtain 1/P(_n) | A_n(δ) = σ^2α^2/1 - (1-αλ)^2 + o(1). Turning now to _avg we similarly obtain 1/P(_avg) | A_n(δ) ≤σ^2/n1/λ^2+ 𝒪(1)(k'(-δ^2/2) + P^-1δ^2k”(-δ^2/2)) and, as before, taking δ large enough we can obtain 1/P(_avg) | A_n(δ) = o(1). Finally recalling (<ref>) and (<ref>) and writing (1-αλ)^n = e^-αλ n+ o(1) for large n, we obtain the results in the statement of the theorem but conditional on the event A_n(δ). To complete the proof, we need only to establish that ℙ(A_n(δ)) → 1 P,n→∞, which we will do with an application of Lemma <ref>. Since the loss noise term is a Gaussian process, the (w⃗_i) are all jointly Gaussian with the covariance structure (<ref>), but to apply Lemma <ref> we must further establish a lower bound on the covariance of the conditional _i. Let Σ_n be the P× P covariance matrix of _n  | {_1, …, _n-1}, then we are required to show that there exists some n-independent A, σ > 0 such that Σ_n > A σ^2P for all n (subject to log n ≪ P). Define S_n to be the nP × nP covariance matrix of all of the {_i}_i=1^n, i.e. (S_n)_iP + j, kP + l = (ϵ_j(_j), ϵ_l(_k)),    0≤ i,k < n,   1≤ j,l ≤ P, and for convenience define k'(0) = s^2. The rules of standard Gaussian conditioning give Σ_n = s^2 I - X_n S_n-1^-1 X_n^T, where X_n is the P × (n-1)P matrix such that S_n has the following block structure S_n = ([ ; S_n-1 X_n^T; ; X_n s^2 I_P ]), so, concretely, from (<ref>) (X_n)_i, Pj + l = ( (_n)_i - (_j)_i)( (_j)_l - (_n)_l)k”(-1/2d_jn^2) + δ_il k'(-1/2d_jn^2), for 1≤ i,l ≤ P,   0 ≤ j < n - 1. We can now Taylor expand the determinant Σ_n = s^2P(1 - s^-2 X_n S_n-1^-1 X_n^T ) = s^2P(1 - s^-2 X_nS_n-1^-1X_n^T) + … which is valid provided that the trace term is small compared with 1. We have | X_n S_n-1^-1 X_n^T| ≤ X_nX_n^T S_n-1^-1_op = X_n_F S_n-1^-1_op where ·_F, ·_op are the Frobenius and operator matrix norms respectively. Hence, it suffices to prove n,P-independent bounds S_n-1^-1_op < q for some q>0 and X_n_F < c for some 0< c < s^2 / 10, say, valid for all n large enough, to thence obtain Σ_n ≥ c' s^2P for some constant c' > 0. Strictly speaking, one must use a bounded form of the remainder in Taylor's theorem to make precise all of these constants, but in reality we will see that we can make c as small as necessary, so that certainly c'>0 exists and the bound Σ_n ≥ c' s^2P holds. 
Proceeding directly X_n_F = X_n X_n^T = ∑_i,l=1^P ∑_j=0^n-2 (X_n)_i, Pj +l^2 = ∑_j=0^n-2{ P k'(- d_jn^2/2) -2 d_jn^2 k'(- d_jn^2/2)k”(- d_jn^2/2) + [d_jn^2 k”(- d_jn^2/2)]^2} ≤ (n-1)(Pk'(- δ^2/2) - 2δ^2 k'(- δ^2/2)k”(- δ^2/2) + [δ^2 k”(- δ^2/2)]^2), but recall that we require δ = o(P^1/2), so take for example δ = a P^1/2 - φ/2 for some 0< φ < 1, so X_n_F ≤ (n-1)(Pk'(- P^1-φ/2) - 2P^1-φ k'(- P^1-φ/2)k”(- P^1-φ/2) + [P^1-φ k”(- P^1-φ/2)]^2). Now recall that x k'(-x) and xk”(-x) are decaying for large enough x, and log n ≪ P^1-θ, hence X_n_F ≤ (n-1)(2log^1/1-θ n k'(- log^1-φ/1-θn/2)+ [log^1-φ/1-θn   k”(- log^1-φ/1-θn/2)]^2). Since θ > 0, we can take some 0 < φ < θ so that there exists χ∈(0,1) such that log^1 - φ/1 - θ n > log^1 + χ n for large enough n, and so X_n_F ≤ (n-1)(2log^1/1-θ n k'(- log^1 + χn/2)+ [log^1 + χn   k”(- log^1 + χn/2)]^2). We assume that k'(x), k”(x) decay at least as fast as x^M e^-x for some M>0 as x→∞, i.e. k'(x)x^-M e^x→ 0 (and similarly k”(x)). Writing n-1 ≤ n = e^log n, we have X_n_F ≤ 2log^1/1-θ n k'(log n - log^1 + χn/2)+ [log^1 + χn   k”(log n- log^1 + χn/2)]^2, but for large n, log^1+χ n ≫log n and so this last expression clearly converges to 0 as n→∞. Indeed, e^-log^1+χn /2 decays faster than any fixed power of n, so the same is true of X_n_F. Hence we can find the constant c>0 such that, for large enough n>n_0, say, X_n_F< c, as required. Now we turn to bounding S_n-1^-1_op, which is done by induction on n. Define the upper bounds S_n^-1_op≤ q_n for all n. Recalling the block structure (<ref>), we get the inverse S_n^-1 = ( [ (S_n-1 - s^-2X_n^TX_n)^-1 0; 0 Σ_n^-1 ])( [ I -s^-2 X_n^T; -s^-2 X_n I ])≡ YZ. S_n^-1_op is bounded above by X_op, Y_op and so we now bound these norms in turn. Since the off diagonals are zero, we have Y_op≤max{Σ_n^-1_op, (S_n-1 - s^-2X_n^TX_n)^-1_op}. Recalling the expression for Σ_n above and expanding the matrix inverse Σ_n^-1_op = s^-2 (I - s^-2X_n S_n-1^-1X_n^T)^-1_op = s^-2 (I +s^-2 X_nS_n-1^-1X_n^T + s^-4(X_nS_n-1^-1X_n^T)^2 + …_op ≤ s^-2(1 + s^-2X_nS_n-1^-1X_n^T _op+ s^-4_op(X_nS_n-1^-1X_n^T)^2 _op + …) ≤ s^-2(1 + s^-2X_n_FS_n-1^-1_op + s^-4X_n_F^2S_n-1^-1_op^2 + …) ≤ s^-2(1 + s^-2X_n_Fq_n-1 + s^-4X_n_F^2q_n-1^2 + …) ≤ s^-2(1 + α s^-2 q_n-1X_n_F) for some constant α>0, since we have already demonstrated that X_n_F→ 0 as n→∞. For the other term (S_n-1 - s^-2X_n^TX_n)^-1_op ≤S_n-1^-1_op(I - s^-2S_n-1^-1X_n^TX_n)^-1_op from which point, one proceeds just as for Σ_n^-1_op to obtain (S_n-1 - s^-2X_n^TX_n)^-1_op ≤ q_n-1(1+α s^-2 q X_n_F), hence overall Y_op≤max{s^-2 (1+α s^-2 q_n-1X_n_F), q_n-1 (1+α s^-2 q_n-1X_n_F)}. We can always relax the bound on S_n-1^-1_op so that q_n-1>s^-2, so we simply have Y_op≤ q_n-1 (1 + α s^-2 q_n-1X_n_F). To bound Z_op, we split it into a sum of two matrices Z_op = ( [ I 0; 0 I ]) + ( [ 0 -s^-2 X_n^T; -s^-2 X_n 0 ])_op≤ 1 + 2s^-2X_op≤ 1 + 2s^-2X_n_F, but X_n_F → 0 as n→∞, so overall we can say S_n^-1_op≤ q_n-1(1 + r_n),    r_n ≡ s^-2X_n_F (α q_n-1 + 2 + 2α q_n-1X_n_F), which we can simplify to S_n^-1_op≤ q_n-1(1 + r_n'),    r_n' ≡ s^-2X_n_F (α' q_n-1 + 2) and so can say q_n = q_n-1 + 2s^-2X_n_Fq_n-1 + s^-2α'X_n_F q_n-1^2. For large enough n, we seek a stability solution to this recurrence, i.e. using the ansatz q_n = q + h_n for h_n small q + h_n = q + h_n-1 + 2s^-2X_n_F q + 2s^-2X_n_F h_n-1 + s^-2α' X_n_F (q^2 + 2qh_n-1 + h_n-1^2). Gathering the leading order terms gives h_n = h_n-1 + 2s^-2qX_n_F + s^-2α'X_n_F q^2 h_n = h_n_0 + s^-2q (2 + qα')∑_j=n_0+1^nX_j_F. 
Recall that X_n_F decays faster than any fixed power of n, so the sum ∑_j≥ 2X_j_F converges, hence for ε > 0 we can take some fixed n_0 large enough so that ∑_j=n_0 + 1^n X_j_F < ε for all n>n_0. We are free to choose h_n_0 = 0 and then for large enough n_0, we can guarantee |h_n| < 1, say, thus q_n ≤max{max_1≤ m ≤ n_0 q_m, q_n_0 + 1}≡ q^*. Hence we have succeeded in bounding S_n^-1_op≤ q^* for all n. Combined with the earlier bound on X_n_F, we have now established the bound Σ_n ≥ c' s^2P, so we have satisfied the conditions of Lemma <ref> and completed the proof. Note that Theorem <ref> is a generalisation of Theorem <ref> to the context of our dependent perturbation model. Let us make some clarifying remarks about the theorem and its proof: * The bound (<ref>) in the statement of the theorem relies on all iterates being separated by a distance at least δ. Moreover, the bound is only useful if δ is large enough to ensure the k' and k” terms are small. * Just as in the independent case of Theorem <ref>, the first term in the bound in (<ref>) decays only in the case that the number of iterates n→∞. * The remaining conditions on P, n , δ are required for the high-dimensional probability argument which we use to ensure that all iterates are separated by at least δ. * P≫logn is a perfectly reasonable condition in the context of deep learning. E.g. for a ResNet-50 with P≈ 25× 10^6, violation of this condition would require n > 10^10^7. A typical ResNet schedule on ImageNet has < 10^6 total steps. Consequently, our result points to the importance of good separation between weight iterates in IA to retain the independence benefit and variance reduction in a non-independent noise setting, hence one would expect large learning rates to play a crucial role in successful IA. At the same time, our result is particularly adapted to the deep learning limit of very many model parameters (P→∞), since this is the only regime in which we can argue probabilistically for good separation of weight iterates (otherwise one may simply have to assume such separation). Furthermore, the importance of P ≫log n indicates that perhaps averaging less frequently than every iteration could be beneficial to generalisation. The following corollary makes this intuition precise. Let _avg now be a strided iterate average with stride κ, i.e. _avg = κ/n∑_i=1^⌊ n/κ⌋_i. Then, under the same conditions as Theorem <ref> 𝔼w_avg, i = κ(1-αλ_i)^κ/n(1 - (1-αλ_i)^κ ) (1 + o(1))w_0,i, 1/P(_avg) ≤σ^2α^2κ/Bn⟨1/(1 - (1-αλ)^κ)^21 - (1-αλ)^2κ/1 - (1-αλ)^2⟩+ 𝒪(1)(k'(-δ^2/2) + P^-1δ^2k”(-δ^2/2)) where the constant 𝒪(1) coefficient of the second term in (<ref>) is independent of κ. The proof is just as in Theorem 2 (or Theorems 3 or 4), differing only in the values of the a̅_i. Indeed, a little thought reveals that the generalisation of a̅_i to the case κ > 1 is a̅_i = ακ/n(1 - αλ)^κ(1 + ⌊i/κ⌋) - 1 - i1 - (1-αλ)^κ(⌊n/κ⌋ - ⌊i/κ⌋)/1 - (1-αλ)^κ. Note that κ⌊i/κ⌋ - i is just the (negative) remainder after division of i by κ. Then for large n ∑_ia̅_i^2 ∼α^2κ^2/n^2(1-αλ)^2(κ - 1)/(1 - (1-αλ)^κ)^2⌊n/κ⌋∑_i=0^κ-1(1-αλ)^-2i ≤α^2κ/n(1-αλ)^2(κ - 1)/(1 - (1-αλ)^κ)^2∑_i=0^κ-1(1-αλ)^-2i = α^2κ/n(1-αλ)^2(κ - 1)/(1 - (1-αλ)^κ)^21 - (1-αλ)^-2κ/1 - (1-αλ)^-2 = α^2κ/n1/(1 - (1-αλ)^κ)^21 - (1-αλ)^2κ/1 - (1-αλ)^2. 
and similarly ∑_i< ja̅_i a̅_j ∼α^2κ^2/n^2(1-αλ)^2(κ-1)/(1 - (1-αλ)^κ)^2∑_i<j (1-αλ)^κ⌊ i/κ⌋ - i + κ⌊ j/κ⌋ - j ∼α^2κ^2/n^2(1-αλ)^2(κ-1)/(1 - (1-αλ)^κ)^2∑_j (1-αλ)^κ⌊ j/κ⌋ - j⌊j/κ⌋1 - (1-αλ)^-κ/1 - (1-αλ)^-1 ∼α^2κ^2/n^2(1-αλ)^2(κ-1)/(1 - (1-αλ)^κ)^2(1 - (1-αλ)^-κ/1 - (1-αλ)^-1)^2∑_j=0^⌊ n/κ⌋j ∼α^2/2(1-αλ)^2(κ-1)/(1 - (1-αλ)^κ)^2(1 - (1-αλ)^-κ/1 - (1-αλ)^-1)^2 = α^2/2(1-αλ)^-2/(1 - (1-αλ)^-1)^2. Intuitively, the first term in the covariance in (<ref>) is an “independence term”, i.e. it is common between Theorems <ref> and <ref> and represents the simple variance reducing effect of averaging. The second variance term in (<ref>) comes from dependence between the iterate gradient perturbations. We see from the corollary that an independent model for gradient perturbation would predict an unambiguous inflationary effect of strided IA on variance (the first term in (<ref>)). However introducing dependence in the manner that we have predicts a more nuanced picture, where increased distance between weight iterates can counteract the simple “independent term” inflationary effect of striding, leaving open the possibility for striding to improve on standard IA for the purposes of generalisation. § EXTENSION OF THEORETICAL FRAMEWORK TO WEIGHT DECAY AND ADAPTIVE METHODS To make a closer connection with the new optimisation algorithms proposed in this work we consider decoupled weight decay (strength γ) and gradient preconditioning: _t = (1 - αγ)_t-1 - α_t^-1∇ L_batch(_t-1) where _t^-1 is some approximation to the true loss Hessian used at iteration t. In the presence of weight decay, we move the true loss minimum away from the origin for the analysis, i.e. L_true() = (-^*)^T(-^*). The update rule is then _t = (1-αγ - α_t^-1) _t-1 + α^* - α(_t-1). We take _t^-1 to be diagonal in the eigenbasis of , with eigenvalues λ̃_i^(t)+ε, where ε is the standard tolerance parameter <cit.>. One could try to construct the _t^-1 from the Gaussian process loss model, so making them stochastic and covarying with the gradient noise, however we do not believe this is tractable. Instead, let us heuristically assume that, with high probability, λ̃_i^(t) is close to λ_i, say within a distance ζ, for large enough t and all i. If we take a large enough ζ this is true even for SGD and we expect Adam to better approximate the local curvature matrix than SGD, since this is precisely what it is designed to do. This results in the following theorem. Fix some ζ > 0 and assume that |λ̃_i^(t) - λ_i| < ζ for all t ≥ n_0, for some fixed n_0(ζ), with high probability. Use the update rule (<ref>). Assume that the λ_i are bounded away from zero and min_iλ_i > ζ. Further assume c(γ + ε + ζ) < 1, where c is a constant independent of ε, ζ, γ and is defined in the proof. Let everything else be as in Theorem <ref>. Then there exist constants c_1, c_2, c_3, c_4>0 such that, with high probability, |𝔼w_n,i-w^*_i| ≤ e^-α(1 +γ -c(ε+ζ))nw_0,i + c_1(ε +ζ + γ) |1/P(_n)- ασ^2/B(2-α)| ≤ c_2(ε + ζ + γ) + o(1), |𝔼w_avg, i - w^*_i| ≤1-α(1 +γ - c(ε + ζ ))/α(1 + γ - c(ε+ζ)) n (1 + o(1))w_0,i + c_3(ε + ζ + γ) |1/P(_avg) - σ^2/Bn - 𝒪(1)(k'(-δ^2/2) + P^-1δ^2k”(-δ^2/2))| ≤ c_4 (γ, + ζ + ϵ). We begin with the equivalent of (<ref>) for update rule (<ref>): _n = ∏_i=0^n-1(1-αγ -α_i^-1Λ)_0 + ∑_i=^n-1α_i^-1Λ∏_j=i+1^n-1(1 - αγ -α_j^-1Λ)^* - ∑_i=^n-1α_i^-1Λ[∏_j=i+1^n-1(1 - αγ -α_j^-1Λ)](_i). 
To make progress, we need the following bounds valid for all t≥ n_0 λ_i/λ̃_i^(t)+ ε = λ_i/λ_i + λ̃_i^(t) -λ_i + ε<λ_i/λ_i + ε - ζ < 1 + |ε-ζ|λ_i^-1 and λ_i/λ̃_i^(t)+ ε = λ_i/λ_i + λ̃_i^(t) -λ_i + ε>λ_i/λ_i + ε + ζ > 1 - (ε + ζ)λ_i^-1 where the final inequality in each case can be derived from Taylor's theorem with Lagrange's form of the remainder <cit.>. Since the λ_i are bounded away from zero, we have established |λ_i/λ̃_i^(t)+ ε - 1| < c(ε + ζ) where the constant c= 1+(min_j{λ_j})^-1, say. From this bound we can in turn obtain 1 - α(γ + 1 + c(ε + ζ))< 1 - α(γ + (̃λ̃_i^(t) + ε)^-1λ_i) < 1 - α(γ + 1 - c(ε + ζ)) 1 - α( 1 + c(ε + ζ + γ))< 1 - α(γ + (̃λ̃_i^(t) + ε)^-1λ_i) < 1 - α(1 - c(ε + ζ + γ)) where the second line exploits the assumption c(γ + ε + ζ) < 1 and our choice c > 1. Thus ∑_t=0^n-1αλ_k/λ̃_k^(t)∏_j=t+1^n-1(1-αγ -α(λ̃_k^(j) + ε)λ_k) < ∑_t=0^n-1α (1 + c(ε + ζ))( 1 - α(γ + 1 - c(ε + ζ)))^n-1-t < 1 + c_1(ζ + ε + γ) where the second inequality follows, for large n, by summing the geometric series and again using Lagrange's form of the remainder in Taylor's theorem. c_1 is some constant, derived from c that we need not determine explicitly. A complementary lower bound is obtained similarly (for large n). We have thus shown that |𝔼w_n,i - w^*_i| < c_1(ε + ζ + γ) + ∏_t=0^n-1(1-αγ -α (λ̃_i^(t) + ε)^-1λ_i)w_0,i. Reusing the bound (<ref>) then yields (<ref>). The remaining results, (<ref>)-(<ref>) follow similarly using the same bounds and ideas as above, but applied to the corresponding steps from the proof of Theorem 2. Theorem <ref> demonstrates the same IA variance reduction as seen previously, but in the more general context of weight decay and adaptive optimisation. As expected, improved estimation of the true Hessian eigenvalues (i.e. smaller ζ) reduces the error in recovery of ^*. Moreover, increasing the weight decay strength γ decreases the leading order error bounds in (<ref>) and (<ref>), but only up to a point, as the other error terms are valid and small only if γ is not too large. § CONCLUSION We have proposed a Gaussian Process perturbation between the batch and true risk surfaces and derive the phenomenon of improved generalisation for large learning rates and larger weight decay when combined with iterate averaging observed in practice. We have extended this formalism to include adaptive methods and showed that we expect further improvement when using adaptive algorithms. CHAPTER: A RANDOM MATRIX APPROACH TO DAMPING IN DEEP LEARNING The content of this chapter was published first as a pre-print in March 2022 (<https://arxiv.org/abs/2011.08181v5>) and later as a journal article: “A random matrix theory approach to damping in deep learning”. Diego Granziol, Nicholas P Baskerville. Journal of Physics: Complexity, 3.2 (2022): 024001. DG conceived of the main idea behind this work and published it as a pre-print, along with other collaborators, before NPB joined the project. NPB introduced the random matrix model and derived the adaptive damping algorithm. NPB also overhauled the existing mathematical content, only some of which is included in this chapter. All the experiments in this chapter were actually executed by DG but NPB contributed equally to their design, analysis and write-up. § THE SPIKED MODEL FOR THE HESSIAN OF THE LOSS We conjecture that a key driver of the adaptive generalisation gap is the fact that adaptive methods fail to account for the greater levels of noise associated with their estimates of flat directions in the loss landscape. 
The fundamental principle underpinning this conjecture, that sharp directions contain information from the underlying process and that flat directions are largely dominated by noise, is theoretically motivated from the spiked covariance model <cit.>. This model has been successfully applied in Principal Component Analysis (PCA), covariance matrix estimation and finance <cit.>. We revisit this idea in the context of deep neural network optimisation. In particular, we consider a spiked additive signal-plus-noise random matrix model for the batch Hessian of deep neural network loss surfaces. In this model, results from random matrix theory suggest several practical implications for adaptive optimisation. We use linear shrinkage theory <cit.> to illuminate the role of damping in adaptive optimisers and use our insights to construct an adaptive damping scheme that greatly accelerates optimisation. We further demonstrate that typical hyper-parameter settings for adaptive methods produce a systematic bias in favour flat directions in the loss landscape and that the adaptive generalisation gap can be closed by redressing the balance in favour of sharp directions. To track the bias towards flat vs sharp directions we define := α_flat/α_sharp, where α_flat and α_sharp are the learning rates along the flat and sharp directions, respectively and this ratio encapsulates the noise-to-signal ratio as motivated by our conjecture (the terms flat and sharp are defined more precisely below). §.§ Sharp directions from the true loss surface survive, others wash out We can rewrite the (random) batch hessian _batch as the combination of the (deterministic) true hessian _true plus some fluctuations matrix: _batch() = _true() + (). In <cit.> the authors consider the difference between the batch and empirical Hessian, although this is not of interest for generalisation, the framework can be extended to consider the true Hessian. The authors further show, under the assumptions of Lipschitz loss continuity, almost everywhere double differentiable loss and that the data are drawn i.i.d from the data generating distribution that the elements of () converge to normal random variables[Note that although a given batch Hessian is a fixed deterministic property, we are interested in generic properties of batches drawn at random from the data generating distribution for which we make statements and can hence model the fluctuations matrix as a random matrix.]. Under the assumptions of limited dependence between and limited variation in the variance of the elements of the fluctuations matrix, the spectrum of the fluctuations matrix converges to the Wigner semi-circle law <cit.>, i.e. weakly almost surely 1/P∑_i=1^P δ_λ_i()→μ_SC, where the λ_i() are the eigenvalues of and dμ_SC(x) ∝√(2P^2 - x^2)dx. The key intuition in this chapter is that sharp directions of the true loss surfaces, that is directions in which the true Hessian has its largest eigenvalues, are more reliably estimated by the batch loss than are the flat directions (those with small Hessian eigenvalues). This intuition is natural in random matrix theory and is supported by results such as the following. Let {_i}_i=1^P, {}_i=1^P be the orthonormal eigenbasis of the true Hessian ∇^2 L_true and batch Hessian ∇^2 L_batch respectively. Let also ν≥…≥ν_P be the eigenvalues of ∇^2 L_true. Assume that ν_i = 0 for all i > r, for some fixed r. Assume that is a generalised Wigner matrix. 
Then as P→∞ the following limit holds almost surely |_i^T_i|^2→ 1-Pσ^2/Bνi^2 |ν_i| > √(P/B)σ, 0 , where σ is the sampling noise per Hessian element. This is a direct application of a result of <cit.> which is given more explicitly in the case of GOE Wigner matrices by <cit.>. In particular, we use a scaling of such that the right edge of the support of its spectral semi-circle is roughly at P^1/2B^-1/2σ. The expression in Section 3.1 of <cit.> can then be applied to P^-1/2_batch and re-scaled in √(P) to give the result. Note that the substantiation of the expression from <cit.> in the case of quite general Wigner matrices is given by Theorem 16 of <cit.>. Results like Theorem <ref> are available for matrix models other than Wigner, such as rotationally invariant models <cit.>, and are conjectured to hold for quite general[Roughly speaking, models for which a local law can be established <cit.>.] models <cit.>. Convergence of the spectral measure of P^-1/2 to the semi-circle is necessary to obtain (<ref>), but not sufficient. The technicalities to rigorously prove Theorem <ref> without assuming a Wigner matrix for are out of scope for the present work, requiring as they would something like an optimal local semi-circle law for <cit.>. We require only the general heuristic principle from random matrix theory encoded in (<ref>), namely that only sharp directions retain information from the true loss surface. It is expected that this principle will hold for a much wider class of random matrices than those for which it has been rigorously proven. This is acutely important for adaptive methods which rely on curvature estimation, either explicitly for stochastic second order methods or implicitly for adaptive gradient methods. The spectrum of the noise matrix occupies a continuous region that is sharp in the asymptotic limit <cit.> known as bulk supported between [λ_-,λ_+] <cit.> and observed in DNNs <cit.>. Within this bulk eigenvectors are uniformly distributed on the unit sphere <cit.> and all information about the original eigenvalue/eigenvector pairs is lost <cit.>. Hence from a theoretical perspective it makes no sense to estimate these directions and move along them accordingly. An eigenvalue, λ_i, corresponds to a flat direction if λ_i≤λ_+. For finite-size samples and network size, there exists a region beyond the predicted asymptotic support of the noise matrix, called the Tracy–Widom region <cit.>, where there may be isolated eigenvalues which are part of the noise matrix spectrum (also shown in Figure <ref>). The width of the Tracy–Widom region is very much less than that of the bulk. Anything beyond the Tracy–Widom region λ_i≫λ_+, λ_i≪λ_- is considered an outlier and corresponds to a sharp direction. Such directions represent underlying structure from the data. The eigenvectors corresponding to these eigenvalues can be shown to lie in a cone around their true values <cit.> (see Theorem <ref>). In Figure <ref>, we show the Hessian of a VGG-16 network at the 300th epoch on CIFAR-100. Here, similar to our hypothetical example, we see a continuous region, followed by a number of eigenvalues which are close to (but not within) the bulk, and finally, several clear outliers. § DETAILED EXPERIMENTAL INVESTIGATION OF HESSIAN DIRECTIONS In this section we seek to validate our conjecture that movements in the sharp direction of the loss landscape are inherently vital to generalisation by studying a convex non-stochastic example. 
For such a landscape there is only a single global minimum and hence discussions of bad minima are not pertinent. We implement a second-order optimiser based on the Lanczos iterative algorithm <cit.> (LanczosOPT) against a gradient descent (GD) baseline. Note on Lanczos The Lanczos algorithm is an iterative algorithm for learning approximations to the eigenvalues/eigenvectors of any Hermitian matrix, requiring only matrix–vector products. The values and vectors learned by Lanczos are known as Ritz values/vectors, which are related to the eigenvalue/eigenvector pairs of the matrix. For example, when using a random vector in the matrix vector product, the Ritz values with a weight given by the first element squared of the corresponding Ritz vector, can be shown to give a moment matched approximation to the spectral density of the underlying matrix. In the same way that the power iteration algorithm converges to the largest eigenvalue (with a rate of convergence depending on the size of the spectral gap λ_1-λ_2/λ_1) the Lanczos Ritz values converge to well separated outliers[Intuitively once the largest outlier has been learned, since Lanczos maintains an orthogonal search space, it converges to the next largest outlier]. Similar to the power iteration algorithm, this convergence is irrespective of the original seed vector as long as it is not orthogonal to the associated eigenvectors. We employ a training set of 1K MNIST <cit.> examples using logistic regression and validate on a held out test set of 10K examples. Each optimiser is run for 500 epochs. Since the number of well-separated outliers from the spectral bulk is at most the number of classes <cit.> (which is n_c=10 for this dataset), we expect the Lanczos algorithm to pick out these well-separated outliers when the number of iterations k ≫ n_c <cit.> and therefore use k=50. To investigate the impact of scaling steps in the Krylov subspace given by the sharpest directions, we consider the update _k+1 of the form: _k -α(1/η∑_i=1^k1/λ_i+δ_i_i^T∇ L(_k)+∑_i=k+1^P1/δ_i_i^T∇ L(_k)) where P=7850 (the number of model parameters) and hence the vast majority of flat directions remain unperturbed. Note that in the case that k=P=7850 we would have a fully second order method, whereas in the case where k=0, by resolution of the identity, we would have gradient descent with learning rate α/δ. Hence equation <ref> can be seen as scaling the k Ritz eigenvectors by their respective Ritz values, whilst leaving the remaining directions (which by the previous argument are typically the "flatter" directions) unchanged from their gradient descent counterpart. Whilst Equation <ref> would naively require 𝒪(P^3) operations, i.e a full eigendecomposition, it can in fact equivalently be implemented in the following manner _k -α(1/η∑_i=1^k[1/λ_i+δ-1/δ]_i_i^T∇ L(_k)+1/δ∇ L(_k)), which requires only k Hessian vector products and hence is of computational complexity 𝒪(kP). To explore the effect of the sharp directions explicitly as opposed to implicitly, we have introduced perturbations to the optimiser (denoted LOPT[η]), in which we reduce the first term in the parenthesis of Equation <ref> by a factor of η (we explore scaling factors of 3 and 10). This reduces movement in sharp directions, consequently increases reliance on flat directions during the optimisation trajectory (we increase ). This differs from simply increasing δ, which while reducing the movement in all directions, actually relatively increases movement in the sharper directions (decreases ). 
To see this consider the case where λ_i≫δ, in such an instance, increasing δ does not appreciably change movement in the sharp directions, whereas it massively decreases movement in flat directions. For a fixed α, δ controls the . Experimental Results We show the training and validation curves for various values of damping δ and specific sharpness reduction factor η in Figures <ref> and <ref>. For ease of exposition we only show curves of adjacent values of damping and in order to focus on the speed of convergence we only show the first 100 epochs of training. We have the full 500 epochs of training, along with all curves colour coded on the same graph in <ref>. We use colours to distinguish δ values and dashing/opacity to indicate η values (dashed is larger than solid, and dashed with lower opacity is larger still). Note that as given by our central hypothesis, increasing δ increases generalisation (we decrease ), whereas increasing η decreases generalisation (we increase ). We see in Figure <ref> that despite an initial instability in training for η=3,10, the red lines with lowest value of damping δ=0.001, all converge quickly to 0 training error (See <ref>). However the generalisation as measured by the validation error decreases as we increase η. This can be seen as the lighter dashed lines (denoting a decrease in movement in the sharpest directions only) increase in validation error. For the blue lines with δ=0.01, whilst increasing δ decreases the rate of convergence, η=3 attains a final training error of 0, yet differs markedly in validation error for its η=1 counterpart. Similarly so the change in validation error for η=10 from η=3 is much larger than the change in training error. For larger values of δ as shown in Figure <ref>, whilst we see an effect on both training and validation, the effect on validation is much more stark. To show this in an intuitive way, in Figure <ref>, we use a heat map to show the difference from the best training and testing error as a function of δ and η. The best training error was 0 and attained at η=1, δ=0.001, whereas the best testing error was 0.13 and attained at η=1, δ=1.0. It is the difference from these values that is shown in Figure <ref> (so the top left square is 0 for training and similarly the bottom left for testing). As we increase (by decreasing the value of δ for a fixed α value of 0.01), the generalisation of the model suffers correspondingly. For each fixed value of δ, we see clearly that perturbations of greater magnitude cause greater harm to generalisation than training. We also note that for larger values of δ the perturbed optimisers suffer more gravely in terms of the effect on both training and validation. It is of course possible that for such large values of δ we have not converged even after 500 epochs. We show the full training curves in Figure <ref>. We observe that the generalisation of all algorithms is worsened by explicit limitation of movement in the sharp directions (and an increase of ), however for extremely low damping measures (which are typical in adaptive optimiser settings) there is no or very minimal impact in training performance (upper region of Figure <ref>(a). A consequence of this which is already employed in practical machine learning is the use of δ tuning. Essentially using larger than default values of δ (decreasing ) so as to not simply avoid problems of numerical stability but also generalise better. 
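For concreteness, the following NumPy sketch (our own illustrative code, not the experimental implementation; a small explicit diagonal Hessian is used so that the Ritz pairs can be taken as exact eigenpairs) shows the LanczosOPT-style update described above: the k retained sharp directions are scaled by 1/(λ_i+δ) and optionally attenuated by the factor η, while all remaining directions receive the gradient-descent-like scaling 1/δ, requiring only k projections rather than a full eigendecomposition.

```python
import numpy as np

def lanczos_opt_step(w, grad, ritz_vals, ritz_vecs, alpha, delta, eta=1.0):
    """One damped second-order step using k (Ritz value, Ritz vector) pairs.

    Directions spanned by ritz_vecs are scaled by 1/(eta*(lambda_i + delta));
    the orthogonal complement (the flat bulk) is scaled by 1/delta.
    """
    coeffs = ritz_vecs.T @ grad                          # projections onto sharp directions
    sharp_part = ritz_vecs @ (coeffs / (ritz_vals + delta)) / eta
    flat_part = (grad - ritz_vecs @ coeffs) / delta      # remaining directions, effective lr alpha/delta
    return w - alpha * (sharp_part + flat_part)

# Toy quadratic loss 0.5 * w^T H w with a few sharp outliers and a flat bulk.
rng = np.random.default_rng(1)
P, k = 100, 5
outliers = np.array([50.0, 30.0, 20.0, 10.0, 5.0])
bulk = rng.uniform(0.01, 0.1, size=P - k)
H = np.diag(np.concatenate([outliers, bulk]))

w = rng.standard_normal(P)
for _ in range(100):
    grad = H @ w
    # The exact top-k eigenpairs stand in for the Lanczos Ritz pairs here.
    w = lanczos_opt_step(w, grad, outliers, np.eye(P)[:, :k],
                         alpha=0.01, delta=0.01, eta=1.0)
print("loss after 100 steps:", 0.5 * w @ H @ w)
```

Setting eta to 3 or 10 in this sketch reproduces the perturbed optimisers LOPT[η] discussed above, which damp movement in the sharp directions only.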
Fashion MNIST: We repeat the experimental procedure for the FashionMNIST dataset  <cit.>, which paints an identical picture (at slightly higher testing error) The full training curves are given in Figure <ref>. § THE ROLE OF DAMPING Consider a general iterative optimiser that seeks to minimise the scalar loss L() for a set of model parameters ∈ℝ^P. Recall the k+1-th iteration of such an optimiser can be written[Ignoring additional features such as momentum and explicit regularisations.] as follows: _k+1←_k - α_k ^-1∇ L_batch(_k) where α_k is the global learning rate. For SGD, = whereas for adaptive methods, typically comprises some form of approximation to the Hessian i.e. ≈∇^2L_batch(_k). Writing this update in the eigenbasis of the Hessian[We assume this to be positive definite or that we are working with a positive definite approximation thereof.] ∇^2L_batch(_k) = ∑_i^Pλ_i_i_i^T∈ℝ^P× P, where λ_1≥λ_2≥…≥λ_P≥ 0 represent the ordered scalar eigenvalues, the parameter step takes the form: _k+1 = _k - ∑_i=1^Pα/λ_i+δ_i_i^T∇ L_batch(_k). Here, δ is a damping (or numerical stability) term. This damping term (which is typically grid searched <cit.> or adapted during training <cit.>) can be interpreted as a trust region <cit.> that is required to stop the optimiser moving too far in directions deemed flat (λ_i≈ 0), known to dominate the spectrum in practice <cit.>, and hence diverging. In the common adaptive optimiser Adam <cit.>, it is set to 10^-8. For small values of δ, α must also be small to avoid optimisation instability, hence global learning rates and damping are coupled in adaptive optimisers. §.§ Adaptive updates and damping The learning rate in the flattest (λ≈ 0) directions is approximately α/δ, which is larger than the learning rate in the sharpest (λ_i≫δ) directions α/δ+λ_i. This difference in per direction effective learning rate makes the best possible (damped) training loss reduction under the assumption that the loss function can be effectively modelled by a quadratic <cit.>. Crucially, however, it is agnostic to how accurately each eigenvector component of the update estimates the true underlying loss surface, which is described in Theorem <ref>. Assuming that the smallest eigenvalue λ_P≪δ, we see that = 1+ λ_1-λ_P/δ. This is in contrast to SGD where _k+1 = _k - ∑_i=1^Pα_i_i^T∇ L_batch(_k) and hence = 1. Note that we can ignore the effect of the overlap between the gradient and the eigenvectors of the batch Hessian because we can rewrite the SGD update in the basis of the batch Hessian eigenvectors and hence reduce the problem to one of the relative learning rates. The crucial point to note here is that the difference in is primarily controlled by the damping parameter: smaller values yield a larger , skewing the parameter updates towards flatter directions. To further explore our central conjecture for modern deep learning architectures (where a large number of matrix–vector products is infeasible) we employ the KFAC <cit.> and Adam <cit.> optimisers on the VGG-16 <cit.> network on the CIFAR-100 <cit.> dataset. The VGG-16 allows us to isolate the effect of , as opposed to the effect of different regularisation implementations for adaptive and non-adaptive methods as discussed by <cit.>. §.§ VGG16: a laboratory for adaptive optimisation The deep learning literature contains very many architectural variants of deep neural networks and a large number of engineering “tricks” which are employed to obtain state of the art results on a great variety of different tasks. 
The theory supporting the efficacy of such tricks and architectural designs is often wanting and sometimes entirely absent. Our primary objective in this work is to illuminate some theoretical aspects of adaptive optimisers such as appropriate damping and Hessian estimation, so we require a simple and clean experimental environment free from, where possible, interference from as many different competing effects. To this end, the VGG architecture <cit.> for computer vision is particularly appropriate. With 16 layers, the VGG has over 16 million parameters and is capable of achieving competitive test error on a variety of standard computer vision datasets while being trained without batch normalisation <cit.> or weight decay. Indeed, features such as weight decay and batch normalisation obscure the effect of learning rate and damping, meaning that even quite poor choices can ultimately give reasonable results given sufficient training iterations<cit.>. In contrast the VGG clearly exposes the effects of learning rate and damping, with training being liable to fail completely or diverge if inappropriate values are used. Furthermore as shown in <cit.> the VGG is highly unstable if too large a learning rate is used. This allows us to very explicitly test whether amendments provided by theory are helpful in certain contexts, such as training stability, as unstable training very quickly leads to divergence. Learning Rate Schedule For all experiments unless specified, we use the following learning rate schedule for the learning rate at the t-th epoch: α_t = α_0, if t/T≤ 0.5 α_0[1 - (1 - r)(t/T - 0.5)/0.4] if 0.5 < t/T≤ 0.9 α_0r, otherwise where α_0 is the initial learning rate. T is the total number of epochs budgeted for all CIFAR experiments. We set r = 0.01 for all experiments. §.§ KFAC with VGG-16 on CIFAR-100: By decreasing the global learning rate α whilst keeping the damping-to-learning-rate ratio κ = δ/α constant, we increase the , , which is determined by λ_i/κα+1. As shown in Tab. <ref> and in Figure <ref> we observe that as we increase the training performance is effectively unchanged, but generalisation suffers (35%→37.8%). Whilst decreasing the damping results in poor training for large learning rates, for very low learning rates the network efficiently trains with a lower damping coefficient. Such regimes further increase and we observe that they generalise more poorly. For α = 0.0001 dropping the damping coefficient δ from 0.0003 to 0.0001 drops the generalisation further to 60.2% and then 56% respectively. Similar to logistic regression, for both cases the drop in generalisation is significantly larger than the drop in training accuracy. Adam with VGG-16 on CIFAR-100: We employ Adam with a variety of learning rate and damping coefficients with results as shown in Tab. <ref> and in Figure <ref> and compare against a baseline SGD with α = 0.01 (corresponding to optimal performance). For the largest learning rate with which Adam trains (α = 0.0004) with the standard damping coefficient δ = 10^-8, we see that Adam under-performs SGD, but that this gap is reduced by simply increasing the damping coefficient without harming performance. Over-damping decreases the performance. For larger global learning rates enabled by a significantly larger than default damping parameter, when the damping is set too low, the training is unstable (corresponding to the dotted lines). Nevertheless, many of these curves with poor training out-perform the traditional setting on testing. 
We find that for larger damping coefficients δ = 0.005, 0.0075 Adam is able to match or even beat the SGD baseline, whilst converging faster. We show that this effect is statistically significant in Tab. <ref>. This provides further evidence that for real problems of interest, adaptive methods are not worse than their non-adaptive counterparts as argued by <cit.>. We note as shown in Tab. <ref>, that whilst increasing δ always leads to smaller spectral norm, this does not always coincide with better generalisation performance. We extend this experimental setup to include both batch normalisation <cit.> and decoupled weight decay <cit.>. We use a learning rate of 0.001 and a decoupled weight decay of [0,0.25]. For this experiment using a larger damping constant slightly assists training and improves generalisation, both with and without weight decay. ResNet-50 ImageNet. As shown in Figure <ref>,<ref>, these procedures have practical impact on large scale problems. Here we show that under a typical 90 epoch ImageNet setup <cit.>, with decoupled weight decay 0.01 for AdamW and 0.0001 for SGD, that by increasing the numerical stability constant δ the generalisation performance can match and even surpass that of SGD, which is considered state-of-the-art and beats AdamW without δ tuning by a significant margin. § OPTIMAL ADAPTIVE DAMPING FROM RANDOM MATRIX THEORY Recall the scaling applied in the direction of the i^th eigenvector in (<ref>). We make the following observation 1/λ_i+δ = 1/βλ_i+(1-β)·1/κ where κ = β^-1, β = (1+δ)^-1. Hence, using a damping δ is formally equivalent to applying linear shrinkage with factor β=(1+δ)^-1 to the estimated Hessian and using a learning rate of αβ. Shrinkage estimators are widely used in finance and data science, with linear shrinkage being a common simple method applied to improve covariance matrix estimation <cit.>. The practice of shrinking the eigenvalues while leaving the eigenvectors unchanged is well-established in the fields of sparse component analysis and finance <cit.>. In the shrinkage literature, the typically considered models are additive and multiplicative <cit.>, i.e. = + ,     = ^1/2^1/2 where is the observed matrix, is the non-corrupted (or signal) matrix, and is the noise matrix. White Wishart is the simplest example in the multiplicative case, and Wigner matrices are the simplest choice in the additive case. In generality, shrinkage estimators are estimators of given , and it is common to consider rotationally invariant (or, more precisely, equivariant) estimators which reduce the problem to computing the eigenvalues and eigenvectors of and then correcting, or shrinking, the eigenvalues while keeping the eigenvectors fixed to obtain improved estimation of . Optimal[Optimality is commonly defined in terms of Frobenuis norm, but some authors have considered the minimum variance loss <cit.>.] estimators are constructed in <cit.> and most recently <cit.>. We note in passing that such estimators are only possible in the large matrix limit, where functions of the inaccessible matrix can be replaced by equivalent quantities depending only on . The optimal shrinkage estimators are generally non-linear functions of the eigenvalues of and depend on integral transforms of the limiting spectral measure of and also on the noise matrix . 
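The damping–shrinkage observation above is elementary and can be checked numerically; the short sketch below (with an arbitrary small positive semi-definite matrix standing in for the batch Hessian and illustrative values of α and δ) confirms that a damped step with damping δ and learning rate α coincides with an undamped step using the linearly shrunk matrix βH + (1-β)I and learning rate αβ, where β = (1+δ)^-1.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
H = A @ A.T / 6                      # stand-in for a positive semi-definite batch Hessian
g = rng.standard_normal(6)           # stand-in for a batch gradient

alpha, delta = 0.1, 0.3
beta = 1.0 / (1.0 + delta)

# Damped step: alpha * (H + delta*I)^{-1} g
step_damped = alpha * np.linalg.solve(H + delta * np.eye(6), g)

# Equivalent linear-shrinkage step: (alpha*beta) * (beta*H + (1-beta)*I)^{-1} g
H_shrunk = beta * H + (1.0 - beta) * np.eye(6)
step_shrunk = alpha * beta * np.linalg.solve(H_shrunk, g)

print(np.allclose(step_damped, step_shrunk))   # True
```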
In some very special cases, the optimal shrinkage estimators simplify greatly, for example, in the multiplicative case, if is an inverse Wishart matrix, the linear shrinkage estimator = β + (1-β) = _^*||^*-_true||_2 is optimal and an explicit expression for the optimal β is found depending only on the dimensionality of the model and the noise variance <cit.>. In our optimisation context, the additive noise model is perhaps the most natural with being the true loss Hessian and the batch loss Hessian, however we cannot expect any special forms on or that will produce closed form expressions for the optimal rotational invariant estimator and the linear shrinkage estimator is almost certainly not optimal. We suggest that there is no particular reason to break with rotational invariance in this work, as intuitively any distinguished directions of H_batch are those of H_true. However linear shrinkage has the great advantage of being simple to integrate into existing adaptive optimisers and it acts intuitively to reduce the movement of the optimiser in pure-noise directions. In fact, it is known that general non-linear shrinkage estimators retain the property of increasing the smallest eigenvalues and decreasing the largest <cit.>. Our interpretation reveals that the damping parameter should not be viewed as a mere numerical convenience to mollify the effect of very small estimate eigenvalues, but rather that an optimal δ should be expected, representing the best linear approximation to the true Hessian and an optimal balancing of variance (the empirical Hessian) and bias (the identity matrix). This optimal choice of δ will produce an optimiser that more accurately descends the directions of the true loss. The linear shrinkage interpretation given by (<ref>) is an elementary algebraic relation but does not by itself establish any meaningful link between damping of adaptive optimisers and linear shrinkage estimators. To that end, we return to the random matrix model (<ref>) for the estimated Hessian: Let us write the Hessian as _batch = _true + where is a random matrix with 𝔼 = 0. Note that this model is entirely general, we have simply defined = _batch - 𝔼_batch and 𝔼_batch = _true. We then seek a linear shrinkage estimator (β) = β_batch + (1-β) such that E(β)= P^-1 ( - _true)^2 is minimised. Note that this is the same objective optimised by <cit.> to obtain optimal estimators for various models. In this context, we are not finding the optimal estimator for _true but rather the optimal linear shrinkage estimator. We have E(β) = 1/P[(β-1) _true + β + (1-β)]^2 ≡1/P[ (β - 1)_true+ _β]^2 where _β = β + (1-β). A natural assumption in the case of deep learning is that _true is low-rank, i.e. for P→∞ either rank(_true) = r is fixed or rank(_true) = o(P). Empirical evidence for this assumption is found in <cit.>. In this case the bulk of the spectrum of _β is the same as that of (β-1) _true + _β <cit.>. We will also assume that admits a deterministic limiting spectral measure μ_X such that 1/P∑_j=1^P δ_λ(X)_i→μ weakly almost surely. Say ω_X(x) dx = dμ(x). Then _β has limiting spectral density ω_Y(y) = β^-1ω_X(β^-1(y - 1 + β)). Then for large P E(β) ≈β^-1∫ y^2 ω_X(β^-1(y - 1 + β))  dy = ∫ (β x + 1 - β)^2 ω_X(x)   dx = β^2 μ_X(x^2) + (1-β)^2 as the centred assumption on means that ∫ xω_X(x)  dx = 0. μ_X(x^2) is shorthand for ∫ x^2ω_X(x)  dx. E(β) is thus minimised to leading order at β = (1 + μ_X(x^2))^-1. Recalling that β^-1 = (1+δ)^-1, this yields δ = μ_X(x^2) i.e. 
the optimal level of damping at large finite P is approximately δ = P^-1||X||_F^2, the per-parameter squared Frobenius norm of the Hessian noise. Note that the value (<ref>) is a very natural measure of the Hessian noise variance. Therefore if the random matrix model described above is appropriate and the linear shrinkage interpretation (<ref>) is meaningful, we should expect it to result in close to optimal performance of a given adaptive optimiser. The purpose of adaptive optimisers is to accelerate training, in part by allowing for larger stable learning rates. As discussed throughout this chapter, such optimisation speed often comes at the cost of degraded generalisation. In this context, `optimal performance' of adaptive optimisers should be taken to mean fast training and good generalisation. As we have discussed above, very large values of δ recover simple non-adaptive SGD, so using (<ref>) we should be able to obtain generalisation performance at least as good as SGD and faster optimisation than any other choice of δ, including the default very small values often used and the larger values considered in Section <ref>. The value of (<ref>) can be easily learned by estimating the variance of the Hessian. The Hessian itself cannot be computed exactly, as it is far too large for P ≥ O(10^7); however one can compute Hv⃗ (and hence ||Hv⃗||^2) for any vector v⃗, using ∇^2 L v⃗ = ∇(v⃗^T∇ L). The full approach is given in Algorithm <ref>. Extension to non-linear shrinkage. If, as we demonstrate below, our interpretation of damping as linear shrinkage is meaningful, it is natural to ask if we can replace linear shrinkage with more general non-linear shrinkage, effectively defining new adaptive optimisers that replace λ_i + δ in (<ref>) by f(λ_i) for some non-linear f. Indeed, non-linear shrinkage is known to outperform linear shrinkage in general <cit.>, so we should expect to see further improvements beyond our optimal damping approach, but there are substantial obstacles to progress in this direction. Absent the strongly simplifying assumptions that lead to linear shrinkage, one must handle integral transforms of the spectral density of H_batch to compute general non-linear shrinkage estimators. There are various approaches in the literature that make use of parametric and kernel estimation fits to these transforms or the spectral density itself <cit.>, and there are simpler approaches that use cross-validation to construct improved estimators of the true eigenvalue-eigenvector pairs <cit.>. It is, however, observed by Ledoit and Wolf <cit.> that these methods are infeasible for matrices larger than around 1000× 1000. Ledoit and Wolf <cit.> propose a new, non-parametric non-linear shrinkage estimator that is conceptually quite simple to implement and can scale to larger matrices, but careful inspection reveals that the required computation time for each shrinkage evaluation is nevertheless O(P^2), where in our case P is on the order of 10^7, so even this approach is infeasible.
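The following is a minimal PyTorch sketch of this variance estimate, the core of Algorithm <ref>: the Hessian noise norm is estimated with Hutchinson-style Rademacher probes and Hessian-vector products computed by the Pearlmutter trick. The function names (`hvp`, `estimate_damping`), the probe count, and the use of the batch-mean Hessian-vector product as a proxy for H_true v⃗ are our choices for illustration, not a verbatim transcription of Algorithm <ref>.

```python
import torch

def hvp(loss, params, v):
    # Hessian-vector product via the Pearlmutter trick: H v = d/dw (g^T v).
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_g = torch.cat([g.reshape(-1) for g in grads])
    hv = torch.autograd.grad(torch.dot(flat_g, v), params)
    return torch.cat([h.reshape(-1) for h in hv]).detach()

def estimate_damping(model, loss_fn, batches, n_probes=2):
    # Estimate delta ~ (1/P) E||X v||^2 with X = H_batch - H_mean, using Rademacher
    # probes v (Hutchinson), and the average over batches as a proxy for H_true v.
    params = [p for p in model.parameters() if p.requires_grad]
    P = sum(p.numel() for p in params)
    dev = params[0].device
    total, count = 0.0, 0
    for _ in range(n_probes):
        v = torch.randint(0, 2, (P,), device=dev).float() * 2 - 1  # Rademacher probe
        hvs = []
        for x, y in batches:  # a small list of mini-batches
            loss = loss_fn(model(x.to(dev)), y.to(dev))
            hvs.append(hvp(loss, params, v))
        hv_mean = torch.stack(hvs).mean(dim=0)
        for hv_b in hvs:
            total += (hv_b - hv_mean).pow(2).sum().item()
            count += 1
    return total / (count * P)
```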
We hence compare against a fixed set damping value δ and a learned damping value as given by our equation (<ref>). We find that the variance of the Hessian (<ref>) at a random point in weight space (such as at initialisation) or once network divergence has occurred is zero, hence the initial starting value cannot be learned as, with a damping of near zero, the network entirely fails to train (no change in training loss from random). This is to be expected, as in this case the local quadratic approximation to the loss inherent in adaptive methods breaks down. Hence we initialise the learning algorithm with some starting value δ^*, which is then updated every 100 training iterations using equation (<ref>). Strictly speaking we should update every iteration, but the value of 100 is chosen arbitrarily as a computational efficiency. Since we are using the variance of the Hessian, which is expensive to compute compared to a simple gradient calculation, we do not want to compute this quantity too often if it can be helped. We run our experiments on a logarithmic grid search in near factors of 3. So learning rates and damping rates, either flat or learned are on the grid of 0.0001,0.0003,0.001.... We find under this setup that the time taken per epoch against the flat damping schedule is only doubled. We get identical results for using a damping gap of 10 and so do not consider this to be a very relevant hyper-parameter. We further calculate the variance of the Hessian over a sub-sample of 10000 examples and do not calculate the variance sample by sample, but over batches of 128 to speed up the implementation. Under the assumption that the data is drawn i.i.d from the dataset the variance is simply reduced by a factor (1/B-1/N) ≈1/B for a small batch size. We do not consider the impact of using only a sub-sample of the data for estimation, but we expect similar results to hold compared to the entire dataset as long as the sub-sample size S≫ B. This should allow such a method to be used even for very large datasets, such as ImageNet (with 1-million images), for which a pass of the entire dataset is extremely costly. In theory the sub-sample size and mini-batch size for Hessian variance estimation could be two hyper-parameters which are tuned by considering the effect of reduction on training set or validation set loss metrics with the trade off for computational cost. We do not conduct such analysis here. We also incorporate an exponential moving average into the learned damping with a co-efficient of 0.7[This value is not tuned and in fact from our plots it may be advisable to consider higher values for greater stability] to increase the stability of the learned damping. §.§ Experiment on CIFAR-100 using KFAC to validate the optimal linear shrinkage For large damping values δ we simply revert to SGD with learning rate α/δ, so we follow the typical practice of second order methods and use a small learning rate and correspondingly small damping coefficient. However as shown in Figure <ref> the generalisation and optimisation are heavily dependent on the global learning rate, with larger learning rates often optimising less well but generalising better and vice versa for smaller learning rates. We hence investigate the impact of our damping learner on learning rates one order of magnitude apart. 
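To make the implementation details above concrete, the following sketch shows how the learned damping can be folded into a training loop: the damping is re-estimated every 100 iterations from the Hessian-variance estimate (the `estimate_damping` sketch above) over a sub-sample split into batches of 128, and smoothed with an exponential moving average of coefficient 0.7. `take_batches` is a hypothetical helper, the EMA is assumed to weight the old value, and how the damping is passed to the optimiser depends on the particular KFAC implementation.

```python
delta = delta_init          # the starting value delta*, since the variance is zero at initialisation
ema = 0.7                   # EMA coefficient from the text (assumed to weight the old value)

for it, (x, y) in enumerate(train_loader):
    if it % 100 == 0 and it > 0:
        # Re-estimate the damping every 100 iterations on a sub-sample of ~10000 examples.
        sub_sample = take_batches(train_dataset, n_examples=10000, batch_size=128)
        delta = ema * delta + (1 - ema) * estimate_damping(model, loss_fn, sub_sample)
    optimiser.damping = delta   # however the damping is exposed by the optimiser in use
    optimiser.zero_grad()
    loss_fn(model(x), y).backward()
    optimiser.step()
```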
In the very low learning rate regime, we show that our method achieves significantly improved training stability with low starting damping and fast convergence; in the large learning rate regime, we even exceed the SGD validation set result. Training KFAC with Auto-Damping: We show the results for a global learning rate of 0.0001 in Figure <ref>. We see that for the flat damping methods with low values of damping, training becomes unstable and diverges, despite an initially fast start. Higher damped methods converge, but slowly. In stark contrast, our adaptive damping method is relatively insensitive to the chosen initial value. We show here δ^* = α, 3α, 10α; all converge, and moreover significantly faster than all flat damping methods. The smaller initial damping coefficients δ = α, 3α converge faster than the larger and, interestingly, follow very similar damping trajectories throughout until the very end of training, as shown in Figure <ref>. Getting Great Generalisation with KFAC and Auto-Damping: We similarly train KFAC on the VGG-16 with a larger learning rate of 0.001, in order to achieve better generalisation. Here we see in Figure <ref> that relatively low values of flat damping, such as 0.01 and 0.03, very quickly diverge, whereas a large value of 0.1 converges slowly to a reasonable test error. The corresponding learned damping curves initialised at 0.01 and 0.03, however, converge quickly, and the 0.03-initialised damping curve even beats the generalisation performance of the large flat damped version and the test result of SGD on 3x as many training epochs. A further look at the value of adaptive damping: To elucidate the impact and workings of the adaptive damping further, we consider a select set of curves for the learning rate α = 0.0001, shown in Figure <ref>. Here we see that, starting with an initial damping of δ=α, the adaptive method reaches a comparable generalisation score to the flat damping of δ=0.03 but at a much faster convergence rate. The initial damping of δ=0.03 converges not quite as quickly, but trains and generalises better than its lower starting damping counterpart. Note from Figure <ref> that even though the damping of this curve reaches ≈ 0.1, starting with a flat damping of 0.1 never achieves a comparable generalisation (or even trains well). This implies, as expected, that it is important to adjust the damping during training. §.§ Adam with Auto-Damping Given that Adam does not employ an obvious curvature matrix, it is curious to consider whether our learned damping estimator can be of practical value for this optimiser. As discussed in the previous section, Adam's implied curvature can be considered a diagonal approximation to the square root of the gradient covariance. The covariance of the gradients has been investigated to have similarities to the Hessian <cit.>. However the nature of the square root, derived from the regret bound in <cit.>, presents an interesting dilemma. In the case of very small eigenvalues of the implied curvature, the square root actually reduces their impact on the optimisation trajectory, hence it is very plausible that the learned damping could be too harsh (as it is expected to work optimally for the eigenvalues of the curvature itself and not their square roots). This is exactly what we see in Figure <ref>.
Whilst an increase in learning rate and damping, along with auto-damping, improves both the convergence and the validation result over the standard baseline (where the damping is kept at the default value and the largest learning rate which stably trains is used), the improvements are small and do not make up the gap with SGD. More specifically, they are not better than just using a larger learning rate in combination with a larger flat damping, defeating the purpose of learning the damping factor online. To alleviate the effect of overly harsh damping, we consider an alternate learning rate schedule where the base learning rate is increased by a factor of 5 early in training and then subsequently decreased. The constant 5 is not tuned but simply a place-holder to consider a more aggressive learning rate schedule to counteract the effect of the damping learner. These curves are marked with α^* in Figure <ref>. Warm-up Learning Rate Schedule: For all experiments, unless specified otherwise, we use the following schedule for the learning rate at the t-th epoch: α_t = α_0 if t/T ≤ 0.1; α_t = α_0[1 + (κ-1)(t/T - 0.1)/0.2] if 0.1 < t/T ≤ 0.3; α_t = α_0[κ - (κ - r)(t/T - 0.3)/0.6] if 0.3 < t/T ≤ 0.9; and α_t = α_0 r otherwise, where α_0 is the initial learning rate. T is the total number of epochs budgeted for all CIFAR experiments. We set r = 0.01 and κ = 5. While this introduces some slight training instability early in training, which could potentially be managed by altering the schedule, we find that such a schedule boosts the validation performance, particularly so for auto-damped methods, as shown by the blue curve in Figure <ref>, which surpasses the generalisation of SGD (shown in Figure <ref>). To more clearly expose the combined impact of adaptive damping and this alternative learning schedule, we consider the variations in Figure <ref> for a learning rate and damping both equal to 0.0001. Here we see that the aggressive learning rate schedule with flat damping diverges, whereas the auto-damping stabilises training, allowing for convergence to a solution with excellent generalisation. We see in Figure <ref> that the damping coefficient reacts to this large learning rate increase by increasing its rate of damping early, stabilising training. We also show for reference that the typical linear decay schedule, with a larger learning rate and initial damping, does not supersede the validation result of its smaller learning rate and flat damping counterpart (it does however train better). This demonstrates the necessity of an alternative learning rate schedule to bring out the value of the adaptive damping. We remark however that optimal results in deep learning almost always require some degree of hand-crafted tuning of the learning rate. Our adaptive damping method is not proposed as a panacea, but just an optimal method of setting the damping coefficient. Since changing the damping coefficient effectively changes the geometry of the loss surface, it is entirely reasonable that the learning rate may have to be tweaked to give best results. § CONCLUSION In this chapter we have shown, using a spiked random matrix model for the batch loss of deep neural networks, that we expect sharp directions of the loss surface to retain more information about the true loss surface compared to flatter directions. For adaptive methods, which attempt to minimise an implicit local quadratic of the sampled loss surface, this leads to sub-optimal steps with worse generalisation performance.
We further investigate the effect of damping on the solution sharpness and find that increasing damping always decreases the solution sharpness, linking to prior work in this area. We find that for large neural networks an increase in damping both assists training and is even able to best the SGD test baseline. An interesting consequence of this finding is that it suggests that damping should be considered an essential hyper-parameter in adaptive gradient methods as it already is in stochastic second order methods. Moreover, our random matrix theory model motivates a novel interpretation of damping as linear shrinkage estimation of the Hessian. We establish the validity of this interpretation by using shrinkage estimation theory to derive an optimal adaptive damping scheme which we show experimentally to dramatically improve optimisation speed with adaptive methods and closes the adaptive generalisation gap. Our work leaves open several directions for further investigation and extension. Mathematically, there is the considerable challenge of determining optimal assumptions on the network, loss function and data distribution such that the key outlier overlap result in Theorem <ref>, or sufficiently similar analogues thereof, can be obtained. On the experimental side, we have restricted ourselves to computer vision datasets and a small number of appropriate standard network architectures. These choices helped to maintain clarity on the key points of investigation, however they are clearly limiting. In particular, it would be natural to reconsider our investigations in situations for which adaptive optimisers typically obtain state of the art results, such as modern natural language processing <cit.>. Practically speaking, we have proposed a novel, theoretically motivated and effective adaptive damping method, but it is reliant on relatively expensive Hessian variance estimates throughout training. Future work could focus on cheaper methods of obtaining the required variance estimates. CHAPTER: APPEARANCE OF LOCAL RANDOM MATRIX STATISTICS The content of this chapter was published first as a pre-print in February 2021 (<https://arxiv.org/abs/2102.06740>) and later as a journal article: “Appearance of random matrix theory in deep learning”. Nicholas P Baskerville, Diego Granziol and Jonathan P Keating. Physica A: Statistical Mechanics and its Applications, 590:126742, 2022. NPB performed the calculations, designed, coded and ran most of the experiments and wrote most of the random matrix theory aspects of the paper. DG assisted with writing code, ran the training of a few of the neural networks and wrote some of the more machine learning oriented sections of the paper. JPK proposed the research idea, advised throughout and contributed several sections to the paper. Anonymous reviewers spotted some minor errors, advised on changes of presentation and extra experiments and provided useful references. § PRELIMINARIES Consider a neural network with weights w⃗∈ℝ^P and a dataset with distribution ℙ_data. For the purposes of our discussion, a neural network, f_w⃗ say, is just a non-linear function from some ℝ^d to some ℝ^c, parametrised by w⃗. Neural networks can be defined in many different ways in terms of their weights (the architecture of the network), but these details will not play role in our discussion. What will be important is that the number of weights P will be large, i.e. approaching 10,000 even in the simplest of cases. 
Let L(w⃗, x⃗) be the loss of the network for a single datum x⃗ and let 𝒟 denote any finite sample of data points (a dataset) from ℙ_data. A simple example of L is the squared error L(w⃗, (x⃗, y⃗)) = ||f_w⃗(x⃗) - y⃗||_2^2, where ℙ_data is a distribution on tuples of features x⃗ and labels y⃗. The true loss is given by ℒ_true(w⃗) = 𝔼_x⃗∼ℙ_data L(w⃗, x⃗) and the empirical loss (or training loss) is given by ℒ_emp(w⃗, 𝒟) = 1/|𝒟|∑_x⃗∈𝒟 L(w⃗, x⃗). The true loss is a deterministic function of the weights, while the empirical loss is a random function with the randomness coming from the random sampling of the finite dataset 𝒟. The empirical Hessian H_emp(w⃗) = ∇^2ℒ_emp(w⃗) describes the loss curvature at the point w⃗ in weight space. By the spectral theorem, the Hessian can be written in terms of its eigenvalue/eigenvector pairs H_emp = ∑_i^Pλ_i v⃗_i v⃗_i^T, where the dependence on w⃗ has been dropped to keep the notation simple. The eigenvalues of the Hessian are particularly important, being explicitly required in second-order optimisation methods, and characterising the stationary points of the loss as local minima, local maxima or generally saddle points of some other index. For a matrix drawn from a probability distribution, its eigenvalues are random variables. The eigenvalue distribution is described by the joint probability density function (j.p.d.f) p(λ_1, λ_2, …, λ_P), also known as the P-point correlation function. The simplest example is the empirical spectral density (ESD), ρ^(P)(λ) = 1/P∑_i^Pδ(λ-λ_i). Integrating ρ^(P)(λ) over an interval with respect to λ gives the fraction of the eigenvalues in that interval. Taking an expectation over the random matrix ensemble, we obtain the mean spectral density 𝔼ρ^(P)(λ), which is a deterministic probability distribution on ℝ. Alternatively, taking the P→∞ limit, assuming it exists, gives the limiting spectral density (LSD) ρ, another deterministic probability distribution on ℝ. A key feature of many random matrix ensembles is self-averaging or ergodicity, meaning that the leading order term (for large P) in 𝔼ρ^(P) agrees with ρ. Given the j.p.d.f, one can obtain the mean spectral density, known as the 1-point correlation function (or any other k-point correlation function), by marginalisation 𝔼ρ^(P)(λ) = ∫ p(λ, λ_2, …, λ_P) dλ_2… dλ_P. A GOE matrix is an example of a Wigner random matrix, namely a real-symmetric (or complex-Hermitian) matrix with otherwise i.i.d. entries and off-diagonal variance σ^2.[The GOE corresponds to taking the independent matrix entries to be normal random variables.] The mean spectral density for Wigner matrices is known to be Wigner's semicircle <cit.> ρ_SC(λ) = 1/(2πσ^2 P)√(4Pσ^2 - λ^2) 1_{|λ|≤ 2σ√(P)}. The radius of the semicircle[Using the Frobenius norm identity ∑_i^Pλ_i^2 = P^2σ^2.] is proportional to √(P)σ, hence scaling Wigner matrices by 1/√(P) leads to a limit distribution when P→∞. This is the LSD. With this scaling, there are, on average, 𝒪(P) eigenvalues in any open subset of the compact spectral support. In this sense, the mean (or limiting) spectral density is macroscopic, meaning that, as P→∞, one ceases to see individual eigenvalues, but rather a continuum with some given density. § MOTIVATION: MICROSCOPIC UNIVERSALITY Random Matrix Theory was first developed in physics to explain the statistical properties of nuclear energy levels, and later used to describe the spectral statistics in atomic spectra, condensed matter systems, quantum chaotic systems etc; see, for example <cit.>.
None of these physical systems exhibits a semicircular empirical spectral density. However they all generically show agreement with RMT at the level of the mean eigenvalue spacing when local spectral statistics are compared. Our point is that while neither multi-layer perceptron (MLP) nor Softmax Regression Hessians are described by the Wigner semicircle law which holds for GOE matrices (c.f. Figure 1a) – their spectra contain outliers, large peaks near the origin and the remaining components of the histogram also do not match the semicircle – nevertheless Random Matrix Theory can still (and we shall demonstrate does) describe spectral fluctuations on the scale of their mean eigenvalue spacing. It is worth noting in passing that possibilities other than random-matrix statistics exist and occur. For example, in systems that are classically integrable, one finds instead Poisson statistics <cit.>; similarly, Poisson statistics also occur in disordered systems in the regime of strong Anderson localisation <cit.>; and for systems close to integrable one finds a superposition of random-matrix and Poisson statistics <cit.>. So showing that Random Matrix Theory applies is far from being a trivial observation. Indeed it remains one of the outstanding challenges of mathematical physics to prove that the spectral statistics of any individual Hamiltonian system are described by it in the semiclassical limit. Physics RMT calculations re-scale the eigenvalues to have a mean level spacing of 1 and then typically look at the nearest neighbour spacings distribution (NNSD), i.e. the distribution of the distances between adjacent pairs of eigenvalues. One theoretical motivation for considering the NNSD is that it is independent of the Gaussianity assumption and reflects the symmetry of the underlying system. It is the NNSD that is universal (for systems of the same symmetry class) and not the average spectral density, which is best viewed as a parameter of the system. The aforementioned transformation to give mean spacing 1 is done precisely to remove the effect of the average spectral density on the pair correlations leaving behind only the universal correlations. To the best of our knowledge no prior work has evaluated the NNSD of artificial neural networks and this is a central focus of this chapter. In contrast to the LSD, other k-point correlation functions are also normalised such that the mean spacing between adjacent eigenvalues is unity. At this microscopic scale, the LSD is locally constant and equal to 1 meaning that its effect on the eigenvalues' distribution has been removed and only microscopic correlations remain. In the case of Wigner random matrices, for which the LSD varies slowly across the support of the eigenvalue distribution, this corresponds to scaling by √(P). On this scale the limiting eigenvalue correlations when P→∞ are universal; that is, they are the same for wide classes of random matrices, depending only on symmetry <cit.>. For example, this universality is exhibited by the NNSD. Consider a 2× 2 GOE matrix, in which case the j.p.d.f has a simple form: p(λ_1, λ_2) ∝ |λ_1 - λ_2| e^-1/2(λ_1^2 + λ_2^2). Making the change of variables ν_1 = λ_1 - λ_2, ν_2 = λ_1 + λ_2, integrating out ν_2 and setting s = |ν_1| results in a density ρ_Wigner(s) = π s/2e^-π/4s^2, known as the Wigner surmise (see Figure <ref>). For larger matrices, the j.p.d.f must include an indicator function {λ_1≤λ_2≤…λ_P} before marginalisation so that one is studying pairs of adjacent eigenvalues. 
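For readers who prefer a numerical check of the surmise just derived, the following NumPy snippet samples many independent 2× 2 GOE matrices (diagonal variance 1, off-diagonal variance 1/2, matching the j.p.d.f above), normalises the spacings to unit mean, and compares the histogram with the Wigner surmise; the seed and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# 2x2 GOE: diagonal entries N(0,1), off-diagonal entry N(0, 1/2).
a = rng.normal(size=n)
c = rng.normal(size=n)
b = rng.normal(scale=np.sqrt(0.5), size=n)

s = np.sqrt((a - c) ** 2 + 4 * b ** 2)   # eigenvalue spacing |lambda_1 - lambda_2|
s /= s.mean()                             # rescale to unit mean spacing

hist, edges = np.histogram(s, bins=50, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
surmise = np.pi * centres / 2 * np.exp(-np.pi * centres ** 2 / 4)
print(float(np.max(np.abs(hist - surmise))))   # -> close to 0
```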
While the Wigner surmise can only be proved exactly, as above, for the 2× 2 GOE, it holds to high accuracy for the NNSD of GOE matrices of any size provided that the eigenvalues have been scaled to give mean spacing 1.[An exact formula for the NNSD of GOE matrices of any size, and one that holds in the large P limit, can be found in <cit.>.] The Wigner surmise density vanishes at 0, capturing `repulsion' between eigenvalues that is characteristic of RMT statistics, in contrast to the distribution of entirely independent eigenvalues given by the Poisson law ρ_Poisson(s) = e^-s. The Wigner surmise is universal in that the same density formula applies to all real-symmetric random matrices, not just the GOE or Wigner random matrices. § METHODOLOGY Prior work <cit.> focusing on the Hessian empirical spectral density has utilised fast Hessian vector products <cit.> in conjunction with Lanczos <cit.> methods. However, these methods approximate only macroscopic quantities like the spectral density, not microscopic statistics such as nearest neighbour spectral spacings. For modern neural networks, the 𝒪(P^3) Hessian eigendecomposition cost will be prohibitive, e.g. for a Residual Network (Resnet) <cit.> with 34 layers P=10^7. Hence, We restrict to models small enough to perform exact full Hessian computation and eigendecomposition. We consider single layer neural networks for classification (softmax regression), 2-hidden-layer MLPs[Hidden layer widths: 10, 100.] and 3 hidden-layer MLPs[Hidden layer widths: 10, 100, 100.]. On MNIST <cit.>, the Hessians are of size 7850× 7850 for logistic regression, 9860× 9860 for the small MLP and 20060× 20060 for the larger 3 hidden-layer MLP, so can be computed exactly by simply applying automatic differentiation twice, and the eigenvalues can be computed exactly in a reasonable amount of time. We also consider a single layer applied to CIFAR-10 <cit.> classification with pre-trained Resnet-34 embedding features <cit.>. While we cannot at present study the full Hessian of, for example, a Resnet-34, we can study the common transfer learning use-case of training only the final layer on some particular task <cit.>. The Hessians can be computed at any data point or over any collection of data points. We consider Hessians computed over the entire datasets in question, and over batches of size 64. We separately consider test and train sets. In order to extend the relevance of our analysis to beyond logistic regression and MLP, we consider one of the simplest convolutional neural networks (CNN) of the form of LeNet <cit.> on CIFAR-10. Compared to the standard LeNet (which has over 50000 parameters) we reduce the number of neurons in the first fully connected layer from 120 to 35 and the second from 84 to 50. Note that the resulting architecture contains a bottleneck in the intermediate layer, in contrast to the “hour-glass” shapes that are necessary to maintain manageable parameter numbers with full MLP architectures. Despite reducing the total number of parameters by a factor of 3 we find the total validation accuracy drop to be no more than 2%. The total validation accuracy of 69% is significantly below state of the art ≈ 95%, but we are clearly in the regime where significant learning can and does take place, which we consider sufficient for the purposes of this manuscript. 
We also extend our experiments beyond the cross entropy loss function, by considering a regression problem (L_2 loss) and beyond the high-dimensional feature setting of computer vision with the Bike dataset[https://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Datasethttps://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset (accessed 14/10/21)] which has only 13-dimensional feature vectors and a single-dimensional regressand (see Appendix <ref> for details of our data pre-processing). The architecture in this case widens considerably in the first layer (from 13 inputs to 100 neurons) and that gradually tapers to the single output. The final test loss (i.e. mean squared error) of the trained model is 0.044 which is competitive with baseline results <cit.>[<cit.> report an RMSE of 0.220 on Bike (which corresponds to 0.048 mean squared error) using a Gaussian process regression model with exact inference.] Training details:All networks were trained using SGD for 300 epochs with initial learning rate 0.003, linear learning rate decay to 0.00003 between epoch 150 and 270, momentum 0.9 and weight decay 5× 10^-4. We use a PyTorch <cit.> implementation. Full code to reproduce our results is made available [<https://github.com/npbaskerville/dnn-rmt-spacings>]. Full descriptions of all network architectures are given in the Appendix <ref>. § SPECTRAL SPACING STATISTICS IN RMT Consider a random P× P matrix M_P with ordered λ_1 ≤λ_2 ≤…≤λ_P. Let I_ave be the mean spectral cumulative density function for the random matrix ensemble from which M_P is drawn. The unfolded spectrum is defined as l_i = I_ave(λ_i). The unfolded spacings are then defined as s_i = l_i - l_i-1,     i=2, …, P. With this definition, the mean of the s_i is unity, which means that this transformation has brought the eigenvalues on to the microscopic scale on which universal spectral spacing statistics emerge. We are investigating the presence of Random Matrix Theory statistics in neural networks by considering the nearest neighbour spectral spacings of their Hessians. Within the Random Matrix Theory literature, it has been repeatedly observed <cit.> that the unfolded spacings of a matrix with RMT pair correlations follow universal distributions determined only by the symmetry class of the M_P. Hessians are real symmetric, so the relevant universality class is GOE and therefore the unfolded neural network spacings should be compared to the Wigner surmise ρ_Wigner(s) = π s/2e^-π/4s^2. A collection of unfolded spacings s_2,…, s_P from a matrix with GOE spacing statistics should look like a sample of i.i.d. draws from the Wigner surmise density (<ref>). For some known random matrix distributions, I_ave may be available explicitly, or at least via highly accurate quadrature methods from a known mean spectral density. For example, for the P× P GOE <cit.> I_ave^GOE(λ) is given by: P[1/2 + λ/2π P√(2P - λ^2) + 1/πarctan(λ/√(2 P - λ^2))]. However, when dealing with experimental data where the mean spectral density is unknown, one must resort to using an approximation to I_ave. Various approaches are used in the literature, including polynomial spline interpolation <cit.>. The approach of <cit.> is most appropriate in our case, since computing Hessians over many mini-batches of data results in a large pool of spectra which can be used to accurately approximate I_ave simply by the empirical cumulative density. Suppose that we have m samples (M^(i)_P)_i=1^m from a random matrix distribution over symmetric P× P matrices. 
Fix some integers m_1, m_2 > 0 such that m_1 + m_2 = m. The spectra of the matrices (M^(i)_P)_i=1^m_1 can then be used to construct an approximation to I_ave. More precisely, let Λ_1 be the set of all eigenvalues of the (M^(i)_P)_i=1^m_1, then we define Ĩ_ave(λ) = 1/|Λ_1| |{λ' ∈Λ_1 |λ' < λ}|. For each of the matrices (M^(i)_P)_i=m_1 + 1^m, one can then use Ĩ_ave to construct their unfolded spacings. When the matrix size P is small, one can only study the spectral spacing distribution by looking over multiple matrix samples. However, the same spacing distribution is also present for a single matrix in the large P limit. A clear disadvantage of studying unfolded nearest neighbour spectral spacings with the above methods is the need for a reasonably large number of independent matrix samples. This rules-out studying the unfolded spacings of a single large matrix. Another obvious disadvantage is the introduction of error by the approximation of I_ave, giving the opportunity for local spectral statistics to be distorted or destroyed. An alternative statistic is the consecutive spacing ratio of <cit.>. In the above notation, the ratios for a single P× P matrix are defined as r_i= λ_i - λ_i-1/λ_i-1 - λ_i-2,    2 ≤ i ≤ P. <cit.> proved a `Wigner-like surmise' for the spacing ratios, which for the GOE is P(r) = 27(r + r^2)/8(1 + r + r^2)^5/2. In our experiments, we can compute the spacing ratios for Hessians computed over entire datasets or over batches, whereas the unfolded spacing ratios can only be computed in the batch setting, in which case a random 2/3 of the batch Hessians are reserved for computing Ĩ_ave and the remaining 1/3 are unfolded and analysed. This split is essentially arbitrary, except that we err on the side of using more to compute Ĩ_ave since even a single properly unfolded spectrum can demonstrate universal local statistics. § RESULTS We display results as histograms of data along with a plot of the Wigner (or the Wigner-like) surmise density. We make a few practical adjustments to the plots. Spacing ratios are truncated above some value, as the presence of a few extreme outliers makes visualisation difficult. We choose a cut-off at 10. Note that around 0.985 of the mass of the Wigner-like surmise is below 10, so this is a reasonable adjustment. The hessians have degenerate spectra. The Wigner surmise is not a good fit to the observed unfolded spectra if the zero eigenvalues are retained. Imposing a lower cut-off of 10^-20 in magnitude is sufficient to obtain agreement with Wigner.[For example, in the case of the 3-hidden-layer MLP on MNIST shown in Figure <ref>, among 157 batch-wise spectra the proportion of eigenvalues below the cut-off was between 0.29 and 0.40.] This is below the machine precision, so these omitted eigenvalues are indistinguishable from 0. §.§ MNIST and MLPs We show results in Figures <ref> and <ref>, with further plots in the Appendix. We also considered randomly initialised networks and we evaluated the Hessians over train and test datasets separately in all cases. Unfolded spacings were computed only for Hessians evaluated on batches of 64 data points, while spacing ratios were computed in batches and over the entire dataset. We observe a striking level of agreement between the observed spectra and the GOE. There was no discernible difference between the train and test conditions, nor between batch and full dataset conditions, nor between trained and untrained models. 
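As a concrete reference for the procedures described above, the following NumPy sketch collects the unfolding, nearest neighbour spacing and spacing ratio computations together with the two surmise densities. The function names are ours, and near-zero eigenvalues should be removed (as with the 10^-20 cut-off above) before these are applied.

```python
import numpy as np

def unfold(eigs, pool_eigs):
    # Unfold a spectrum using the empirical CDF I_ave built from a separate pool of
    # spectra (the m_1 matrices above); returns positions with mean spacing ~ 1.
    pool = np.sort(pool_eigs)
    cdf = np.searchsorted(pool, np.sort(eigs)) / len(pool)
    return cdf * len(eigs)

def nn_spacings(unfolded):
    # Nearest neighbour spacings s_i = l_i - l_{i-1} of an unfolded spectrum.
    return np.diff(np.sort(unfolded))

def spacing_ratios(eigs):
    # Consecutive spacing ratios r_i; no unfolding is required for this statistic.
    d = np.diff(np.sort(eigs))
    return d[1:] / d[:-1]

def wigner_surmise(s):
    # GOE nearest neighbour spacing surmise.
    return np.pi * s / 2 * np.exp(-np.pi * s ** 2 / 4)

def goe_ratio_density(r):
    # 'Wigner-like surmise' for GOE consecutive spacing ratios.
    return 27 * (r + r ** 2) / (8 * (1 + r + r ** 2) ** 2.5)
```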
Note that the presence of GOE statistics for the untrained models is not a foregone conclusion. Of course, the weights of the model are indeed random Gaussian, but the Hessian is still a function of the data set, so it is not the case the Hessian eigenvalue statistics are bound to be GOE a priori. Overall, the very close agreement between Random Matrix Theory predictions and our observations for several different architectures, model sizes and datasets demonstrates a clear presence of RMT statistics in neural networks. Our results indicate that models for the loss surfaces of large neural networks should include assumptions of GOE local statistics of the Hessian, but ideally avoid such assumptions on the global statistics. To further illustrate this point, consider a Gaussian process ℒ_emp∼𝒢𝒫(0, k) where k is some kernel function. Following from our Gaussian process definition, the covariance of derivatives of the empirical loss can be computed using a well-known result (see <cit.> equation 5.5.4), e.g. Cov(∂_i ℒ_emp(w⃗), ∂_jℒ_emp(w⃗') ) = ∂_w_i∂_w'_j k(w⃗, w⃗') and further, assuming a stationary kernel k(w⃗, w⃗') = k(-1/2||w⃗ - w⃗'||_2^2) (note abuse of notation) Cov(∂_i ℒ_emp(w⃗), ∂_jℒ_emp(w⃗') ) = (w_i - w'_i)(w'_j - w_j) k”(-1/2||w⃗ - w⃗'||_2^2) + δ_ijk'(-1/2||w⃗ - w⃗'||_2^2). Differentiating (<ref>) further, we obtain Cov(∂_ijℒ_emp(w⃗), ∂_klℒ_emp(w⃗)) = k”(0)(δ_ikδ_jl + δ_ilδ_jk) + k'(0)^2 δ_ijδ_kl The Hessian _emp has Gaussian entries with mean zero, so the distribution of _emp is determined entirely by k'(0) and k”(0). Neglecting to choose k explicitly, we vary the values of k'(0) and k”(0) to produce nearest neighbour spectral spacings ratios and spectral densities. The histograms for spectral spacing ratios are indistinguishable and agree very well with the GOE, as shown in Figure <ref>. The spectral densities are shown in Figure <ref>, including examples with rank degeneracy, introduced by defining k only on a lower-dimensional subspace of the input space, and outliers, introduced by adding a fixed diagonal matrix to the Hessian. Figure <ref> shows varying levels of agreement with the semi-circle law, depending on the choice of k'(0), k”(0). The covariance structure in (<ref>) is very close to that of a GOE matrix. If k'(0) = 0 and k”(0)≠ =0, then the covariance would be exactly that of a GOE matrix. With general values k'(0)≠ 0 ≠ k”(0), we see that the second term is non-zero on, and only on, the diagonals of the Hessian, however it does induce dependence between all diagonal elements. We have been unable to compute the limiting spectral density exactly but we suspect it may well be possible. §.§ Beyond the MLP Figure <ref> shows the mean spectral density and adjacent spacing ratios for the Hessian of a CNN trained on CIFAR10. As with the MLP networks and MNIST data considered above, we see an obviously non-semicircular mean level density but the adjacent spacing ratios are nevertheless described by the universal GOE law. §.§ Beyond image classification Figure <ref> shows the mean spectral density and adjacent spacing ratios for the Hessian of an MLP trained on the Bike dataset. Once again we see an obviously non-semicircular mean level density but the adjacent spacing ratios are nevertheless described by the universal GOE law. This serves to demonstrate that there is nothing special about image data or, more importantly, high input feature dimension, since the Bike dataset has only 13 input features. 
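A small NumPy sketch shows how a Hessian with exactly the covariance structure derived above can be sampled: it is a scaled GOE-like matrix plus an independent Gaussian multiple of the identity, the latter reproducing the k'(0)^2 δ_ijδ_kl term and the dependence between all diagonal entries. This construction is our own illustration of the covariance, not the procedure used to generate the figures.

```python
import numpy as np

def sample_gp_hessian(P, kp, kpp, rng):
    # Symmetric random matrix with Cov(H_ij, H_kl) = kpp (d_ik d_jl + d_il d_jk) + kp^2 d_ij d_kl,
    # where kp = k'(0) and kpp = k''(0) >= 0.
    A = rng.normal(size=(P, P))
    W = (A + A.T) / np.sqrt(2)      # GOE-like: Cov(W_ij, W_kl) = d_ik d_jl + d_il d_jk
    xi = rng.normal()               # one shared Gaussian couples all diagonal entries
    return np.sqrt(kpp) * W + kp * xi * np.eye(P)

rng = np.random.default_rng(1)
H = sample_gp_hessian(500, kp=1.0, kpp=0.5, rng=rng)
d = np.diff(np.linalg.eigvalsh(H))           # eigvalsh returns sorted eigenvalues
ratios = d[1:] / d[:-1]
print(np.mean(np.minimum(ratios, 10.0)))     # compare a histogram of `ratios` with the GOE surmise
```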
§.§ Beyond the Hessian Given that the Hessian is not the only matrix of interest in Machine Learning, it is pertinent to study whether our empirical results hold more generally. There have been lots of investigations for the Gauss-Newton <cit.>, or generalised Gauss-Newton (which is the analogue of the Gauss-Newton when using the cross entropy instead of square loss) matrices, particularly in the fields of optimisation <cit.>. We consider the Gauss-Newton of the network trained on the Bike dataset with square loss. In this case the Gauss Newton G⃗ = J⃗^⃗T⃗J⃗ shares the same non-null subspace as the Neural Tangent Kernel (NTK) <cit.>, where J⃗ denotes the Jacobian, i.e the derivative of the output with respect to the weights, which in this case is simply a vector. The NTK is used for the analysis of trajectories of gradient descent and is particularly interesting for large width networks, where it can be analytically shown that weights remain close to their initialisation and the network is well approximated by its linearisation. Figure <ref> shows the mean spectral density and adjacent spacing ratios for the Gauss-Newton matrix of an MLP trained on the Bike dataset. The results are just as for the Hessians above: universal GOE spacings, but the mean density is very much not semicircular. This is an interesting result because even for a different matrix employed in a different context we still see the same universal RMT spacings. § CONCLUSION We have demonstrated experimentally the existence of random matrix statistics in small neural networks on the scale of the mean eigenvalue separation. This provides the first direct evidence of universal RMT statistics present in neural networks trained on real datasets. Hitherto the role of random matrix theory in deep learning has been unclear. Prior work has studied theoretical models with specific assumptions leading to specific random matrix ensembles. Though certainly insightful, it is not clear to what extent any of these studies are applicable to real neural networks. This work aims to shift the focus by demonstrating the clear presence of universal random matrix behaviour in real neural networks. We expect that future theoretical studies will start from this robust supposition. When working with a neural network on some dataset, one has information a priori about its Hessian. Its distribution and correlation structure may well be entirely inaccessible, but correlations between Hessian eigenvalues on the local scale can be assumed to be universal and overall the matrix can be rightly viewed as a random matrix possessing universal local statistics. We focus on small neural networks where Hessian eigendecomposition is feasible. Future research that our work motivates could develop methods to approximate the level spacing distribution of large deep neural networks for which exact Hessian spectra cannot be computed. If the same RMT statistics are found, this would constitute a profound universal property of neural networks models; conversely, a break-down in these RMT statistics would be an indication of some fundamental separation between different network sizes or architectures. A few recent works <cit.> considered and used the idea of Gaussian equivalence to make theoretical progress in neural network models with fewer assumptions than previously required (e.g. on the data distribution). 
The principle is that complicated random matrix distributions on non-linear functions of random matrices can be replaced in calculations training and test loss by their Gaussian equivalents, i.e. Gaussian matrices with matching first and second moments. This idea reflects a form of universality and can drastically increase the tractability of calculations. The random matrix universality we have here demonstrated in neural networks may be related, and should be considered as a possible source of other analogous universality simplifications that can render realistic but intractable models tractable. One intriguing possible avenue is the relation to chaotic systems. Quantum systems with chaotic classical limits are know to display RMT spectral pairwise correlations, whereas Poisson statistics correspond to integrable systems. We suggest that the presence of GOE pairwise correlations in neural network Hessians, as opposed to Poisson, indicates that neural network training dynamics cannot be reduced to some simpler, smaller set of dynamical equations. CHAPTER: UNIVERSAL CHARACTERISTICS OF LOSS SURFACES The content of this chapter was published first as a pre-print in May 2022 (<https://arxiv.org/abs/2205.08601>) and later as a journal article in December 2022: “Universal characteristics of neural network loss surfaces from random matrix theory”. Nicholas P Baskerville, Jonathan P Keating, Francesco Mezzadri, Joseph Najnudel and Diego Granziol. Journal of Physics A: Mathematical and Theoretical. NPB proposed all three of the main ideas, performed all calculations, proved most of the results, did the vast majority of the write-up and did all analysis of experimental results. DG conducted the neural network training, extracted the empirical Hessian data and contributed to the write-up of sections pertaining to experiments. JN provided several important ideas for the proof in Appendix A. Anonymous reviewers provided helpful feedback on the presentation and spotted several typos. § GENERAL RANDOM MATRIX MODEL FOR LOSS SURFACE HESSIANS §.§ The model Given a loss function ℒ: 𝒴×𝒴→, a data generating distribution _data supported on 𝒳×𝒴 and a neural network f_w⃗: 𝒳→𝒴 parametrised by w⃗∈ℝ^N, its batch Hessian is given by H_batch = 1/b∑_i=1^b ∂^2/∂w⃗^2ℒ(f_w⃗(x⃗_⃗i⃗), y_i),    (x⃗_i, y_i)i.i.d.∼ℙ_data and its true Hessian is given by H_true = _(x⃗, y)∼ℙ_data∂^2/∂w⃗^2ℒ(f_w⃗(x⃗), y). Both H_batch and H_true are N× N matrix functions of w⃗; H_batch is random but H_true is deterministic. Only in very specific cases and under strong simplifying assumptions can one hope to obtain the distribution of H_batch or the value of H_true from ℒ, _data and f_w⃗. Inspired by the success of many random matrix theory applications, e.g. in Physics, we will instead seek to capture the essential features of deep neural network Hessians in a sufficiently general random matrix model. We introduce the following objects: * A sequence (in N) of random real symmetric N× N matrices X. X possesses a limiting spectral probability measure μ, i.e. if λ_1,…,λ_N are the eigenvalues of X then 1/N∑_i=1^N δ_λ_i→μ weakly almost surely. We further assume that μ has compact support and admits a smooth density with respect to Lebesgue measure. * A sequence (in N) of deterministic real symmetric N× N matrices A with eigenvalues θ_1,…,θ_p,ξ_1,…ξ_N-p-q,θ_1',…,θ_q' for fixed integers p, q. We assume the existence of limiting measure ν such that, weakly, 1/N-p-q∑_i=1^N-p-qδ_ξ_i→ν where ν is a compactly supported probability measure. 
The remaining eigenvalues satisfy θ_1 >… > θ_p > (ν),  θ_1' < … < θ_q' < (ν). ν is also assumed to be of the form ν = ϵη + (1-ϵ)δ_0 where η is a compactly supported probability measure which admits a density with respect to Lebesgue measure. * A decreasing function : → (0, 1). With these definitions, we construct the following model for the Hessian: H_batch≡ H = X + A where b is the batch size. We have dropped the subscript on H_batch for brevity. Note that H takes the place of the batch Hessian and A taken the place of the true Hessian. X takes the place of the random noise introduced by sampling a finite batch at which to evaluate the Hessian. is an overall scaling induced in X by the batch-wise averaging. This model is almost completely general. Note that we allow the distribution of X and the value of A to depend on the position in weight space w⃗. The only restrictions imposed by the model are * the existence of ν; * the position of θ_i, θ_j' relative to the support of ν; * ν may only possess an atom at 0; * the fixed number of θ_i, θ_j'; * the existence of μ; * the existence of the scaling in batch size. All of the above restrictions are discussed later in the section. Finally, we must introduce some properties of the noise model X in order to make any progress. We introduce the assumption that the eigenvectors of X obey quantum unique ergodicity (QUE) <cit.>. The precise meaning of this assumption and a thorough justification and motivation is given later in this section. For now it suffices to say that QUE roughly means that the eigenvectors of X are delocalised or that they behave roughly like the rows (or columns) of a uniform random N× N orthogonal matrix (i.e. a matrix with Haar measure). QUE is known to hold for standard ensembles in random matrix theory, such as quite general Wigner matrices, Wishart matrices, adjacency matrices of certain random graphs etc. Moreover, as discussed further section <ref> below, QUE can be thought of as a property of quite general random matrix models. §.§ Quantum unique ergodicity Quantum unique ergodicity was introduced in Chapter <ref> but for convenience we recall some details here. It is well known that the eigenvectors of quite general random matrices display a universal property of delocalisation, namely |u_k|^2 ∼1/N for any component u_k of an eigenvector u⃗. Universal delocalisation was conjectured by Wigner along with the Wigner surmise for adjacent eigenvalue spacing. Both of these properties, and the more familiar phenomenon of universal correlation functions on the microscopic scale have since been rigorously established for quite a variety of matrix models e.g. <cit.>. <cit.> show that the eigenvectors of generalised Wigner matrices obey Quantum unique ergodicity, a particular form of delocalisiation, stronger than the above statement. Specifically, they are shown to be approximately Gaussian in the following sense (<cit.> Theorem 1.2): sup_||q⃗|| = 1sup_I⊂ [N], |I| = n| P((N|q⃗^Tu⃗_k|^2)_k∈ I) - P((|𝒩_j|^2)_j=1^n)| ≤ N^-ϵ, for large enough N, where 𝒩_j are i.i.d. standard normal random variables, (u⃗_k)_k=1^N are the normalised eigenvectors, P is any polynomial in n variables and ϵ > 0. Note that the set I in this statement is a subset of [N] of fixed size n; n is not permitted to depend on N. §.§ Batch Hessian outliers Let {λ_i} be the eigenvalues of H. 
To set the context of our results, let us first simplify and suppose momentarily that = 1 and, instead of mere QUE, X has eigenvectors distributed with Haar measure, and A is fixed rank, i.e. ξ_i=0  ∀ i, then the results of <cit.> would apply and give λ_j a.s.→ g_μ^-1(1/θ_j)    if θ_j > 1/g_μ((μ)), (μ) otherwise, for j=1,…, p, and λ_N-j+1a.s.→ g_μ^-1(1/θ_j')    if θ_j' < 1/g_μ((μ)), (μ) otherwise, for j=1,…, q. What follows is our main results for the outliers of H under the general conditions described above. Let H be the Hessian matrix model defined in (<ref>) and meeting all the conditions in Section <ref>. Then there exist U_ϵ, L_ϵ∈ℝ such that, for j=1, …, p, λ_j = ω^-1(θ_j) if ω^-1(θ_j)> U_ϵ, U_ϵ otherwise. and for j=1, …, q, λ_N-j+1 = ω^-1(θ_j') if ω^-1(θ_j) < L_ϵ, L_ϵ otherwise, and ω^-1(θ) = θ + R_μ(θ^-1) + ϵ^2 d_η(θ)R_μ'(θ^-1) + 𝒪(ϵ^2) where we define d_η(z) = g_η(θ_j) - θ_j^-1. An interlude on prior outlier results It was conjectured in <cit.> that (<ref>)-(<ref>) still hold when X has delocalised eigenvectors in some sense, rather than strictly Haar. Indeed, a careful consideration of the proof in that work does reveal that something weaker than Haar would suffice, for example QUE. See in particular the proof of the critical Lemma 9.2 therein which can clearly be repeated using QUE. There is a considerable subtlety here, however, which is revealed best by considering more recent results on deformations of general Wigner matrices. <cit.> shows that very general deterministic deformations of general Wigner matrices possess an optimal anisotropic local law, i.e. Y + B for Wigner Y and deterministic symmetric B. It is expected therefore that Y+B has delocalised eigenvectors in the bulk. Consider the case where B is diagonal, and say that B has a fixed number of “spike” eigenvalues φ_1>…>φ_r and remaining eigenvalues ζ_1,…, ζ_N-r where the empirical measure of the ζ_i converges to some measure τ and φ_r > (τ). We can then split B = B_i + B_o where B_i contains only the ζ_j and B_o only the φ_j. The previously mentioned results applies to Y + B_i and then we might expect the generalised result of <cit.> to apply to give outliers g_μ_SC⊞τ^-1(1/φ_i) of Y + B. This contradicts, however, another result concerning precisely the the outliers of such generally deformed Wigner matrices. It was shown in <cit.> that the outliers of Y+B are ω^-1(φ_j) where ω is the subordination function such that g_μ_SC⊞τ(z) = g_τ(ω(z)). These two expressions coincide when ω^-1(z) = g_μ_SC⊞τ^-1(z^-1) ω^-1(z) = ω^-1(g_τ^-1(z^-1)) g_τ^-1(z^-1) = z g_τ(z) = z^-1 τ = δ_0, i.e. only when B is in fact of negligible rank as N→∞. This apparent contradiction is resolved by the observation that the proof in <cit.> in fact relies implicitly on an isotropic local law. Note in particular section 4.1, which translated to our context, would require v⃗^TG_Y + B_i(z)v⃗≈ g_μ_SC⊞τ(z) with high probability for general unit vectors v⃗. Such a result holds if and only if Y + B_i obeys an isotropic local law and is violated if its local law is instead anistropic, as indeed it is, thanks to the deformation. The conditions on X required to invoke Theorem <ref> from Section <ref> are satisfied, so we conclude that ĝ_H(z) = g_μ_b ⊞ν(z) + o(1) = g_ν(ω(z)) + o(1) = ĝ_A(ω(z)) + o(1) where ω is the subordination function such that g_μ_b⊞ν(z) = g_ν(ω(z)) and μ_b is the limiting spectral measure of X. The reasoning found in <cit.> then applies regarding the outliers of H. Indeed, suppose that λ is an outlier of H, i.e. 
λ is an eigenvalue of H contained in \supp(μ⊞ν). Necessarily ĝ_H possesses a singularity at λ, and so ĝ_A must have a singularity at ω(λ). For this singularity to persist for all N, ω(λ) must coincide with one of the outliers of A which, unlike the bulk eigenvalues ξ_j, remain fixed for all N. Therefore we have the following expressions for the outliers of H: {ω^-1(θ_j) |ω^-1(θ_j)∈\supp(μ_b⊞ν)}∪{ω^-1(θ_j') |ω^-1(θ_j')∈\supp(μ_b⊞ν)}. We now consider ϵ to be small and analyse these outlier locations as a perturbation in ϵ. Firstly note that g_μ_b(z) = ∫dμ_b(x)/z - x = ∫dμ(x/)/z - x = ∫dμ(x)/z - x = g_μ(z/). Also ω^-1(z) = g_μ_b ⊞ν^-1(g_ν(z)) = R_μ_b(g_ν(z)) + g_ν^-1(g_ν(z)) = R_μ_b(g_ν(z)) + z. We now must take care in computing R_μ_b from g_μ_b. Recall that the R-transform of a measure is defined as a formal power series <cit.> R(z) = ∑_n=0^∞ k_n+1 z^n where k_n is the n-th cumulant of the measure. It is known <cit.> that k_n=C_n where the functional inverse of the Stieljtes transform of the measure is given by the formal power series K(z) = 1/z + ∑_n=1 C_n z^n-1. Now let m_n be the n-th moment of μ and similarly let m_n^(b) be the n-th moment of μ_b, so formally g_μ(z) = ∑_n≥ 0 m_n z^-(n+1),    g_μ_b(z) = ∑_n≥ 0 m_n^(b) z^-(n+1). Also let k_n be the n-th cumulant of μ and k_n^(b) be the n-th cumulant of μ_b. Referring to the proof of Lemma 5.3.24 in <cit.> we find the relations m_n = ∑_r=1^n ∑_0≤ i_1,…, i_r≤ n-r i_1+… + i_r = n-r k_r m_i_1… m_i_r, m_n^(b) = ∑_r=1^n ∑_0≤ i_1,…, i_r≤ n-r i_1+… + i_r = n-r k_r^(b) m_i_1^(b)… m_i_r^(b). Note, in particular, that m_1 = k_1. But clearly the moments of μ_b have a simple scaling in , namely m_n^(b) = ^n m_n, hence m_n = ^-n∑_r=1^n ∑_0≤ i_1,…, i_r≤ n-r i_1+… + i_r = n-r k_r^(b) m_i_1… m_i_r^n-r from which we deduce k_n^(b) = ^n k_n, which establishes that R_μ_b(z) = R_μ( z). Recalling (<ref>) we find ω^-1(z) = R_μ ( g_ν(z)) + z. The form of ν gives g_ν(z) = (1-ϵ)∫dt/z-tδ_0(t) + ϵ∫dη(t)/t-z=1-ϵ/z + ϵ g_η(z) = 1/z + ϵ(g_η(z) - 1/z) and so we can expand to give ω^-1(θ_j) = θ_j + R_μ(θ_j^-1) + ϵ^2 (g_η(θ_j) - θ_j^-1)R_μ'(θ_j^-1) + 𝒪(ϵ^2) = θ_j + R_μ(θ_j^-1) + ϵ^2 d_η(θ_j)R_μ'(θ_j^-1) + 𝒪(ϵ^2) where we have defined d_η(z) = g_η(θ_j) - θ_j^-1. The argument with the lower outliers {θ_j'}_j=1^q is identical. The problem of determining the support of μ_b⊞ν is difficult and almost certainly analytically intractable, with <cit.> containing the most advanced results in that direction. However overall, we have a model for deep neural network Hessians with a spectrum consisting, with high-probability, of a compactly supported bulk μ_b⊞ν and a set of outliers given by (<ref>) (and similarly for θ_j') subject to (<ref>). The constants L_ϵ, U_ϵ in the statement (<ref>)-(<ref>) of the theorem are simply the lower and upper edges of the support of (μ_b ⊞ν). Note that (<ref>) reduces to outliers of the form θ_j + ^2 R_μ(θ_j^-1) if ϵ=0 or d_η = 0, as expected from <cit.>[Note that d_η=0 η = δ_0 which is clearly equivalent (in terms of ν) to ϵ=0.]. (<ref>) is a generalised form of the result used in <cit.>. We have the power series R_μ(θ_j^-1) = k_1^(μ) + k_2^(μ)/θ_j + k_3^(μ)^2/θ_j^2 + …, d_η(θ_j) = m_1^(η)/θ_j^2 + m_2^(η)/θ_j^3 + … where m_n^(η) are the moments of η and k_n^(μ) are the cumulants of μ. 
In the case that the spikes θ_j are large enough, we approximate by truncating these power series to give ω^-1(θ_j) ≈θ_j + m_1^(μ) + ^2k_2^(μ)(1/θ_j + ϵ m_1^(η)/θ_j^2) where the approximation is more precise for larger b and smaller ϵ and we have used the fact that the first cumulant of any measure matches the first moment. One could consider for instance a power law for , i.e. ω^-1(θ_j) ≈θ_j + k_1^(μ)/b^υ + k_2^(μ)/b^2υ(1/θ_j + ϵ m_1^(η)/θ_j^2) =θ_j + m_1^(μ)/b^υ + k_2^(μ)/b^2υ(1/θ_j + ϵ m_1^(η)/θ_j^2) for some υ > 0. In the case that μ is a semicircle, then all cumulants apart from the second vanish, so setting ϵ = 0 recovers exactly ω^-1(θ_j) = θ_j + σ^2/4b^2υθ_j where σ is the radius of the semicircle. To make the link with <cit.> obvious, we can take υ = 1/2 and μ to be the semicircle, so giving ω^-1(θ_j) ≈θ_j + σ^2/4bθ_j where we have truncated 𝒪(ϵ) term. We present an argument in favour of the υ=1/2 power law below, but we allow for general υ when comparing to experimental data. It is quite possible for μ's density to have a sharp spike at the origin, or even for μ to contain a δ atom at 0, as observed empirically in the spectra of deep neural network Hessians. §.§ Experimental results The random matrix Hessian model introduced above is quite general and abstract. Necessarily the measures μ and η must be allowed to be quite general as it is well established experimentally <cit.> that real-world deep neural network Hessians have spectral bulks that are not familiar as being any standard canonical examples from random matrix theory. That being said, the approximate form in (<ref>) gives quite a specific form for the Hessian outliers. In particular, the constants m_1^(μ), m_1^(η) and m_2^(μ), ϵ > 0 are shared between all outliers at all batch sizes. If the form of the Hessian outliers seen in (<ref>) is not observed experimentally, it would suggest at least one of the following does not hold: * batch sampling induces a simple multiplicative scaling on the Hessian noise (<ref>); * the true Hessian is approximately low-rank (as measured by ϵ) and has a finite number of outliers; * the Hessian noise model X has QUE. In view of this third point, agreement with (<ref>) provides an indirect test for the presence of universal random matrix statistics in deep neural network Hessians. We can use Lanczos power methods <cit.> to compute good approximations to the top few outliers in the batch Hessian spectra of deep neural networks <cit.>. Indeed the so-called Pearlmutter trick <cit.> enables efficient numerical computation of Hessian-vector products, which is all that one requires for power methods. Over a range of batch sizes, we compute the top 5 outliers of the batch Hessian for 10 different batch seeds. We repeat this procedure at every 25 epochs throughout the training of two standard deep neural networks for computer vision tasks, VGG16 and WideResNet28×10, on the CIFAR100 dataset <cit.> and at every epoch during the training of a simple multi-layer perceptron network on the MNIST dataset <cit.>. By the end of training each of the models have high test accuracy, specifically the VGG16 architecture which does not use batch normalisation, has a test accuracy of ≈ 75%, whereas the WideResNet28×10 has a test accuracy of ≈ 80 %. The MLP has a test set accuracy of ≈ 95%. Full experimental details are given in Appendix <ref>. There is a subtlety with regard to obtaining the top outliers using the Lanczos power method. 
Indeed, since Lanczos provides, in some sense, an approximation to the whole spectrum of a matrix, truncating at m iterations for a N× N matrix cannot produce good approximations to all of the m top eigenvalues. In reality, experimental results <cit.> show that, for deep neural networks, and using sufficiently many iterations (m), the top r eigenvalues may be recovered, for r≪ m. We display some spectral plots of the full Lanczos results in the Figure <ref> which demonstrate clearly a large number of outliers, and clearly more than 5. These are not intended to be exhaustive and we recommend references such as <cit.> for detailed discussion of spectral densities like these. As a result, we can have confidence that our numerical procedure is indeed recovering approximations to the top few eigenvalues required for our experiments. Let λ^(i, j, e)_b be the top i-th empirical outlier (so i=1 is the top outlier) for the j-th batch seed and a batch size of b for the model at epoch e. To compare the experimental results to our theoretical model, we propose the following form: λ^(i, j, e)_b ≈θ^(i, e) + α^(e)/b^υ + β^(e)/b^2υ(1/θ^(i, e) + γ^(e)/(θ^(i, e))^2) where β^(e) > 0 (as the second cumulant of a any measure of non-negative) and θ^(i, e) > θ^(i+1, e) > 0 for all i,e. The parameters α^(e), β^(e), γ^(e) and θ^(i, e) need to be fit to the data, which could be done with standard black-box optimisation to minimise squared error in (<ref>), however we propose an alternative approach which reduces the number of free parameters and hence should regularise the optimisation problem. Observe that (<ref>) is linear in the parameters α^(e), β^(e), γ^(e) so, neglecting the positivity constraint on β^(e), we can in fact solve exactly for optimal values. Firstly let us define to be the empirical mean of over the batch seed index j. Each epoch will be treated entirely separately, so let us drop the e superscripts to streamline the notation. We are then seeking to optimise α, β, γ, θ^(i) to minimise E = ∑_i,b( - θ^(i) - α/b^υ - β/θ^(i)b^2υ - βγ/b^2υ (θ^(i))^2)^2. Now make the following definitions y_ib = - θ^(i),  x⃗_ib = ([ b^-υ; (θ^(i) b)^-2υ; (b^2υ (θ^(i))^2)^-1 ]),  w⃗ = ([ α; β; βγ ]), so that E = ∑_i,b (y_ib - w⃗^T x⃗_ib)^2. Finally we can define the n-dimensional vector Y⃗ by flattening the matrix (y_ib)_ib, and the 3× n matrix X⃗ by stacking the vectors x⃗_ib and then flattening of the i,b indices. That done, we have have a standard linear regression problem with design matrix X⃗ and parameters w⃗. For fixed θ, the global minimum of E is then attained at parameters w⃗^*(θ⃗) = (X⃗X⃗^T)^-1X⃗Y⃗ where the dependence on the parameters θ⃗ is through Y⃗ and X⃗ as above. We thus have α = w^*_1, β = w^*_2, γ = w^*_3/w^*_2 and can plug these values back in to (<ref>) to obtain an optimisation problem only over the θ^(i). There is no closed form solution for the optimal θ^(i) for this problem, so we fit them using gradient descent. The various settings and hyperparameters of this optimisation were tuned by hand to give convergence and are detailed in <ref>. To address the real constraint β > 0, we add a penalty term to the loss (<ref>) which penalises values of θ^(i) leading to negative values of β. The constraint θ^(i) > θ^(i+1) > 0 is implemented using a simple differentiable transformation detailed in <ref>.. Finally, the exponent υ is selected by fitting the parameters for each υ in {-0.1, -0.2, …, -0.9} and taking the value with the minimum mean squared error E. 
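The following sketch illustrates the two-stage fit just described, on synthetic data generated from the model itself. The synthetic outlier values, the SciPy optimiser (standing in for the hand-tuned gradient descent used in our experiments) and all numerical settings are illustrative assumptions rather than the actual experimental pipeline; the closed-form inner solve, the soft penalty keeping β non-negative and the softplus reparametrisation of the ordered spikes follow the text.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Synthetic stand-in for the mean Lanczos outlier estimates lambda_bar[i, k]
# (outlier index i, batch size index k); all parameter values are illustrative.
batch = np.array([32.0, 64.0, 128.0, 256.0, 512.0, 1024.0])
theta_true = np.array([40.0, 30.0, 22.0, 15.0, 10.0])
alpha_t, beta_t, gamma_t, ups_t = 2.0, 60.0, 5.0, 0.5
lam_bar = (theta_true[:, None]
           + alpha_t * batch ** -ups_t
           + beta_t / (theta_true[:, None] * batch ** (2 * ups_t))
           + beta_t * gamma_t / (theta_true[:, None] ** 2 * batch ** (2 * ups_t)))
lam_bar += 0.05 * rng.standard_normal(lam_bar.shape)

def solve_linear(theta, ups):
    """Exact minimiser over (alpha, beta, beta*gamma) at fixed theta: ordinary least squares."""
    rows, y = [], []
    for i, th in enumerate(theta):
        for k, b in enumerate(batch):
            rows.append([b ** -ups, 1.0 / (th * b ** (2 * ups)), 1.0 / (th ** 2 * b ** (2 * ups))])
            y.append(lam_bar[i, k] - th)
    Xd, y = np.array(rows), np.array(y)
    w, *_ = np.linalg.lstsq(Xd, y, rcond=None)      # the closed-form optimum of the text
    return w, np.sum((y - Xd @ w) ** 2)

def to_theta(raw):
    """Map unconstrained parameters to theta_1 > theta_2 > ... > 0 via softplus increments."""
    return np.cumsum(np.logaddexp(0.0, raw[::-1]))[::-1]

def loss(raw, ups):
    w, err = solve_linear(to_theta(raw), ups)
    return err + 1e3 * min(w[1], 0.0) ** 2          # soft penalty for negative beta

# Initialise spikes from the largest-batch outliers, then optimise over a grid of exponents.
gaps = lam_bar[:, -1] - np.append(lam_bar[1:, -1], 0.0)
raw0 = np.log(np.expm1(np.clip(gaps, 1e-6, None)))
fits = [(minimize(loss, raw0, args=(u,)).fun, u) for u in np.arange(0.1, 1.0, 0.1)]
best_err, best_ups = min(fits)
print("selected exponent:", round(best_ups, 2), "(value used to generate the data:", ups_t, ")")
```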
The above process results in 12 fits for VGG and Resnet and 10 for MLP (one per epoch). For each of these, we have a theoretical fit for each of the 5 top outliers as a function of batch size which can be compared graphically to the data, resulting in (2× 12 + 10)× 5 = 170 plots. Rather than try to display them all, we will select a small subset that illustrates the key features. Figure <ref> shows results for the Resnet at epochs 0 (initialisation), 25, 250 and 300 (end of training) and outliers 1, 3 and 5. Between the three models, the Resnet shows consistently the best agreement between the data and the parametric form (<ref>). The agreement is excellent at epoch 0 but quickly degrades to that seen in the second row of Figure <ref>, which is representative of the early and middle epochs for the Resnet. Towards the end of training the Resnet returns to good agreement between theory and data, as demonstrated in the third and fourth rows of Figure <ref> at epochs 250 and 300 respectively. The VGG16 also has excellent agreement between theory and data at epoch 0, and thereafter is similar to the early epochs of the Resnet, i.e. reasonable, but not excellent, until around epoch 225 where the agreement starts to degrade significantly until the almost complete failure at epoch 300 shown in the first row of Figure <ref>. The MLP has the worst agreement between theory and data, having again excellent agreement at epoch 0, but really quite poor agreement even by epoch 1, as shown in the second row of Figure <ref>. The experimental results show an ordering Resnet > VGG > MLP, in terms of how well the random matrix theory loss surface predictions explain the Hessian outliers. We conjecture that this relates to the difficulty of the loss surfaces. Resnets are generally believed to have smoother, simpler loss surfaces <cit.> and be easier to train than other architectures, indeed the residual connections were originally introduced for precisely this reason. The VGG is generally more sensitive to training set-up, requiring well-tuned hyperparameters to avoid unstable or unsuccessful training (see Chapter <ref> <cit.>). The MLP is perhaps too small to benefit from high-dimensional highly over-parametrised effects. The parameter values obtained for all models over all epochs are shown in Figure <ref>, with a column for each model. There are several interesting features to draw out of these plots, however note that we cannot meaningfully interpret the parameters for the MLP beyond epoch 0, as the agreement with (<ref>) is so poor. Firstly consider the parameter m_1^(μ), which is interpreted as the first moment (i.e. mean) of the spectral density of the noise matrix X. m_1^(μ)=0 is significant, as it is seen in the case of the a symmetric measure μ, such as the Wigner semicircle used by <cit.>. For the VGG, m_1^(μ) starts close to 0 (Figure <ref>) and generally grows with training epochs (note that the right hand side of this plot is not trustworthy, as we have observed that the agreement with (<ref>) does not survive to the end of training). For the Resnet, we see a similar upwards trend (Figure <ref>), with the notable exception that of initialisation (epoch 0). These two observations together, suggest that training encourages a skew in the spectrum of X away from symmetry around 0, however for some structural reason the Resnet is highly skewed at initialisation. 
Note that for all models this parameter starts close to 0 and generally grows with training epochs, noting that the right hand side of Figure <ref> at the higher epochs should be ignored owing to the bad fit discussed above. It is interesting also to observe that ϵ m_1^(η) remains small for all epochs particularly compared to m_1^(μ), k_2^(μ). This is consistent with the derivation of (<ref>), which relies on ϵ being small, however we emphasise that this was not imposed as a numerical constraint but arises naturally from the data. Recall that the magnitude of ϵ m_1^(η) measures the extent of the deviation of A from being exactly low rank, so its small but non-zero values suggest that it is indeed important to allow for the true Hessian to have non-zero rank in the N→∞ limit. Finally, we comment that the best exponent is generally not υ=1/2. Again, the results from the Resnet are the most reliable and they appear to show that the batch scaling, as characterised by υ, is not constant throughout training, particularly comparing epoch 0 and epoch 300, say. §.§ Justification and motivation of QUE We recall the various types of local law first introduced in section <ref>. All provide high probability control on the error between the (random) matrix Green's function G(z) = (z - X)^-1 and certain deterministic equivalents. In all cases we use the set S⃗ = {E + iη∈| |E| ≤ω^-1,   N^-1 + ω≤η≤ω^-1} for ω∈(0, 1) and the local law statements holds for all (large) D>0 and (small) ξ > 0 and for all large enough N. The averaged local law states: sup_z∈S⃗(|1/N G(z) - g_μ(z)| > N^ξ(1/Nη + g_μ(z)/Nη)) ≤ N^-D. The isotropic local law states: sup_u⃗,v⃗ = 1, z∈S⃗( |u⃗^TG(z)v⃗ - g_μ(z)| > N^ξ(1/Nη + g_μ(z)/Nη)) ≤ N^-D. The anisotropic local law states: sup_u⃗,v⃗ = 1, z∈S⃗( |u⃗^TG(z)v⃗ - u⃗^TΠ(z)v⃗| > N^ξ(1/Nη + g_μ(z)/Nη)) ≤ N^-D where Π(·) is an N× N deterministic matrix function on ℂ. The entrywise local law states: sup_z∈S⃗, 1≤ i,j≤ N( |G_ij(z) - Π_ij(z)| > N^ξ(1/Nη + g_μ(z)/Nη)) ≤ N^-D. As mentioned above, quantum unique ergodicity was proved for general Wigner matrices in <cit.>. It appears that the key ingredient in the proof of QUE (<ref>) in <cit.> is the isotropic local semicircle law (<ref>) for general Wigner matrices. Indeed, all the intermediate results in Sections 4 of <cit.> take only (<ref>) and general facts about the Dyson Brownian Motion eigenvector flow given by dλ_k = dB_kk/N + (1/N∑_ℓ≠ k1/λ_k - λ_ℓ)dt, du_k = 1/N∑_ℓ≠ kdB_kl/λ_k - λ_ℓ u_ℓ - 1/2N∑_ℓ≠ kdt/(λ_k - λ_ℓ)^2 u_k. This can be generalised to dλ_k = dB_kk/N + (-V(λ_i) + 1/N∑_ℓ≠ k1/λ_k - λ_ℓ)dt, du_k = 1/N∑_ℓ≠ kdB_kl/λ_k - λ_ℓ u_ℓ - 1/2N∑_ℓ≠ kdt/(λ_k - λ_ℓ)^2 u_k. where V is a potential function. Note that the eigenvector dynamics are unaffected by the presence of the potential V, so we expect to be able to generalise the proof of <cit.> to any random matrix ensemble with an isotropic local law by defining the potential V so that the invariant ensemble with distribution Z^-1e^-N V(X)dX has equilibrium measure μ (Z is a normalisation constant). We show how to construct such a V from μ in Section <ref>. The arguments so far suffice to justify a generalisation of the “dynamical step” in the arguments of <cit.>, so it remains to consider the “comparison step”. The dynamical step establishes QUE for the matrix ensemble with a small Gaussian perturbation, but in the comparison step one must establish that the perturbation can be removed without breaking QUE. 
To our knowledge no such argument has been articulated beyond generalized Wigner matrices, with the independence of entries and comparable scale of variances being critical to the arguments given by <cit.>. Our guiding intuition is that QUE of the form (<ref>) is a general property of random matrices and can reasonably be expected to hold in most, if not all, cases in which there is a local law and universal local eigenvalue statistics are observed. At present, we are not able to state a precise result establishing QUE in sufficient generality to be relevant for this work, so we shall take it as an assumption. Let X be an ensemble of N× N real symmetric random matrices. Assume that X admits a limiting spectral measure is μ with Stieljtes transform m. Suppose that the isotropic local law (<ref>) holds for X with μ. Then there is some set 𝕋_N ⊂ [N] with |𝕋_N^c| = o(N) such that with |I|=n, for any polynomial P in n indeterminates, there exists some ϵ(P) > 0 such that for large enough N we have sup_I ⊂𝕋_N, |I|=n, q⃗ = 1| (P((N(q⃗^Tu⃗_k)^2)_k∈ I)) - (P((|𝒩_j|^2)_k∈ I))| ≤ N^-ϵ. Note that the isotropic local law in Assumption <ref> can be obtained from the weaker entrywise law (<ref>) as in Theorem 2.14 of <cit.> provided there exists a C>0 such that |X_ij|^2 ≤ CN^-1 for all i,j and there exists C_p > 0 such that |NX_ij|^p ≤ C_p for all i,j and integer p>0. In <cit.> the restriction I⊂ is given for the explicit set 𝕋_N = [N] \{(N^1/4, N^1-δ) ∪ (N - N^1-δ, N - N^1/4)} for some 0< δ < 1. In the case of generalised Wigner matrices, this restriction on the indices has since been shown to be unnecessary <cit.>. In our context, we could simply take as an assumption all results holds with 𝕋_N = [N], however our results can in fact be proved using only the above assumption that |𝕋_N^c| = o(N), so we shall retain this weaker form of the assumptions. This section is not intended to prove QUE from explicit known properties of deep neural network Hessians, but rather to provide justification for it as a reasonable modeling assumption in the noise model for Hessians defined in section <ref>. We have shown how QUE can be obtained from an isotropic (or entrywise) local law beyond the Wigner case. It is important to go beyond Wigner or any other standard random matrix ensemble, as we have observed above that the standard macroscopic spectral densities of random matrix theory such as the semicircle law are not observed in practice. That said, we are not aware of any results establishing QUE in the more general case of anistropic local laws, and this appears to be a very significant technical challenge. We must finally address why a local law assumption, isotropic or otherwise, may be reasonable for the noise matrix X in our Hessian model. Over the last decade or so, universal local statistics of random matrices in the form of k-point correlation functions on the appropriate microscopic scale have been established for a litany of random matrix ensembles. An immediate consequence of such results is that, on the scale of unit mean eigenvalue spacing, Wigner's surmise holds to a very good approximation, depending only on the symmetry class (orthogonal, unitary or symplecitic). 
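The kind of agreement alluded to here is straightforward to probe numerically. The sketch below compares the mean consecutive spacing ratio, an unfolding-free proxy for local spacing statistics, between a GOE sample and a non-invariant Wigner-type matrix with uniform entries; both land near the value predicted by the Wigner-like surmise for the orthogonal symmetry class. The ensembles, sizes and bulk truncation are illustrative choices, not a reproduction of the experiments of Chapter <ref>.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

def mean_spacing_ratio(evals):
    """Mean of r_i = min(s_i, s_{i+1}) / max(s_i, s_{i+1}) over the central half of the spectrum."""
    k = len(evals) // 4
    lam = np.sort(evals)[k:-k]          # keep bulk eigenvalues, away from the spectral edges
    s = np.diff(lam)
    return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

# GOE reference sample
G = rng.standard_normal((n, n))
goe = (G + G.T) / np.sqrt(2 * n)

# A non-invariant Wigner-type matrix: centred uniform entries of matching variance
U = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, n)) / np.sqrt(n)
uwig = np.triu(U) + np.triu(U, 1).T

print("GOE      :", round(mean_spacing_ratio(np.linalg.eigvalsh(goe)), 4))
print("uniform  :", round(mean_spacing_ratio(np.linalg.eigvalsh(uwig)), 4))
print("surmise  :", round(4 - 2 * np.sqrt(3), 4))   # ~0.536 from the Wigner-like 3x3 surmise
```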
Such universality results are rather older for invariant ensembles <cit.> and can be established with orthogonal polynomial techniques, however the recent progress focusing on non-invariant ensembles, beginning with Wigner matrices <cit.> and proceeding to much more general ensembles <cit.>, is built on a very general “three step strategy” (though see <cit.> for connections between universality in invariant and non-invariant ensembles). As with the QUE proof discussed above, the key ingredient in these proofs, as part of the three step strategy <cit.>, is establishing a local law. The theoretical picture that has emerged is that, for very general random matrices, when universal local eigenvalue statistics are observed in random matrices, it is due to the mechanism of short time scale relaxation of local statistics under Dyson Brownian Motion made possible by a local law. In Chapter <ref> <cit.> we observed that universal local eigenvalue statistics do indeed appear to be present in the Hessian of real, albeit quite small, deep neural networks. Given all of this context, we propose that a local law assumption of some kind is reasonable for deep neural network Hessians and not particularly restrictive. As we have shown, if we are willing to make the genuinely restrictive assumption of an isotropic local law for the Hessian noise model, then QUE follows. However an anistropic local law is arguably more plausible as we expect deep neural networks Hessians to contain a good deal of dependence between entries, and such correlations are know to generically lead to anisotropic local laws <cit.>. §.§ Motivation of true Hessian structure In this section we revisit and motivate the assumptions made about the Hessian in Section <ref>. Firstly note that one can always define A = H_batch and it is natural then to associate A with the true Hessian H_true. In light of (<ref>), it is natural to expect some fixed form of the law for H_batch - A for any batch size, but with an overall scaling , which must naturally be decreasing in b as experimental results show that the overall spectral width of the batch Hessians of neural networks decreases with increasing batch size. Next we address the assumptions made about the spectrum of A. The first assumption one might think to make is that A has fixed rank relative to N, with spectrum consisting only of the spikes θ_i, θ_j'. Indeed, it has been repeatedly observed, in our own experiments and others <cit.>, that neural network Hessians contain a number of spectral outliers separated from the spectral bulk. It is natural to conjecture that such outliers arise from some outliers in an underlying structured deterministic matrix of which the batch Hessian is a noisy version, as in the case of BBP style phase transitions in random matrix theory. The outliers in neural network Hessians have been associated with inter-class separation in the case of classification models <cit.> and it can be observed that spectra lack (or have smaller and fewer) outliers at the start of training, or if they are intentionally trained to give poor (i.e. random) predictive performance. That being said, in almost any experiment with sensibly trained neural networks, spectral outliers are observed, and over a range of batch sizes (and hence noise levels) suggesting that some of the spike eigenvalues in the true Hessian are above the phase transition threshold. 
Behind such an assumption is the intuition that the data distribution does not depend on N and so, in the over-parametrised limit N→∞, the overwhelming majority of directions in weight space are unimportant. The form we take for A in the above is a strict generalisation of the fixed rank assumption; A still has a fixed number of spiked directions, but the parameter ϵ controls the rank of A. Since any experimental investigation is necessarily limited to N<∞, the generalisation to ϵ>0 is particularly important. Compact support of the measures μ and η is consistent with experimental observations of deep neural network Hessian spectra. §.§ The batch size scaling Our experimental results considered = b^-υ and υ=1/2 is the value required to give agreement with <cit.>, a choice which we now justify. From (<ref>) we have H_batch = 1/b∑_i=1^b(H_true + X^(i)) where X^(i) are i.i.d. samples from the law of X. Suppose that the entries X_ij were Gaussian, with Cov(X_ij, X_kl) = Σ_ij,kl. Then Z = X^(p)_ij + X^(q)_ij has Cov(Z_ij, Z_kl) = X_ij^(p)X_kl^(p) + X_ij^(q)X_kl^(q) - X_ij^(p) X_kl^(p) - X_ij^(q) X_kl^(q) = 2Σ_ij, kl. In the case of centred X, one then obtains 1/b∑_i=1^b X^(i)d= b^-1/2X. Note that this does not quite match the case described in Section <ref>, since we do not assume there that X = 0, however we take this a rough justification for = b^-1/2 as an ansatz. Moreover, numerical experimentation with = b^-υ for values of υ>0 shows that q = 1/2 gives a reasonable fit to the data (note that the values shown in Figures <ref>, <ref>, <ref> are those producing the best fit, but υ=1/2 was seen to be not much inferior). § SPECTRAL FREE ADDITION FROM QUE §.§ Intermediate results on QUE This section establishes some intermediate results that follow from assuming QUE for the eigenvectors of a matrix. They will be crucial for our application in the following section. Consider a real orthogonal N× N matrix U with rows {u⃗_i^T}_i=1^N. Assume that {u⃗_i}_i=1^N are the eigenvectors of a real random symmetric matrix with QUE. Let P be a fixed N× N real orthogonal matrix. Let V = UP and denote the rows of V by {v⃗_i^T}_i=1^N. Then {v⃗_i}_i=1^N also satisfy QUE. Take any unit vector q⃗, then for any k=1,…, N q⃗^T v⃗_k = ∑_jq_jV_kj = ∑_j, lq_j U_klP_lj = (Pq⃗)^T u⃗_k. But Pq⃗_2=q⃗_2=1 since P is orthogonal, so the statement of QUE for {u⃗_i}_i=1^N transfers directly to {v⃗_i}_i=1^N thanks to the supremum of all unit q⃗. Consider a real orthogonal N× N matrix U with rows {u⃗_i^T}_i=1^N. Assume that {u⃗_i}_i=1^N are the eigenvectors of a real random symmetric matrix with QUE. Let ℓ_0(q⃗) = ∑_i {q_i ≠ 0} count the non-zero elements of a vector with respect to a fixed orthonormal basis {e⃗_i}_i=1^N. For any fixed integer s>0, define the set = {q⃗∈^N |q⃗=1,  ℓ_0(q⃗)=s,   q_i=0  ∀ i∈^c} where, recall the definition 𝕋_N = [N] \{(N^1/4, N^1-δ) ∪ (N - N^1-δ, N - N^1/4)}. Then the columns {u⃗_i'}_i=1^N of U satisfy a weaker form of QUE (for any fixed n, s>0): sup_q⃗∈ sup_I ⊂ |I| = n| P((N|q⃗^Tu⃗_k|^2)_k∈ I) - P((|𝒩_j|^2)_j=1^m )| ≤ N^-ϵ. We will denote this form of QUE as . Take some q⃗∈. Then there exists some J⊂ with |J|=s and non-zero {q_k}_k∈ J such that q⃗^Tu⃗_k' = ∑_j∈ J q_j e⃗_j^T u⃗_k'. 
Take {e⃗_i}_i=1^N to be a standard basis with (e⃗_i)_j = δ_ij, then e⃗_j^T u⃗_k' = U_jk = e⃗_k^Tu⃗_j so q⃗^Tu⃗_k' = ∑_j∈ J q_j e⃗_k^Tu⃗_j but then the coefficients q_j can be absorbed into the definition of the general polynomial in the statement (<ref>) of QUE for {u⃗_i}_i=1^N, which completes the proof, noting that the sum only includes indices contained in owing to the definition of . Fix some real numbers {y_i}_i=1^r. Fix also a diagonal matrix Λ and an orthonormal set of vectors {v⃗_i}_i=1^N that satisfies . Then there exists an ϵ>0 and η⃗_i∈^N with η_ij^2 ∈ [-1, 1]  ∀ j∈, η_ij^2 ∈ [-N^ϵ, N^ϵ]  ∀ j∈^c. such that for any integer l>0 𝔼(∑_i=1^r y_i v⃗_i^TΛv⃗_i)^l - 𝔼(∑_i=1^r y_i 1/Ng⃗_i^TΛg⃗_i)^l = N^-(1+ϵ)l( ∑_i=1^r y_i η⃗_i^TΛη⃗_i)^l where the g⃗_i are i.i.d. Gaussians N(0, I_N). Let {e⃗_i}_i=1^N be the standard orthonormal basis from above. Then (∑_i=1^r y_i v⃗_i^TΛv⃗_i)^l = ∑_i_1,…, i_l=1^r∏_k=1^l y_i_kv⃗_i_k^TΛv⃗_i_k = ∑_i_1,…, i_l=1^r∑_j_1,…, j_l=1^N∏_k=1^l y_i_kλ_j_k (e⃗_j_k^Tv⃗_i_k)^2 (∑_i=1^r y_i v⃗_i^TΛv⃗_i)^l-𝔼(∑_i=1^r y_i 1/Ng⃗_i^TΛg⃗_i)^l = N^-l∑_i_1,…, i_l=1^r∑_j_1,…, j_l=1^N∏_k=1^l y_i_kλ_j_k[N(e⃗_j_k^Tv⃗_i_k)^2 - (e⃗_j_k^Tg⃗_i_k)^2] =N^-l∑_i_1,…, i_l=1^r∑_j_1,…, j_l∈∏_k=1^l y_i_kλ_j_k[N(e⃗_j_k^Tv⃗_i_k)^2 - (e⃗_j_k^Tg⃗_i_k)^2] + N^-l∑_i_1,…, i_l=1^r∑_j_1∈^c, j_2,…, j_l∈∏_k=1^l y_i_kλ_j_k[N(e⃗_j_k^Tv⃗_i_k)^2 - (e⃗_j_k^Tg⃗_i_k)^2] + … The ellipsis represents the similar terms where further of the j_1, …, j_r are in ^c. For j∈^c the terms [N(e⃗_j_k^Tv⃗_i_k)^2 - (e⃗_j_k^Tg⃗_i_k)^2] are excluded from the statement of , however we can still bound them crudely. Indeed ∑_j∈^c N(e⃗_j^Tv⃗_i)^2 = ∑_j=1^N N(e⃗_j^Tv⃗_i)^2 - ∑_j∈N(e⃗_j^Tv⃗_i)^2 = N - ∑_j∈N(e⃗_j^Tv⃗_i)^2 but since the bound of applies for j∈ N (e⃗_j^Tv⃗_i)^2 = (e⃗_j^Tg⃗)^2 + o(1) = 1 + o(1)   ∀ j∈, then ∑_j∈^c N(e⃗_j^Tv⃗_i)^2 = N - N(1 + o(1)) = o(N)   (e⃗_j^Tv⃗_i)^2 = o(1)  ∀ j ∈^c. Note that this error term is surely far from optimal, but is sufficient here. Overall we can now say |[N(e⃗_j^Tv⃗_i)^2 - (e⃗_j^Tg⃗_i)^2]| ≤ 1 + o(1) ≤ 2  ∀ j ∈^c. We can apply to the terms in square parentheses to give ϵ_1, …, ϵ_r>0 such that |N(e⃗_j_k^Tv⃗_i_k)^2 - (e⃗_j_k^Tg⃗_i_k)^2| ≤ N^-ϵ_i_k   ∀ j_k∈ ∀ i_k=1,…, r. We can obtain a single error bound by setting ϵ = min_i ϵ_i, where clearly ϵ > 0 and then write N(e⃗_j_k^Tv⃗_i_k)^2 - (e⃗_j_k^Tg⃗_i_k)^2 = η_i_kj_k^2 N^-ϵ where η_i_kj_k^2 ∈ [-1, 1]. To further include the indices j∈^c, we extend the expression (<ref>) to all j_k by saying η_i_kj_k^2 ∈ [-1, 1]  ∀ j_k∈, η_i_kj_k^2 ∈ [-N^ϵ, N^ϵ]  ∀ j_k∈^c. Overall we have (∑_i=1^r y_i v⃗_i^TΛv⃗_i)^l-𝔼(∑_i=1^r y_i 1/Ng⃗_i^TΛg⃗_i)^l = N^-l(1+ϵ)∑_i_1,…, i_l=1^r∑_j_1,…, j_l=1^N∏_k=1^l y_i_kλ_j_kη_i_kj_k^2 but by comparing with (<ref>) we can rewrite as (∑_i=1^r y_i v⃗_i^TΛv⃗_i)^l-𝔼(∑_i=1^r y_i 1/Ng⃗_i^TΛg⃗_i)^l = ( ∑_i=1^r N^-(1+ϵ)y_i η⃗_i^TΛη⃗_i)^l where η⃗_i^T = (η_i1,…, η_iN). §.§ Main result Let X be an N× N real symmetric random matrix and let D be an N× N symmetric matrix (deterministic or random). Let μ̂_X, μ̂_D be the empirical spectral measures of the sequence of matrices X, D and assume there exist deterministic limit measures μ_X, μ_D. Assume that X has QUE, i.e. <ref>. Assume also the μ̂_X concentrates in the sense that (W_1(μ̂_X, μ_X) > δ) ≲ e^-N^τ f(δ) where τ>0 and f is some positive increasing function. Then H = X + D has a limiting spectral measure and it is given by the free convolution μ_X ⊞μ_D. A condition like (<ref>) is required so that the Laplace method can be applied to the empirical measure μ̂_X. 
There are of course other ways to formulate such a condition. Consider for example the conditions used in Theorems 1.2 and 4.1 of <cit.>. There it is assumed the existence of a sequence of deterministic measures (μ_N)_N ≥ 1 and a constant κ>0 such that for large enough N W_1(μ̂_̂X̂, μ_N) ≤ N^-κ,    W_1(μ_N, μ_X) ≤ N^-κ, which is of course just a deterministic version of (<ref>). <cit.> introduce the extra condition around concentration of Lipschitz traces: ( |1/N f(H_N) - 1/N f(H_N)| > δ) ≤exp(-c_ζ/N^ζmin{(Nδ/f_Lip)^2, (Nδ/f_Lip)^1+ϵ_0}), for all δ>0, Lipschitz f and N large enough, where ζ, c_ζ>0 are some constants. As shown in the proof of Theorem 1.2, this condition is sufficient to obtain ( |∫ |λ| dμ̂_X(λ) - ∫ |λ| dμ̂_X(λ)| ≤ t ) ≤exp(-c_ζ/N^ζmin{ (2Ntη)^2, (2Ntη)^1+ϵ_0}) for any t>0 and for large enough N. Note that <cit.> prove this instead for integration against a regularised version of log|λ|, but the proof relies only the integrand's being Lipschitz, so it goes through just the same here. (<ref>) and (<ref>) clearly combine to give (<ref>). The reader may ignore this remark if they are content to take (<ref>) as an assumption. Alternatively, as we have shown, (<ref>) can be replaced by (<ref>) and (<ref>), conditions which have already been used for quite general results in the random matrix theory literature. We shall denote use the notation G_H(z) = 1/N (z - H)^-1. Recall the supersymmetric approach to calculating the expected trace of the resolvent of a random matrix ensemble: _H G_H(z) = 1/N∂/∂ȷ|_ȷ=0𝔼_H Z_H(ȷ) where Z_H(ȷ) = (z + ȷ - H)/(z - H) = ∫ dΨ e^-i AH e^iΨΨ^†J, A = ϕϕ^† + χχ^†, J = I_N ⊗([ z 0; 0 ȷ + z ]), dΨ = dϕ dϕ^* dχ dχ^*/-(2π)^N i, Ψ = ([ ϕ; χ ]) with ϕ∈^N and χ,χ^* being N-long vectors of anti-commuting variables. Independence of X and D gives _H Z_H(ȷ) = ∫ dΨ e^iΨΨ^†J_X,D e^-i A(X + D) = ∫ dΨ e^iΨΨ^†J_D e^-i AD_X e^-i AX. _D simply means integration against a delta-function density if D is deterministic. Let us introduce some notation: for N× N matrices K, Φ_X(K) = _X e^-i XK, and similarly Φ_D. We also define a new matrix ensemble X̅d= O^T Λ O, where Λ=diag(λ_1, …, λ_N) are equal in distribution to the eigenvalues of X and O is an entirely independent Haar-distributed orthogonal matrix. Now _H Z_H(ȷ) = ∫ dΨ e^iΨΨ^†JΦ_(K)Φ_D(K) + ∫ dΨ e^iΨΨ^†J (Φ_X(A) - Φ_(A))Φ_D(A) G_D+X(z) = G_D+(z) + 1/N∂/∂ȷ|_ȷ = 0∫ dΨ e^iΨΨ^†J (Φ_X(A) - Φ_(A))Φ_D(A) ≡ G_D+(z) + E(z) and so we need to analyse the error term E(z). Now consider X = U^TΛ U where the rows of U are the eigenvectors {u⃗_i}_i of X. Say also that K = Q^TYQ for diagonal Y = (y_1, …, y_r, 0, …, 0), where we note that K has fixed rank, by construction. Then XK = Y (UQ^T)^TΛ (UQ^T) but Lemma <ref> establishes that the rows of UQ^T obey QUE, since the rows of U do. Further, Lemma <ref> then establishes that the columns of UQ^T obey as required by Lemma <ref>. Let {v⃗_i} be those columns, then we have XK = ∑_i=1^r y_i v⃗_i^T Λv⃗_i. The expectation over X can be split into eigenvalues and conditional eigenvectors Φ_X(K) = _Λ_U|Λ∑_l=0^∞1/l! (-i)^l( U^T Λ UK)^l. We can simply bound | ∑_l=0^n1/l! (-i)^l( U^T Λ U)^l| ≤ e^| U^TΛ UK| for any n, but clearly _U|Λe^| U^TΛ UK| < ∞ since, whatever the distribution of U|Λ, the integral is over a compact group (the orthogonal group O(N)) and the integrand has no singularities. Therefore, by the dominated convergence theorem Φ_X(K) = _Λ∑_l=0^∞1/l! (-i)^l_U|Λ( U^T Λ UK)^l and in precisely the same way Φ_X(K) = _Λ∑_l=0^∞1/l! (-i)^l_O∼μ_Haar( O^T Λ OK)^l. 
Recalling (<ref>) we now have Φ_X(K) = _Λ∑_l=0^∞1/l! (-i)^l _U|Λ(∑_i=1^r y_i v⃗_i^T Λv⃗_i)^l. and similarly Φ_(K) = _Λ∑_l=0^∞1/l! (-i)^l _U|Λ(∑_i=1^r y_i v̅⃗̅_i^T Λv̅⃗̅_i)^l. where the v̅⃗̅_i are defined in the obvious way from . We would now apply , but to do so we must insist that _Λ is taken over the ordered eigenvalues of X. Having fixed that convention, Lemma <ref> can be applied to the terms _U|Λ(∑_i=1^r y_i v⃗_i^T Λv⃗_i)^l in (<ref>). The terms in Φ_ can be treated similarly. This results in Φ_X(K) - Φ_(K) = _Λ[∑_l=0^∞i^l/l!{_{g⃗_i}_i=1^r(∑_i=1^r y_i 1/Ng⃗_i^TΛg⃗_i)^l + (∑_i=1^r N^-(1+ϵ) y_i η⃗_i^TΛη⃗_i)^l }         -∑_l=0^∞i^l/l!{_{g⃗_i}_i=1^r(∑_i=1^r y_i 1/Ng⃗_i^TΛg⃗_i)^l + (∑_i=1^r N^-(1+ϵ) y_i η̅⃗̅_i^TΛη̅⃗̅_i)^l }] The exponential has infinite radius of convergence, so we may re-order the terms in the sums to give cancellation Φ_X(K) - Φ_(K) = _Λ∑_l=1^∞1/l! N^-(1+ϵ)l(-i)^l(∑_i=1^r y_i η⃗_i^TΛη⃗_i)^l -_Λ∑_l=1^∞1/l! N^-(1+ϵ)l(-i)^l(∑_i=1^r y_i η̅⃗̅_i^TΛη̅⃗̅_i)^l. Here ϵ>0 and η⃗_i, η̃⃗̃_i∈^N with -1≤ [(η⃗_i)_j]^2 , [(η̅⃗̅_i)_j]^2 ≤ 1    ∀ i=1,…, r,  ∀ j∈, -N^ϵ≤ [(η⃗_i)_j]^2 , [(η̅⃗̅_i)_j]^2 ≤ N^ϵ   ∀ i=1,…, r,  ∀ j∈. Simplifying, we obtain Φ_X(K) - Φ_(K) = _Λexp(-iN^-(1+ϵ)∑_i=1^r y_i η⃗_i^TΛη⃗_i) - _Λexp(-iN^-(1+ϵ)∑_i=1^r y_i η̃⃗̃_i^TΛη̃⃗̃_i). Since |^c| ≤ 2N^1-δ we have ∑_j∈^c |λ_j| ≤𝒪(N^1-d N^-1) |Λ| and so |η⃗_i^TΛη⃗_i| ≤|Λ|( 1 + 𝒪(N^ϵ - δ)). For any fixed δ>0, ϵ can be reduced if necessary so that ϵ < δ and then for sufficiently large N we obtain, say, |η⃗_i^TΛη⃗_i| ≤ 2 |Λ|. Thence we can write η⃗_i^TΛη⃗_i = |Λ| ξ_i for ξ_i∈[-2, 2], and similarly η̃⃗̃_i^TΛη̃⃗̃_i = |Λ| ξ̃_i. Now _Λexp(-iN^-(1+ϵ)∑_i=1^r ξ_iy_i |Λ|)=_Λexp(-iN^-ϵ∑_i=1^r ξ_iy_i∫ dμ̂_X(λ)|λ|) so we can apply Laplace's method to the empirical spectral measure μ̂_X to obtain _Λexp(-iN^-(1+ϵ)∑_i=1^r ξ_iy_i |Λ|)=exp(-iN^-ϵ (q+o(1))∑_i=1^r ξ_iy_i) + o(1) where the o(1) terms do not depend on the y_i and where we have defined q = ∫ dμ_X(λ)|λ|. Further, we can write ∑_i=1^r ξ_i y_i = ζ K, where ζ∈ [min_i{ξ_i}, max_i{ξ_i}]⊂ [-1, 1], and similarly ∑_i=1^r ξ̃_i y_i = ζ̃ K. Then Φ_X(K) - Φ_(K) = e^-iN^-ϵζ (q + o(1)) K - e^-iN^-ϵζ̃ (q + o(1)) K + o(1) but 1/N∂/∂ȷ|_ȷ=0∫ dΨ e^iΨΨ^†Je^-iN^-ϵζ (q + o(1)) KΦ_D(A) = G_D + N^-ϵζ(q + o(1))I(z) = G_D(z + 𝒪(N^-ϵ)) E(z) = G_D(z + 𝒪(N^-ϵ)) + o(1) - G_D(z + 𝒪(N^-ϵ)) - o(1) = o(1). We have thus established that G_D+X(z) = G_D+(z) + o(1) from which one deduces that μ_D+X = μ_D + = μ_D ⊞μ_ = μ_D⊞μ_X. We have also constructed a non-rigorous argument for Theorem <ref> where the supersymmetric approach is replaced by the replica method. This approach simplifies some of the analysis but at the expense of being not at all rigorous (indeed there are integral expressions in this argument that are manifestly infinite). The supersymmetric methods used here are not fully rigorous (like most of their applications) but we note that recent work is beginning to elevate supersymmetric random matrix calculations to full rigour <cit.>. §.§ Experimental validation Let U(a, b) denote the uniform distribution on the interval (a, b), and Γ(a) the Gamma-distribution with scale parameter a. We consider the following matrix ensembles: M∼ GOE^n   :  Var(M_ij) = 1 + δ_ij/2n, M∼ UWig^n :  nM_iji.i.d∼ U(0, 6)   up to symmetry, M∼Γ Wig^n :   2nM_iji.i.d∼Γ(2)   up to symmetry, M∼ UWish^n :   Md=1/mXX^T,   X_iji.i.d∼U(0, 12)   for X of size n× m,   n/mn,m→∞→α, M∼ Wish^n : Md=1/mXX^T,  X_iji.i.d∼𝒩(0,1)  for X of size n× m,   n/mn,m→∞→α. 
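A minimal version of the comparison described next can be sketched as follows. To keep the normalisation unambiguous we sample GOE^n + Wish^n; the non-invariant ensembles defined above can be swapped in directly. The density of μ_SC⊞μ_MP is evaluated from the cubic quoted in the text immediately below; the matrix sizes, number of repeats and binning are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, repeats = 2000, 4000, 5
alpha = n / m                                   # limiting aspect ratio of the Wishart factor

evals = []
for _ in range(repeats):
    # GOE^n with Var(M_ij) = (1 + delta_ij)/(2n), and Wish^n = (1/m) X X^T, as defined above.
    G = rng.standard_normal((n, n)) / np.sqrt(n)
    goe = (G + G.T) / 2
    X = rng.standard_normal((n, m))
    wish = X @ X.T / m
    evals.append(np.linalg.eigvalsh(goe + wish))
evals = np.concatenate(evals)

def density_sc_mp(z):
    """Density of mu_SC ⊞ mu_MP at z: Im(root)/pi for the cubic quoted in the text below."""
    roots = np.roots([alpha / 2.0, -(0.5 + alpha * z), z + alpha - 1.0, -1.0])
    return max(roots.imag.max(), 0.0) / np.pi

hist, edges = np.histogram(evals, bins=80, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
theory = np.array([density_sc_mp(z) for z in mids])
print("max |histogram - free convolution| over bins:", round(float(np.abs(hist - theory).max()), 3))
```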
All of the GOE^n, UWig^n, Γ Wig^n have the same limiting spectral measure, namely μ_SC, the semi-circle of radius 2. UWish^n, Wish^n have a Marcenko-Pastur limiting spectral measure μ_MP, and the constant 12 is chosen so that the parameters of the MP measure match those of a Gaussian Wishart matrix Wish^n. GOE^n, Wish^n are the only ensembles whose eigenvectors are Haar distributed, but all ensembles obey a local law in the sense above. It is known that the sum of GOE^n and any of the other ensembles will have limiting spectral measure given by the free additive convolution of μ_SC and the other ensemble's measure (so either μ_SC⊞μ_MP or μ_SC⊞μ_SC), indeed this free addition property holds for any invariant ensemble <cit.>. Our result implies that the same holds for addition of the non-invariant ensembles. Sampling from the above ensembles is simple, so we can easily generate spectral histograms from multiple independent matrix samples for large n. μ_SC⊞μ_SC is just another semi-circle measure but with radius 2. μ_SC⊞μ_MP can be computed in the usual manner with R-transforms and is given by the solution to the polynomial α/2t^3 - (1/2 + α z)t^2 + (z + α - 1)t - 1 = 0. i.e. Say the cubic has roots {r_1, r_2 + is_2, r_2 - is_2} for s_2≥ 0, then the density of μ_SC⊞μ_MP at z is s_2/π. This can all be solved numerically. The resulting plots are in Figure <ref> and clearly show agreement between the free convolutions and sampled spectral histograms. We can also test the result in another more complicated case. Consider the case of random d-regular graphs on N vertices. Say M∼ Reg^N,d is the distribution of the adjacency matrix of such random graphs. The limiting spectral density of M∼ Reg^N,d is known in closed form, as is its Stieljtes transform <cit.> and <cit.> established a local law of the kind required for our results. Moreover, there are known efficient algorithms for sampling random d-regular graphs <cit.> along with implementations <cit.>. Let μ_KM^(d) be the Kesten-McKay law, the limiting spectral measure of d-regular graphs. We could find an explicit degree-6 polynomial for the Stieljtes transform of μ_KM^(d)⊞μ_SC and compare to spectral histograms as above. Alternatively we can investigate agreement with μ_KM^(d)⊞μ_SC indirectly by sampling and comparing spectra from say Reg^N,d + UWig^N and also from Reg^N,d + GOE^N. The latter case will certainly yield the distribution μ_KM^(d)⊞μ_SC since the GOE matrices are freely independent from the adjacency matrices. Figure shows a q-q plot[Recall that a q-q plot shows the quantiles of one distribution on the x axis and another on the y axis. Given two cumulative density functions F_X, F_Y and their percent point functions F_X^-1, F_Y^-1, the q-q plot is a plot of the parametric curve (F_X^-1(q), F_Y^-1(q)) for q∈[0,1]. Given only finite samples from the random variables X and Y, the empirical percent point functions can be estimated and used in the q-q plot.] for samples of the spectra from these two matrix distributions and demonstrates near-perfect agreement, thus showing that indeed the spectrum of Reg^N,d + UWig^N is indeed described by μ_KM^(d)⊞μ_SC. We reached the same conclusion when repeating the above experiment with UWish^N + Reg^N,d and Wish^n + Reg^N,d. § INVARIANT EQUIVALENT ENSEMBLES For an invariant ensemble <cit.> with potential V we have the following integro-differential equation relating the equilibrium measure μ to the potential V <cit.>: β/21/x-ydμ(y) = V'(x). 
So in the case of real symmetric matrices we have 1/2g̅_̅μ̅(x) = V'(x) where g_μ is the Stieljtes transform of μ and the bar over g̅_̅μ̅ indicates that the principal value has been taken. Given a sufficiently nice μ (<ref>) defines V up-to a constant of integration on (μ), but V is not determined on \(μ), as is made clear by the following lemma, which we prove for completeness but which has appeared before in various works (e.g. <cit.>). For compactly supported probability measure μ on and real potential V, define S_V[μ](y) = V(y) - ∫ dμ(x) log|y-x|. Suppose S_V[μ](y)=c, a constant, for all y∈(μ) and S_V[μ](y) ≥ c for all y∈ℝ. Then μ is a minimiser amongst all probability measures on of the energy ℰ_V[μ] = ∫ dμ(x) V(x) - ∬_x< y dμ(x)dμ(y)log|x-y|. Consider a probability measure that is close to μ in the sense of W_1 distance, say. For any such measure, one can find an arbitrarily close probability measure μ' of the form μ' = μ + ∑_i=1^r a_i_[y_i - δ_i, y_i + δ_i] - ∑_i=1^s b_i_[z_i - η_i, z_i + η_i] where all a_i, b_i>0 and δ_i, η_i, a_i, b_i ≤ϵ for some small ϵ>0. To ensure that μ' is again a probability measure we must impose ∑_ia_i = ∑_jb_j. The strategy now is to expand ℰ_V[μ'] about μ to first order in ϵ, but first note the symmetrisation ∬_x< y dμ(x)dμ(y)log|x-y| = 1/2∬_x≠ y dμ(x)dμ(y)log|x-y|. Then ℰ_V[μ'] - ℰ_V[μ] = ∑_i=1^r a_i V(y_i) - ∑_i=1^s b_i V(z_i) - ∑_i=1^ra_i ∫ dμ(x)log|x-y_i| + ∑_i=1^rb_i ∫ dμ(x)log|x-z_i| + 𝒪(ϵ^2) = ∑_i=1^r a_i S_V[μ](y_i) - ∑_i=1^r b_i S_V[μ](z_i) + 𝒪(ϵ^2). Observe that if all y_i, z_i∈supp(μ) then S_V[μ](y_i) = S_V[μ](y_i) = c and so ℰ_V[μ'] = ℰ_V[μ]. Without loss of generality therefore, we take y_i∉supp(μ) and z_i∈supp(μ), whence ℰ_V[μ'] - ℰ_V[μ] ≥ c∑_i=1^r a_i - c∑_i=1^s b_i = 0. The next lemma establishes that, while not unique, a potential V can always be constructed given a measure μ. Consider a probability measure μ on with compact support, absolutely continuous with respect to the Lebesgue measure. Then there exists a potential V:→ which yields a well-defined invariant distribution on real symmetric matrices for which the equilibrium measure is μ. (<ref>) can be integrated to obtain V and the condition S_V[μ]=c (a constant) on (μ) determines V uniquely on (μ). Next observe that, for y∈\(μ) there exists some constant R>0 such that |x-y| ≤ R + |y|, since μ is compactly supported, and so log|x-y| ≤ |y| + R. Therefore S_V[μ](y) ≥ V(y) - |y| - R. V must be chosen on ℝ\supp(μ) to satisfy S_V[μ](y) ≥ c, which can be achieved by ensuring V(y) ≥ |y| + R + c. Additionally, V must be defined for large y such that it defines an legitimate invariant ensemble on symmetric real matrices, i.e. V must decay sufficiently quickly at infinity to give an integrable probability density. Finally, V must be sufficiently smooth, and certainly continuous, so there are boundary conditions at the boundary of supp(μ). Suppose supp(μ) is composed of K disjoint intervals, then there are 2K boundary conditions on V, and the bound (<ref>) imposes one further condition. Sufficiently fast decay at infinity can be satisfied by any even degree polynomial V of degree at least 2, therefore a degree 2K + 2 polynomial can be found with sufficiently fast decay at infinity, satisfying all the boundary conditions and (<ref>). § UNIVERSAL COMPLEXITY OF LOSS SURFACES §.§ Extension of a key result and prevalence of minima Let's recall Theorem 4.5 from <cit.>. H_N(u) is our random matrix ensemble with some parametrisation u∈^m and its limiting spectral measure is μ_∞(u). 
Define 𝒢_-ϵ = {u∈^m |μ_∞(u) ( (-∞, 0) ) ≤ϵ}. So 𝒢_-ϵ is the event that μ_∞(u) is close to being supported only on (0, ∞). Let l(u), r(u) be the left and right edges respectively of the support of μ_∞(u). Fix some 𝒟⊂^m and suppose that 𝒟 and the matrices H_N(u) satisfy the following. * For every R>0 and every ϵ>0, we have lim_N→∞1/Nlog Nlog[sup_u∈ B_R(d_BL(μ̂_H_N(u), μ_∞(u) ) > ϵ] = -∞. * Several other assumptions detailed in <cit.>. Then for any α>0 and any fixed p∈, we have lim_N→∞1/Nlog∫_𝒟 e^-(N+p)α u^2[|(H_N(u))|{i(H_N(u)) = 0}]du = sup_u∈𝒟∩𝒢{∫_ℝlog|λ| dμ_∞(u) (λ) - α u^2}. We claim the following extension Under the same assumptions as the above theorem and for any integer sequence k(N) > 0 such that k/N → 0 as N→∞, we have lim_N→∞1/Nlog∫_𝒟 e^-(N+p)α u^2[|(H_N(u))|{i(H_N(u)) ≤ k}]du = sup_u∈𝒟∩𝒢{∫_ℝlog|λ| dμ_∞(u) (λ) - α u^2}. Firstly note that 1/Nlog∫_𝒟 e^-(N+p)α u^2[|(H_N(u))|{i(H_N(u)) ≤ k}] du ≥ 1/Nlog∫_𝒟 e^-(N+p)α u^2[|(H_N(u))|{i(H_N(u)) = 0}] du, so it suffices to establish a complementary upper bound. The proof in of Theorem 4.5 in <cit.> establishes an upper bound using lim_N→∞1/Nlog∫_(𝒢_-ϵ)^c e^-Nα u^2[|(H_N(u)|{i(H_N(u)) = 0}] du = -∞ which holds for all ϵ > 0. Indeed, 𝒟 = (𝒟∩𝒢_-ϵ) ∪ (𝒟∩ (𝒢_-ϵ)^c), so ∫_𝒟 e^-(N+p)α u^2[|(H_N(u))|{i(H_N(u)) ≤ k}] du ≤ ∫_𝒟∩𝒢_-ϵ e^-(N+p)α u^2[|(H_N(u))|{i(H_N(u)) ≤ k}] du + ∫_(𝒢_-ϵ)^c e^-(N+p)α u^2[|(H_N(u))|{i(H_N(u)) ≤ k}] du , so our proof is complete if we can prove the analogous result lim_N→∞1/Nlog∫_(𝒢_-ϵ)^c e^-Nα u^2[|(H_N(u)|{i(H_N(u)) ≤ k}] du = -∞. As in <cit.>, let f_ϵ be some 1/2-Lipschitz function satisfying ϵ/2_x ≤ -ϵ≤ f_ϵ(x)≤ϵ/2_x≤ 0. Suppose u∈(𝒢_-ϵ)^c and also i(H_N(u)) ≤ k. Then we have 0 ≤∫ dμ̂_H_N(u)(x)   f_ϵ(x) ≤kϵ/2N and also ϵ^2/2≤∫ dμ_∞(u)(x)   f_ϵ(x) ≤ϵ/2. We have d_BL(μ̂_H_N(u), μ_∞(u) ) ≥|∫ dμ̂_H_N(u)(x)   f_ϵ(x) - ∫ dμ_∞(u)(x)   f_ϵ(x) | ≥||∫ dμ̂_H_N(u)(x)   f_ϵ(x)| - |∫ dμ_∞(u)(x)   f_ϵ(x) ||, so if we can choose kϵ/2N≤ϵ^2/2 - η for some η > 0, then we obtain d_BL(μ̂_H_N(u), μ_∞(u) ) ≥η. Then applying (<ref>) yields the result (<ref>). (<ref>) can be satisfied if ϵ≥k/2N + 1/2k^2/N^2 + 8η. So, given ϵ>0, we can take N large enough such that, say, k(N)/N < ϵ/4. By taking η < ϵ^2/128 we obtain k/2N + 1/2k^2/N^2 + 8η < ϵ/8 + 1/2max(8η, ϵ/4) < 1 + √(2)/8ϵ < ϵ and so (<ref>) is satisfied. Now finally (<ref>) can be applied (with η in place of ϵ) and so we conclude (<ref>). Overall we see that the superexponential BL condition (<ref>) is actually strong enough to deal with any o(N) index not just index-0. This matches the GOE (or generally invariant ensemble) case, in which the terms with {i(H_N(u))=k} are suppressed compared to the exact minima terms {i(H_N(u))=0}. Note that Corollary <ref> establishes that, on the exponential scale, the number of critical points of any index k(N) = o(N) is no more than the number of exact local minima. §.§ The dichotomy of rough and smooth regions Recall the batch loss from Section <ref>: 1/b∑_i=1^bℒ(f_w⃗(x⃗_i), y_i),    (x⃗_i, y_i)i.i.d.∼ℙ_data. As with the Hessian in Section <ref>, we use the model L≡ L_batch(w⃗) = L_true(w⃗) + V(w⃗), where V is a random function ℝ^N→ℝ. Now let us define the complexity for sets ⊂^N C_N() = |{w⃗∈|∇ L(w⃗) = 0}|. This is simply the number of stationary points of the training loss in the region of weight space. A Kac-Rice formula applied to ∇ L gives C_N = ∫_ dw⃗ ϕ_w⃗(-^-1∇ L_true) | (A + X)| where ϕ_w⃗ is the density of ∇ V at w⃗. A rigorous justification of this integral formula would, for example, have to satisfy the conditions of the results of <cit.>. 
This is likely to be extremely difficult in any generality, though is much simplified in the case of Gaussian V (and X) - see <cit.> Theorem 12.1.1 or Chapter <ref>, Lemma <ref> (<cit.> Theorem 4.4). Hereafter, we shall take (<ref>) as assumed. The next step is to make use of strong self-averaging of the random matrix determinants. Again, we are unable to establish this rigorously at present, but note that this property has been proved in some generality by <cit.>, although we are unable to satisfy all the conditions of those results in any generality here. Self-averaging and using the addition results above gives 1/Nlog | (A + X)| = ∫ d(μ_b ⊞ν)(λ) log |λ| + o(1) where μ_b, ν depend in principle on w⃗. We are concerned with N^-1log C_N, and in particular its sign, which determines the complexity of the loss surface in : positive ↔ exponentially many (in N) critical points, negative ↔ exponentially few (i.e. none). The natural next step is to apply the Laplace method with large parameter N to determine the leading order term in C_N, however the integral is clearly not of the right form. Extra assumptions on ϕ_w⃗ and ∇ L_true could be introduced, e.g. that they can be expressed as functions of only a finite number of combinations of coordinates of w⃗. Suppose that ϕ_w⃗ has its mode at 0, for any w⃗, which is arguably a natural property, reflecting in a sense that the gradient noise has no preferred direction in ^N. The sharp spike at the origin in the spectral density of deep neural network Hessians suggests that generically ∫ d(μ_b ⊞ν)(λ) log |λ| < 0. We claim it is reasonable to expect the gradient (and Hessian) variance to be increasing in w⃗_2. Indeed, consider the general form of the simplest deep neural network, a multi-layer perceptron: f_w⃗(x⃗) = σ(b⃗^(L) + W^(L)σ(b⃗^(L-1) + W^(L-1)…σ(b⃗^(1) + W^(1)x⃗ )… )) where all of the weight matrices W^(l) and bias vectors b⃗^(l) combine to give the weight vector w⃗. Viewing x⃗ as a random variable, making f a random function of w⃗, we expect from the above that the variance in f_w⃗ is generally increasing in w⃗_2, and so therefore similarly with L_batch. Overall it follows that ϕ_w⃗(-^-1∇ L_true) is generally decreasing in ∇ L_true, but the maximum value at ϕ_w⃗(0) is decreasing in w⃗_2. The picture is therefore that the loss surface is simple and without critical points in regions for which ∇ L_true is far from 0. In neighbourhoods of ∇ L_true = 0, the loss surface may become complex, with exponentially many critical points, however if w⃗_2 is too large then the loss surface may still be without critical points. In addition, the effect of larger batch size (and hence larger ^-1) is to simplify the surface. These considerations indicate that deep neural network loss surfaces are simplified by over-parametrisation, leading to the spike in the Hessian spectrum and thus (<ref>). The simple fact that neural networks' construction leads gradient noise variance to increase with w⃗_2 has the effect of simplifying the loss landscape far from the origin of weight space, and even precluding the existence of any critical points of the batch loss. § IMPLICATIONS FOR CURVATURE FROM LOCAL LAWS Consider a general stochastic gradient update rule with curvature-adjusted preconditioning: w⃗_t+1 = w⃗_t - α B_t^-1∇ L(w⃗_t) where recall that L(w⃗) is the batch loss, viewed as a random function on weight space. B_t is some preconditioning matrix which in practice would be chosen to somehow approximate the curvature of L. 
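Before developing the local law argument of the next subsection, the concentration phenomenon at stake can be previewed numerically: for a fixed gradient direction, the damped, preconditioned update direction barely moves across independent draws of a random curvature estimate. The Wishart stand-in for the curvature approximation, the dimensions and the damping grid below are illustrative assumptions, not the ensembles or networks of this chapter.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, alpha = 1000, 2000, 0.1           # parameter count, samples in the curvature estimate, step size

grad = rng.standard_normal(n)
grad /= np.linalg.norm(grad)            # one fixed gradient direction; only the curvature is redrawn

def update_direction(delta):
    """-alpha * (H_hat + delta I)^{-1} grad for a fresh draw of the random curvature estimate."""
    X = rng.standard_normal((n, m))
    H_hat = X @ X.T / m                 # positive semi-definite stand-in (Wishart), purely illustrative
    return -alpha * np.linalg.solve(H_hat + delta * np.eye(n), grad)

for delta in (1e-3, 1e-2, 1e-1, 1.0):
    draws = np.array([update_direction(delta) for _ in range(20)])
    rel_spread = np.linalg.norm(draws.std(axis=0)) / np.linalg.norm(draws.mean(axis=0))
    print(f"delta = {delta:g}: relative spread of the update over 20 curvature draws = {rel_spread:.4f}")
```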
Such methods are discussed at length in <cit.> and also describe some of the most successful optimisation algorithms used in practice, such as Adam <cit.>. The most natural choice for B_t is B_t = ∇^2 L(w⃗_t), namely the Hessian of the loss surface. In practice, it is standard to include a damping parameter δ>0 in B_t, avoid divergences when inverting. Moreover, typically B_t will be constructed to be some positive semi-definite approximation to the curvature such as the generalised Gauss Newton matrix <cit.>, or the diagonal gradient variance form used in Adam <cit.>. Let us now suppose that B_t = B_t(δ) = Ĥ_t + δ, where Ĥ_t is some chosen positive semi-definite curvature approximation and δ>0. We can now identify B_t(δ)^-1 as in fact the Green's function of Ĥ_t, i.e. B_t(δ)^-1 = -(-δ - Ĥ_t)^-1 = -G_t(-δ). But G_t is precisely the object used in the statement of a local law on for Ĥ_t. Note that ∇ L(w⃗_t) is a random vector and however Ĥ_t is constructed, it will generally be a random matrix and dependent on ∇ L(w⃗_t) in some manner that is far too complicated to handle analytically. As we have discussed at length hitherto, we conjecture that a local law is reasonable assumption to make on random matrices arising in deep neural networks. In particular in Chapter <ref> <cit.> we demonstrated universal local random matrix theory statistics not just for Hessians of deep networks but also for Generalised Gauss-Newton matrices. Our aim here is to demonstrate how a local law on Ĥ_t dramatically simplifies the statistics of (<ref>). Note that some recent work <cit.> has also made use of random matrix local laws to simplify the calculation of test loss for neural networks. A local law on Ĥ_t takes the precise form (for any ξ, D>0 sup_u⃗,v⃗ = 1, z∈S⃗( |u⃗^TG(z)v⃗ - u⃗^TΠ(z)v⃗| > N^ξ(1/Nη + g_μ(z)/Nη)) ≤ N^-D where S⃗ = {E + iη∈| |E| ≤ω^-1,   N^-1 + ω≤η≤ω^-1} μ is the limiting spectral measure of Ĥ_t and, crucially, Π is a deterministic matrix. We will use the following standard notation to re-express (<ref>) |u⃗^TG(z)v⃗ - u⃗^TΠ(z)v⃗| ≺Ψ_N(z),    u⃗,v⃗ = 1, z∈S⃗, where Ψ_N(z) = 1/Nη + g_μ(z)/Nη and the probabilistic statement, valid for all ξ, D>0 is implicit in the symbol ≺. In fact, we will need the local law outside the spectral support, i.e. at z = x + iη where x∈ℝ\supp(μ). In that case Ψ_N(z) is replaced by 1/N(η + κ) where κ is the distance of x from supp(μ) on the real axis, i.e. |u⃗^TG(z)v⃗ - u⃗^TΠ(z)v⃗| ≺1/N(η + κ),    u⃗,v⃗ = 1,   x∈ℝ\supp(μ). For δ>0 this becomes |u⃗^TG(-δ)v⃗ - u⃗^TΠ(-δ)v⃗| ≺1/Nδu⃗_2 v⃗_2 for δ>0 and now any u⃗, v⃗. Applying this to (<ref>) gives |u⃗^TB_t^-1∇ L(w⃗_t) - u⃗^TΠ_t(-δ)∇ L(w⃗_t) | ≺1/Nδu⃗_2 ∇ L(w⃗_t)_2. Consider any u⃗ with u⃗_2 = α, then we obtain |u⃗^TB_t^-1∇ L(w⃗_t) - u⃗^TΠ_t(-δ)∇ L(w⃗_t) | ≺α∇ L(w⃗_t)_2/Nδ. Thus with high probability, for large N, we can replace (<ref>) by w⃗_t+1 = w⃗_t - αΠ_t(-δ) ∇ L(w⃗_t) incurring only a small error, provided that δ >> ∇ L(w⃗_t)_2/Nα. Note that the only random variable in (<ref>) is ∇ L (w⃗_t). If we now consider the case ∇ L (w⃗_t) = ∇L̅(w⃗_t) + g⃗(w⃗_t) for deterministic L̅, then w⃗_t+1 = w⃗_t - αΠ_t(-δ) ∇L̅(w⃗_t) - αΠ_t(-δ)g⃗(w⃗_t) and so the noise in the parameter update is entirely determined by the gradient noise. Moreover note the linear dependence on g⃗ in (<ref>). For example, a Gaussian model for g⃗ immediately yields a Gaussian form in (<ref>), and e.g. if g⃗ = 0, then (w⃗_t+1 - w⃗_t) = -αΠ_t(-δ) ∇ L(w⃗_t). A common choice in practice for Ĥ is a diagonal matrix, e.g. 
the diagonal positive definite curvature approximation employed by Adam <cit.>. In such cases, Ĥ is best viewed as an approximation to the eigenvalues of some positive definite curvature approximation. The next result establishes that a local law assumption on a general curvature approximation matrix can be expected to transfer to an analogous result on a diagonal matrix of its eigenvalues. Suppose that Ĥ obeys a local law of the form (<ref>). Define the diagonal matrix D such that D_i d=λ_i where {λ_i}_i are the sorted eigenvalues of Ĥ. Let G_D(z) = (z - D)^-1 be the resolvent of D. Let 𝔮_j[μ] be the j-th quantile of μ, the limiting spectral density of Ĥ, i.e. ∫_-∞^𝔮_j[μ] dμ(λ) = j/N. Then D obeys the local law |(G_D)_ij - δ_ij(z - 𝔮_j[μ])^-1| ≺1/N^2/3 (κ + η)^2,    z = x + iη,  x∈ℝ\supp(μ), where κ is the distance of x from supp(μ). Naturally, we can redefine D_i = λ_σi for any permutation σ∈ S_N and the analogous statement replacing 𝔮_j[μ] with 𝔮_σ(j) will hold. As in <cit.>, the local law (<ref>), (<ref>) is sufficient to obtain rigidity of the eigenvalues in the bulk, i.e. for any ϵ, D > 0 (∃ j  |  |λ_j - 𝔮_j[μ]| ≥ N^ϵ[min(j, N-j+1)]^-1/3N^-2/3) ≤ N^-D. Then we have |1/z - λ_j - 1/z - 𝔮_j[μ]|=| λ_j - 𝔮_j[μ]/(z - λ_j)(z - 𝔮_j[μ])|. For z=x+iη and x at a distance κ>0 from supp(μ) |z-𝔮_j[μ]|^2 ≥η^2 + κ^2 ≥1/2(η + κ)^2, and the same can be said for |z - 𝔮_j[μ]|^2 with high probability, by applying the rigidity (<ref>). A second application of rigidity to |λ_j - 𝔮_j[μ]| gives |1/z - λ_j - 1/z - 𝔮_j[μ]| ≺1/N^2/3min(j, N-j+1)^1/3 (κ + η)^2 which yields the result. With this result in hand, we get the generic update rule akin to (<ref>), with high probability w⃗_t+1 = w⃗_t -α diag(1/π_j+ δ) ∇L̅(w⃗_t) - α diag(1/π_j + δ) g⃗(w⃗_t) where {π_j}_j=1^N are the eigenvalues of Π_t(0) and we emphasise again that the π_j are deterministic; the only stochastic term is the gradient noise g⃗(w⃗_t). Implications for preconditioned stochastic gradient descent The key insight from this section is that generic random matrix theory effects present in preconditioning matrices of large neural networks can be expected to drastically simplify the optimisation dynamics due to high-probability concentration of the pre-conditioning matrices around deterministic equivalents, nullifying the statistical interaction between the pre-conditioning matrices and gradient noise. Moreover, with this interpretation, the damping constant typically added to curvature estimate matrices is more than a simple numerical convenience: it is essential to yield the aforementioned concentration results. As an example of the kind of analysis that the above makes possible, consider the results of Chapter <ref> (or see <cit.> for more details). The authors consider a Gaussian process model for the noise in the loss surface, resulting in tractable analysis for convergence of stochastic gradient descent in the presence of statistical dependence between gradient noise in different iterations. Such a model implies a specific form of the loss surface Hessian and its statistical dependence on the gradient noise. This situation is a generalisation of the spin glass model exploited in various works <cit.> and in Chapters <ref> and <ref>, except that in those cases the Hessian can be shown to be independent of the gradients. 
Absent the very special conditions that lead to independence, one expects the analysis to be intractable, hence why in Chapter <ref> we restrict to stochastic gradient descent without preconditioning, or simply assume a high probability concentration on a deterministic equivalent. To make this discussion more concrete, consider a model L = L_true + V where V is a Gaussian process with mean 0 and covariance function K(x⃗, x⃗') = k(1/2x⃗ - x⃗'_2^2) q( 1/2(x⃗_2^2 + x⃗'_2^2) ), where k is some decreasing function and q some increasing function. The discussion at the end of the previous section suggests that the covariance function for loss noise should not be modelled as stationary, hence the inclusion of the function q in (<ref>). For convenience define Δ = 1/2(x⃗ - x⃗'_2^2) and S = 1/2(x⃗_2^2 + x⃗'_2^2). Then it is a short exercise in differentiation to obtain Cov(∂_i V(w⃗), ∂_j V(w⃗)) = Cov(∂_i V(w⃗), ∂_j V(w⃗'))|_w⃗=w⃗' = ∂^2/∂ w_i∂ w_j'K(w⃗, w⃗')|_w⃗=w⃗' = -k'(0)q(w⃗_2)δ_ij + k(0)q”(w⃗_2^2) w_iw_j. and moreover Cov(∂_il V(w⃗), ∂_j V(w⃗)) = Cov(∂_il V(w⃗), ∂_j V(w⃗'))|_w⃗=w⃗' = ∂^3/∂ w_i∂ w_l∂ w_j'K(w⃗, w⃗')|_w⃗=w⃗' = -k'(0)q'(w⃗_2^2)w_lδ_ij + q”'(w⃗_2^2)k(0) w_iw_lw_j' - k'(0)q'(x⃗_2)w_iδ_jl. Hence we see that the gradients of L and its Hessian are statistically dependent by virtue of the non-stationary structure of V. Putting aside issues of positive definite pre-conditioning matrices, and taking δ such that (∇^2 L + δ)^-1 exists (almost surely) for large N, it would appear that the distribution of (∇^2 L + δ)^-1∂ V will be complicated and non-Gaussian, assuming no extra information about the statistical interaction between the resolvent matrix and the gradient. This example concretely illustrates our point: even in almost the simplest case, where the gradient noise is Gaussian, the pre-conditioned gradients are generically considerably more complicated and non-Gaussian. Moreover, centred Gaussian noise on gradient is transformed into generically non-centred noise by pre-conditioning. Continuing the differentiation above, it is elementary to obtain the covariance structure of the Hessian ∇^2 V, though the expressions are not instructive. Crucially, however, the Hessian is Gaussian and the covariance of any of its entries is 𝒪(1) (in large N), so the conditions in Example 2.12 of <cit.> apply to yield an optimal local law on the Hessian, which in turn yields the above high-probability concentration of (∇^2 L + δ)^-1 provided that δ is large enough. This argument ratifies an intuition from random matrix theory, that for large N the resolvent matrix (∇^2 L + δ)^-1 is self-averaging and will be close, with high probability, to some deterministic equivalent matrix. § CONCLUSION In this chapter we have considered several aspects of so-called universal random matrix theory behaviour in deep neural networks. Motivated by prior experimental results, we have introduced a model for the Hessians of DNNs that is more general than any previously considered and, we argue, actually flexible enough to capture the Hessians observed in real-world DNNs. Our model is built using random matrix theory assumptions that are more general than those previously considered and may be expected to hold in quite some generality. By proving a new result for the addition of random matrices, using a novel combination of quantum unique ergodicity and the supersymmetric method, we have derived expressions for the spectral outliers of our model. 
Using Lanczos approximations to the outliers of large, practical DNNs, we have compared our expressions for spectral outliers to data and demonstrated strong agreement for some DNNs. As well as corroborating our model, this analysis presents indirect evidence of the presence of universal local random matrix statistics in DNNs, extending earlier experimental results. Our analysis also highlights a possibly interesting distinction between some DNN architectures, as Resnet architectures appear to better agree with our theory than other architectures, and Resnets have been previously observed to have better-behaved loss surfaces than many other architectures. We also presented quite general arguments regarding the number of local optima of DNN loss surfaces and how `rough' or `smooth' such surfaces are. Our arguments build on a rich history of complexity calculations in the statistical physics and mathematics literature but, rather than performing detailed calculations in some specific, highly simplified toy model, we instead present general insights based on minimal assumptions. Finally we highlight an important area where random matrix local laws, an essential aspect of universality, may very directly influence the performance of certain popular optimisation algorithms for DNNs. Indeed, we explain how numerical damping, combined with random matrix local laws, can act to drastically simplify the training dynamics of large DNNs. Overall this chapter demonstrates the relevance of random matrix theory to deep neural networks beyond highly simplified toy models. Moreover, we have shown how quite general and universal properties of random matrices can be fruitfully employed to derive practical, observable properties of DNN spectra. This work leaves several challenges for future research. All of our work relies on either local laws for e.g. DNN Hessians, or on matrix determinant self-averaging results. Despite the considerable progress towards establishing local laws for random matrices over the last decade or so, it appears that establishing any such laws for, say, the Hessians of any DNNs is quite out of reach. We expect that the first progress in this direction will come from considering DNNs with random i.i.d. weights and perhaps simple activation functions. Based on the success of recent works on random DNNs <cit.>, we conjecture that the Gram matrices of random DNN Jacobians may be the simplest place to establish a local law, adding to the nascent strand of nonlinear random matrix theory <cit.>. We also believe that there is more to be gained in further studies of forms of random matrix universality in DNNs. For example, our ideas may lead to tractable analysis of popular optimisation algorithms such as Adam <cit.> as the problem is essentially reduced to deriving a local law for the gradient pre-conditioning matrix and dealing with the gradient noise. CHAPTER: NEURAL NETWORKS WITH GENERAL ACTIVATION FUNCTIONS: SUPPLEMENTARY This appendix provides supporting material for Chapter <ref>. § SPECIFIC EXPRESSION FOR THE LOW-RANK PERTURBATION MATRIX The rank-2 N-1× N-1 matrix S arises throughout the course of Sections <ref> and <ref> and Lemma <ref>. The specific value of S is not required at any point during our calculations and, even though its eigenvalues appear in the result of Theorem <ref>, it is not apparent that explicit expressions for its eigenvalues would affect the practical implications of the theorem.
These considerations notwithstanding, in this supplementary section we collate all the expressions involved in the development of S from the modeling of the activation function in Section <ref> through to Lemma <ref>. Beginning at the final expression for S in Lemma <ref> S_ij = 1/2(N-1)H(H-1)(ξ_3 + ξ_2(δ_i1 + δ_j1) + ξ_1δ_i1δ_j1), where, recalling the re-scaling (<ref>), ξ_0 = ∑_ℓ=1^HN^-ℓ/2ρ_ℓ^(N) ξ_1 = ∑_ℓ=1^H-2N^-ℓ/2ρ_ℓ^(N)[(H-ℓ)(H-ℓ -1) +1 ] ξ_2 =∑_ℓ=1^H-2N^-ℓ/2ρ_ℓ^(N)(H-ℓ - 2) ξ_3 = ∑_ℓ=1^H-2N^-ℓ/2ρ_ℓ^(N) The ρ_ℓ were defined originally in (<ref>) and re-scaled around (<ref>) so that ρ_ℓ = A_i,j^(ℓ)/ A_i,j where A_i,j are discrete random variables taking values in 𝒜{∏_i=1^H α_j_i  :  j_1,…, j_H ∈{1,…, L}} and A^(ℓ)_i,j take values in 𝒜^(ℓ){β_k∏_r=1^H-ℓα_j_r :  j_1,…, j_H-ℓ, k ∈{1,…, L}} but we have not prescribed the mass function of the A_i,j or A_i,j^(ℓ). Lastly recall that the α_j, β_j are respectively the slopes and intercepts of the piece-wise linear function chosen to approximate the activation function f. § EXPERIMENTAL DETAILS In this section we give further details of the experiments presented in Section <ref>. The MLP architecture used consists of hidden layers of sizes 1000, 1000, 500, 250. The CNN architecture used is a standard LeNet style architecture: * 6 filters of size 4× 4. * Activation. * Max pooling of size 2× 2 and stride 2. * 16 filters of size 4× 4. * Activation. * Max pooling of size 2× 2 and stride 2. * 120 filters of size 4× 4. * Activation. * Dropout. * Fully connected to size 84. * Activation * Dropout. * Fully connected to size 10. The activation functions used were the ubiquitous defined by (x) = max(0, x), and defined by (x) = x    for x∈(-1,1), -1    for x≤ -1, 1    for x≥ 1, and a custom 5 piece function f_5 with gradients 0.01,0.1, 1, 0.3, 0.03 on (-∞, -2), (-2,-1), (-1,1), (1,2), (2, ∞) respectively, and f_5(0) = 0. We implemented all the networks and experiments in PyTorch <cit.> and our code is made available in the form of a Python notebook capable of easily reproducing all plots[<https://github.com/npbaskerville/loss-surfaces-general-activation-functions>.]. CHAPTER: A SPIN GLASS MODEL FOR GENERATIVE ADVERSARIAL NETWORKS: SUPPLEMETARY This appendix provides supporting material for Chapter <ref>. § BIPARTITE SPIN-GLASS FORMULATION Recalling the expression for , one could argue that a more natural formulation would be (, ) = ∑_i_1,…, i_p=1^N_D∑_j_1,…, j_q=1^N_G Z_i_1,…, i_p, j_1,…, j_q∏_k=1^p _i_k∏_l=1^q _j_l for i.i.d. Gaussian Z. In this case, each term in the sum contains exactly p weights from the discriminator network and q weights from the generator. This object is known as a bipartite spin glass. We will now present the Gaussian calculations. We need the joint distributions (, _i , _jk),   (, _i , _jk, _l , _mn) where the two groups are independent from of each other. As in <cit.>, we will simplify the calculation by evaluating in the region of the north poles on each hyper-sphere. behaves just like a single spin glass, and so we have <cit.>: Var() = 1, Cov(_i , _jk) = 0, _ij | {=x_D} ∼(N_D-1)p(p-1)GOE^N_D - 1 - x_DpI, Cov(_i, _j ) = pδ_ij. 
To find the joint and thence conditional distributions for , we first compute the covariance function, which follows from the independence of the Z: Cov((, ), (', ')) = ∑_i_1,…, i_p=1 i_1',…, i_p'=1^N_D  ∑_j_1,…, j_q=1 j_1',…, j_q'=1^N_G𝔼 Z_i⃗i⃗Z_i⃗'j⃗'∏_k=1^p _i_k_i_k'' ∏_l=1^q _j_l_j_l'' = ∑_i_1,…, i_p=1^N_D  ∑_j_1,…, j_q=1^N_G∏_k=1^p _i_k_i_k' ∏_l=1^q _j_l_j_l' = (·')^p(·')^q The product structure of the covariance function implies that we can write down the following covariances directly from the simple spin-glass case, as the and derivatives act independently on their respective terms: Var() = 1, Cov(_ij, ) = -qδ_ij, Cov(_ij, ) = -pδ_ij, Cov(_ij, _kl) = q(q-1)(δ_ikδ_jl + δ_ilδ_jk) + q^2 δ_ijδ_kl, Cov(_ij, _kl) = p(p-1)(δ_ikδ_jl + δ_ilδ_jk) + p^2 δ_ijδ_kl, Cov(_ij, _kl) = pq δ_ijδ_kl, Cov(_i_j, _k_l) = pq δ_ikδ_jl, Cov(_ij, _k_l) = 0 Cov(_ij, _k_l) = 0, Cov(_i_j , ) = 0. Also, all first derivatives of are clearly independent of and its second derivatives by the same reasoning and Cov(_i, _j) = qδ_ij, Cov(_i, _j) = pδ_ij, Cov(_i, _j) = 0. We caw deduce the full gradient covariances, recalling that and are independent: Cov(∂^(D)_i L^(D), ∂^(D)_j L^(D)) = p(1 + σ_z^2)δ_ij Cov(∂^(G)_iL^(G), ∂^(G)_j L^(G)) = σ^2_z qδ_ij Cov(∂^(D)_iL^(D), ∂^(G)_j L^(G)) = 0 and so φ_(∇_D L^(D), ∇_G L^(G))(0) = (2π)^-N-2/2(p + σ_z^2p)^-N_D - 1/2(σ_z^2 q)^-N_G-1/2. We need now to calculate the joint distribution of (_ij, _kl) conditional on { = x_G}. Denote the covariance matrix for (_ij, _kl, ) by Σ = ([ Σ_11 Σ_12; Σ_21 Σ_22 ]) where Σ_11 = ([ p(p-1)(1 + δ_ij) + p^2δ_ij pqδ_ijδ_kl; pq δ_ijδ_kl q(q-1)(1 + δ_kl) + q^2δ_kl ]), Σ_12 = -([ pδ_ij; qδ_kl ]), Σ_21 = -([ pδ_ij qδ_kl ]), Σ_22 = 1. The conditional covariance is then Σ̅ = Σ_11 - Σ_12Σ_22^-1Σ_21 = ([ p(p-1)(1+δ_ij) 0; 0 q(q-1) (1 + δ_kl) ]). Repeating this calculation for (_ij, _kl, ) demonstrates that ∇_G^2|{ = x_G} has independent entries, up-to symmetry. The result (<ref>) demonstrates that, conditional on { = x_G}, ∇_G^2 and ∇_D^2 are independent GOEs. In summary, from (<ref>) and (<ref>-<ref>) we obtain ([ -∇_D^2 -∇_G∇_D; ∇_D∇_G ∇^2 ])  | { = x_G} d=2([ N_D -1p(p-1)M^(D) -2^-1/2pqG; 2^-1/2pqG^T N_G - 1q(q-1)M^(G) ])       - x_G ([ -pI_N_D 0; 0 qI_N_G ]) where M^(D)∼ GOE^N_D - 1 and M^(G)∼ GOE^N_G - 1 are independent GOEs and G is an independent N_D - 1 × N_G - 1 Ginibre matrix with entries of unit variance. At this point a problem becomes apparent. Suppose that q≤ p, then the variance of the lower-right block is strictly less than that of the off diagonal blocks. If we proceed with the strategy in the main text, there is no way of decomposing the lower-right block as a sum of two independent smaller variance GOEs with one matching the variance of the off diagonal blocks. Similarly, if q>p, then the final Hessian involving , will have lower-variance in the upper-left block than the off-diagonals unless very specific undesirable conditions hold on p,q and σ_z. In either of these cases, we cannot decompose the final Hessian as a sum of a large N-2× N-2 GOE and some smaller GOEs in the upper-left or lower-right blocks. We would therefore have to truly compute the Ginibre averages in the supersymmetric method, which we believe is intractable. We could complete the complexity calculation via the methods of chapter <ref> supposing that the appropriate conditions hold on p, q and σ_z. It would look much the same as the calculation in the main text, though the resulting polynomial for the spectral density would be different. 
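As a quick numerical sanity check of the product covariance structure computed above (this sketch is ours, not part of the thesis; we write x⃗ and z⃗ for the discriminator and generator weight vectors, whose original symbols were typeset macros), one can sample the i.i.d. Gaussian couplings Z directly for small assumed sizes N_D, N_G and degrees p, q and verify that the empirical covariance of the bipartite spin glass at two points matches (x⃗·x⃗')^p (z⃗·z⃗')^q up to Monte Carlo error.

```python
# Monte Carlo check (ours) of the bipartite spin-glass covariance
# Cov(ell(x, z), ell(x', z')) = (x . x')^p (z . z')^q for i.i.d. standard
# Gaussian couplings Z; sizes and degrees below are assumed toy values.
import numpy as np

rng = np.random.default_rng(0)
N_D, N_G, p, q = 4, 3, 2, 3
n_samples = 50_000

def ell(Z, x, z):
    """Full contraction of the rank-(p+q) coupling tensor Z with p copies of x and q copies of z."""
    out = Z
    for _ in range(p):
        out = np.tensordot(x, out, axes=([0], [0]))
    for _ in range(q):
        out = np.tensordot(z, out, axes=([0], [0]))
    return float(out)

def unit(v):
    return v / np.linalg.norm(v)

# two nearby unit direction pairs, so that the dot products are O(1)
x1 = unit(rng.normal(size=N_D)); x2 = unit(x1 + 0.3 * rng.normal(size=N_D))
z1 = unit(rng.normal(size=N_G)); z2 = unit(z1 + 0.3 * rng.normal(size=N_G))

shape = (N_D,) * p + (N_G,) * q
vals1 = np.empty(n_samples); vals2 = np.empty(n_samples)
for s in range(n_samples):
    Z = rng.normal(size=shape)          # fresh i.i.d. Gaussian couplings
    vals1[s] = ell(Z, x1, z1)
    vals2[s] = ell(Z, x2, z2)

print("empirical covariance:", np.cov(vals1, vals2)[0, 1])
print("predicted (x.x')^p (z.z')^q:", (x1 @ x2) ** p * (z1 @ z2) ** q)
```

The two printed numbers agree up to Monte Carlo noise of order 1/√(n_samples), which is the basis for the exact Gaussian calculations carried out above.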
Since this work was completed, the complexity results for bipartite spin glasses were obtained in <cit.> using an entirely new method developed in the companion paper <cit.>. Applying this method arguably presents more technical hurdles than the supersymmetric approach to complexity calculations, however it is much more general and can be applied to the above model for any p, q and σ_z. § EXTRA PLOTS This section contains some extra plots to back up the comparisons between our model's predictions and the experimental DCGAN results in Section <ref>. In particular, we produce versions of the plots in Figures <ref> and <ref> but for various values of p and q other than p=q=5. Since p=q=5 is the structurally correct choice for the DCGAN, it is natural to ask if any agreement between theory and experiment is most closely obtained with p=q=5. Figure <ref> shows that the model has the same deficiency in κ for all p,q values tested. Figure <ref> shows best agreement for p=q=5, p=3, q=7 and p=7, q=3, and similarly in Figure <ref>. There is perhaps weak evidence that the role of p and q as representing the number of layers in the networks has some merit experimentally. CHAPTER: APPEARANCE OF LOCAL RANDOM MATRIX STATISTICS: SUPPLEMENTARY This appendix provides supporting material for Chapter <ref>. § EXTRA FIGURES AND DEGENERACY INVESTIGATION Figure <ref> compares the effect of degeneracy on unfolded spacings in each of the 3 cases considered. We see that the logistic MNIST models (trained and untrained) have a much greater level of degeneracy, whereas the CIFAR10-Resnet34 spectra clearly have GOE spacings even without any cut-off. Figures <ref>–<ref> show further unfolded spacing and spacing ratio results like those in the main text. § EXPERIMENTAL DETAILS §.§ Network architectures Logistic regression (MNIST) * Input features 784 to 10 output logits. 2-layer MLP (MNIST) * Input features 784 to 10 neurons. * 10 neurons to 100 neurons. * 100 neurons to 10 output logits. 3-layer MLP (MNIST) * Input features 784 to 10 neurons. * 10 neurons to 100 neurons. * 100 neurons to 100 neurons. * 100 neurons to 10 output logits. Logistic regression on ResNet features (CIFAR10) * Input features 513 to 10 neurons. LeNet (CIFAR10) * Input features 32x32x3 through 5x5 convolution to 6 output channels. * 2x2 max pooling of stride 2. * 5x5 convolution to 16 output channels. * 2x2 max pooling of stride 2. * Fully connection layer from 400 to 120. * Fully connection layer from 120 to 84. * Fully connection layer from 84 to output 10 logits. MLP (CIFAR10) * 3072 input features to 10 neurons. * 10 neurons to 300 neurons. * 300 neurons to 100 neurons. MLP (Bike) * 13 input features to 100 neurons. * 100 neurons to 100 neurons. * 100 neurons to 50 neurons. * 50 neurons to 1 regression output. §.§ Other details All networks use the same (default) initialisation of weights in PyTorch, which is the `Kaiming uniform' method of <cit.>. All networks used ReLU activation functions. §.§ Data pre-processing For the image datasets MNIST and CIFAR10 we use standard computer vision pre-processing, namely mean and variance standardisation across channels. We refer to the accompanying code for the precise procedure The Bike dataset has 17 variables in total, namely: , , , , , , , , , , , , , , , , . All variables are either positive integers or real numbers. It is standard to view as the regressand, so one uses some or all of the remaining features to predict . 
This is the approach we take, however we slightly reduce the number of features by dropping , , , since is just an index and +=, so including those features would render the problem trivial. We map to a integer uniquely representing the date and we standardise by dividing by its mean. CHAPTER: UNIVERSAL CHARACTERISTICS OF LOSS SURFACES: SUPPLEMENTARY This appendix provides supporting material for Chapter <ref> including full details of the experimental set-up and analysis for the outlier experiments. § ARCHITECTURES AND TRAINING OF MODELS. We use the GPU powered Lanczos quadrature algorithm <cit.>, with the Pearlmutter trick <cit.> for Hessian vector products, using the PyTorch <cit.> implementation of both Stochastic Lanczos Quadrature and the Pearlmutter. We then train a 16 Layer VGG CNN <cit.> with P=15291300 parameters and the 28 Layer Wide Residual Network <cit.> architectures on the CIFAR-100 dataset <cit.> (45,000 training samples and 5,000 validation samples) using SGD. We use the following learning rate schedule: α_t = α_0, if t/T≤ 0.5 α_0[1 - (1 - r)(t/T - 0.5)/0.4] if 0.5 < t/T≤ 0.9 α_0r, otherwise. We use a learning rate ratio r=0.01 and a total number of epochs budgeted T=300. We further use momentum set to ρ=0.9, a weight decay coefficient of 0.0005 and data-augmentation on PyTorch <cit.>. § IMPLEMENTATION OF CONSTRAINTS As mentioned in the main text, one of the three weights of the linear model fit in the outlier analysis, β, is constrained to be positive, as it corresponds to a second cumulant, i.e. a variance, of a probability measure. Recall that the linear model's parameters are solved exactly as functions of the unknown θ^(i), and these parameters are in turn optimised using gradient descent. β is unconstrained during the linear solve, but its value is determined by the θ^(i), so to impose the constraint β>0 we add to the mean squared error loss the term β = 1000 max(0, -β) which penalises negative β values and is minimised at any non-negative value. The factor 1000 was roughly tuned by hand to give consistently positive values for β. There is also the constraint that θ^(i) > θ^(i+1)>0 for all i. This is imposed simply using a re-parametrisation. We introduce unconstrained raw value t^(i) taking values in and define θ^(i) = ∑_j=1^ilog ( 1 + exp(t^(j)) ), then the gradient descent optimisation is simply performed over the t^(i). § FITTING OF OUTLIER MODEL We optimise the mean squared error with respect to the raw parameters t^(i) using 200 iterations of Adam <cit.> with a learning rate of 0.2. The learning rate was chosen heuristically by increasing in steps until training became unstable. The number of iterations was chosen heuristically as being comfortably sufficient to obtain convergence of Adam. The raw parameters t^(i) were initialised by drawing independently from a standard Gaussian. The t^(i) were initialised and trained using the above method 20 times and the values with the lowest mean squared error were chosen. CHAPTER: A RANDOM MATRIX APPROACH TO DAMPING IN DEEP LEARNING: SUPPLEMENTARY This appendix provides some extra training plots in support of Chapter <ref>.
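Returning briefly to the outlier-model fit described in the preceding appendix, the following toy sketch (ours, not the authors' released code) illustrates the two constraint devices used there: the raw parameters t^(i) are mapped through a cumulative softplus so that the θ^(i) are positive and ordered by construction, a negative β is discouraged by the penalty 1000·max(0, -β), and the raw parameters are optimised with Adam at learning rate 0.2 for 200 iterations. The target values and the way β is tied to θ below are placeholders, since the exact linear solve depends on the outlier data.

```python
# Toy illustration (ours) of the reparametrisation and penalty described above.
import torch
import torch.nn.functional as F

def thetas_from_raw(t_raw):
    # theta^(i) = sum_{j<=i} log(1 + exp(t^(j))): each theta is positive and the
    # sequence is monotone by construction, enforcing the ordering constraint.
    return torch.cumsum(F.softplus(t_raw), dim=0)

def beta_penalty(beta, weight=1000.0):
    # L_beta = weight * max(0, -beta): zero whenever beta >= 0.
    return weight * torch.clamp(-beta, min=0.0)

target = torch.tensor([0.5, 1.5, 3.0])            # hypothetical fit targets
t_raw = torch.randn(3, requires_grad=True)        # standard-Gaussian initialisation
opt = torch.optim.Adam([t_raw], lr=0.2)

for step in range(200):                           # 200 Adam iterations, as above
    opt.zero_grad()
    theta = thetas_from_raw(t_raw)
    beta = theta[-1] - theta[0] - 1.0             # stand-in for the exactly-solved parameter
    loss = ((theta - target) ** 2).mean() + beta_penalty(beta)
    loss.backward()
    opt.step()

print(thetas_from_raw(t_raw).detach())            # positive, ordered, close to the targets
```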
http://arxiv.org/abs/2306.17841v1
20230630175814
Domain wall interpretation of the PTA signal confronting black hole overproduction
[ "Yann Gouttenoire", "Edoardo Vitagliano" ]
gr-qc
[ "gr-qc", "astro-ph.CO", "hep-ph", "hep-th" ]
[email protected] School of Physics and Astronomy, Tel-Aviv University, Tel-Aviv 69978, Israel Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 91904, Israel Recently, NANOGrav has reported the observation of a stochastic gravitational wave background (SGWB) at nano-Hertz frequencies. String-wall networks and domain walls have been proposed as possible sources. To be cosmologically viable, these topological defect networks must annihilate before they dominate the energy budget of the universe, producing a SGWB. However, a part of the network can copiously produce primordial black holes that exceed current bounds. Performing a Bayesian analysis of pulsar timing residual datasets we find that the SGWB detected in PTA data is therefore hardly compatible with such an origin. This lends credibility to other interpretations, including supermassive black hole mergers, first order phase transitions, Nambu-Goto strings, and curvature-induced gravitational waves. Domain wall interpretation of the PTA signal confronting black hole overproduction Edoardo Vitagliano 0000-0001-7847-1281 July 31, 2023 =================================================================================== § INTRODUCTION The North American Nanohertz Observatory for Gravitational Waves (NANOGrav), a pulsar timing array (PTA) part of the International Pulsar Timing Array (IPTA), comprising the European Pulsar Timing Array (EPTA), the Parkes Pulsar Timing Array (PPTA) in Australia, and the Indian Pulsar Timing Array Project (InPTA), has recently reported the observation of a stochastic gravitational wave background (SGWB) with a strain of 2.7^+0.7_-0.6× 10^-15 (median with 90% credible interval) at the frequency yr^-1, corresponding to a total energy density in the sensitivity band of Ω_ GWh^2 = 6.5^+4.1_-2.8× 10^-9 <cit.>. This result confirms the hint to a SGWB observed in previous years <cit.>, by EPTA <cit.>, PPTA <cit.> and IPTA <cit.>. Sources of nanoHz GWs could be a population of supermassive black hole binaries <cit.> or could be related to early universe phenomena <cit.>, such as first order phase transitions <cit.>, second-order gravitational waves produced during the formation of primordial black holes <cit.>, and topological defects <cit.>. The simplest mechanism forming Domain walls (DW) is the spontaneous breaking of a discrete symmetry, e.g. 𝒵_2 <cit.>. DW dilute slower than radiation in expanding cosmology <cit.>. To be viable, there must exist an energy bias between distinct vacua so that DW are pulled toward annihilating with each other. Upon annihilation, the DW system can abundantly produce GWs <cit.> with a nanoHz peak frequency within PTA window if the system annihilates at a temperature T_ ann∼ 10 MeV <cit.>. A another DW formation scenario is when a global U(1) symmetry is first spontaneously broken to form cosmic strings and then is later explicitly broken when the Goldstone mode receives mass corrections <cit.>. A network of global strings alone cannot be the source of the observed PTA signal, as the string tension needed to source such a large amplitude would imply the abundant production of Goldstone bosons <cit.> in conflict with the Big-Bang Nucleosynthesis (BBN) bound on the effective number of neutrino species N_ eff. The evolution of the string-DW network can be more complicated and depends on the number N of minima along the orbit of vacua. For N=1, DW gets bounded by strings and rapidly annihilate. 
The presence of DW only weakly enhance the GW signal from global strings <cit.> and we conclude that interpretation of PTA signal in term of string-wall network are excluded by N_ eff bounds <cit.>. If N>1, a stable string-wall network is produced, and the evolution is similar to the discrete symmetry breaking domain-wall system described above. Such system, often considered in the context of the QCD axion <cit.> and more recently in the context of axion-like particles <cit.> and high-quality QCD axion models <cit.>, has also been considered as a source of a signal compatible with PTA observations (see e.g. <cit.>). In this Letter, we perform a Bayesian analysis of PTA datasets NG12.5 <cit.> and IPTADR2 <cit.> in presence of GW from DW annihilation. We find that the interpretation of pure domain-wall and N>1 string-wall systems as a possible source of the PTA signal are in tension with the B overproduction of primordial black holes (PBHs). In both cases, the system can feature spherical domains which collapse to PBHs when they shrink below their Schwarzschild radius <cit.>. While this mechanism might potentially be related to the production of PBH dark matter <cit.> or of supermassive black holes <cit.>, we show that the same mechanism would overproduce PBHs if the annihilation temperature and energy stored in the system are tuned to produce the amplitude and frequency of the SGWB observed by PTAs. Assuming a toy model of a real scalar field ϕ, ℒ=-1/2∂_μϕ∂^μϕ -λ/4(ϕ^2-v^2)^2, the potential V (ϕ) has two degenerate minima at ϕ=± v. This lagrangian is invariant under the discrete Z_2 symmetry which is spontaneously broken when the scalar field acquires a vacuum expectation value (VEV). The scalar field takes one of the two discrete values after the spontaneous symmetry breaking, and domain walls with size ∼ t× t× (√(λ)v)^-1 are produced as a boundary of two different domains. The domain-wall system would come to dominate the universe if it were not for a bias, an energy difference between fake vacua and a true vacuum <cit.>. If such bias exists, the domain-wall systems annihilate when the energy density of the surface σ/t∼ v^3/t is comparable to the volume energy density due to the bias, V_ bias∼ϵ_b v^4, where we parametrize the bias through the small dimensionless number ϵ_b. Upon annihilation, the domain-wall system abundantly produces GWs <cit.>, and the frequency can match the one needed to explain the IPTA signal if the system annihilates at a temperature T_ ann∼ 10 MeV. While Nambu-Goto strings, which are produced upon the breaking of a gauge symmetry, can be the source of the IPTA signal, e.g. <cit.>, For different symmetry breaking patterns, a different topological defect systems is produced. The spontaneous breaking of a U(1) symmetry implies the existence of cosmic strings that are either global or local (see e.g. <cit.>). While Nambu-Goto strings, which are produced upon the breaking of a gauge symmetry, can be the source of the IPTA signal <cit.>, global strings cannot be the source of the observed signal, as the tension needed to source such a large amplitude would imply the abundant production of Goldstone bosons <cit.>. Therefore, the effective number of neutrino species N_ eff exclude global strings as a source of the detected signal. Domain-wall systems can feature closed domain walls. The existence of such spherical walls is possible also in string-wall systems, depending on the number of minima along the orbit of vacua after the breaking of the U(1) symmetry. 
In N=1 models the walls appear in the form of ribbons, which become the more and more narrow due to surface tension and eventually rapidly disappear. On the other hand, in N>1 models, closed walls are expected to arise and collapse keeping an approximately spherical shape. In the following, we refer for further details to the discussions of Ref. <cit.> (see also <cit.>). § GWS FROM DW ANNIHILATION Friction vs scaling regime. Denoting by v their typical DW velocity, we can estimate that DW have typical curvature radius R ≃ v t. Initially, the work of their surface tension σ with equivalent pressure 𝒫_T = σ / R toward straightening the DW is dampened by friction pressure 𝒫_V ≃ β T^4 where the dimensionless β sets the strength of DW-plasma interactions with the plasma. DW starts moving with relativistic velocity v≃𝒪(0.1) below the temperature T_ rel≃ 0.8 g_⋆^1/4√(σ/β M_ pl)≃530  MeV/√(10vβ)g_⋆^1/4(σ^1/3/10^5  GeV)^3/2, where M_ pl≃ 2.44 × 10^18  GeV, g_* the number of relativistic degrees of freedom and where we used Friedmann's equation T=1.2√(M_ pl/t)/g_*^1/4. The size of the friction coefficient β is model dependent <cit.>. In the present work, we set β≪ 1 and briefly discuss its implication at the end. Numerical simulations have shown that the energy stored in friction-less DW reaches the scaling regime as cosmic strings do, <cit.> ρ_ DW = σ/R, with R≃ t/𝒜, where 𝒜≃ 0.8 ± 0.1 is fitted on numerical simulations <cit.>. This results in the DW energy density redshifting slower than the main background fluid ρ_ bkg≃ M_ pl^2/t^2 such that DW rapidly dominate the energy density of the universe <cit.> below the temperature T_ dom≃1.4/g_⋆^1/4√(𝒜 σ/M_ pl)≃ 30  MeV 𝒜^1/2 g_⋆^1/4(σ^1/3/10^5  GeV)^3/2. Bias potential terms. We assume the presence of high dimensional operators that explicitly break U(1) symmetry. This transforms the flat direction into a discrete collection of vacua. The vacuum energy difference 𝒫V=V bias between these new minimum points acts as a source of pressure which make DW repel or attract each other until their eventual annihilation. DW annihilate when the vacuum pressure surpasses the pressure 𝒫_T=σ / R arising from their surface tension σ, below the temperature T_ ann≃ 100  MeV/𝒜^1/2g_⋆^1/4(10^5  GeV/σ^1/3)^3/2(V_ bias^1/4/40  MeV)^2. GW signal. During the annihilation process, DWs are driven to relativistic speed and radiate GW <cit.>. The GW power spectrum today produced by long-lived DWs annihilating at T_ ann can be expressed as <cit.> Ω_ GWh^2 = Ω_ peakh^2 S_ DW(f) where the peak amplitude today follows from the quadrupole formula <cit.> Ω_ peakh^2 ≃ 7.2 × 10^-10ϵ̃_ gw𝒜^2 ( 10/g_*s(T_ ann))^4/3 ×( σ^1/3/100  TeV)^6 ( 100  MeV/T_ ann)^4. while ϵ̃_ gw≃ 0.7 ± 0.4 is fitted on lattice simulations. <cit.> The peak frequency today is given by f_ peak = a(t_ ann)/a(t_0) H(t_ ann) ≃ 1.1  nHz( g_⋆(T_ ann)/10) ×( 10/g_*s(T_ ann))^1/3(T_ ann/10  MeV). We model the spectral function S_ DW(f) = 2/(f/f_ peak)+(f_ peak/f)^3, where the IR slope is Ω_ GW∝ f^3 to respect causality <cit.> and the UV slope is Ω_ GW∝ f^-1 as suggested by lattice simulations results <cit.>. § BAYESIAN ANALYSIS OF PTA DATA We performed a comprehensive Bayesian analysis of the DW interpretation of PTA signal. Waiting for NANOGrav 15  yr <cit.> to release their data publicly, we used the first 5 frequency bins of NANOGrav 12.5  yr <cit.> and the first 13 frequency bins of IPTA DR2 <cit.>. 
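For reference, the domain-wall spectral template that enters this analysis is straightforward to evaluate; the snippet below (ours, not the collaboration pipelines) transcribes the peak amplitude, peak frequency and spectral shape given above, with the fitted constants 𝒜 ≃ 0.8 and ϵ̃_gw ≃ 0.7, for assumed values of the wall tension σ^1/3 and annihilation temperature T_ann.

```python
# Domain-wall GW spectrum template (our transcription of the expressions above).
# sigma13 is the wall tension sigma^{1/3} in GeV, T_ann in GeV.
import numpy as np

A_DW = 0.8        # area parameter fitted on scaling simulations
EPS_GW = 0.7      # GW efficiency fitted on lattice simulations

def omega_peak_h2(sigma13, T_ann, g_star_s=10.0):
    return (7.2e-10 * EPS_GW * A_DW**2
            * (10.0 / g_star_s) ** (4.0 / 3.0)
            * (sigma13 / 1.0e5) ** 6        # (sigma^{1/3} / 100 TeV)^6
            * (0.1 / T_ann) ** 4)           # (100 MeV / T_ann)^4

def f_peak_nHz(T_ann, g_star=10.0, g_star_s=10.0):
    return 1.1 * (g_star / 10.0) * (10.0 / g_star_s) ** (1.0 / 3.0) * (T_ann / 0.01)

def omega_gw_h2(f_nHz, sigma13, T_ann):
    x = np.asarray(f_nHz) / f_peak_nHz(T_ann)
    S = 2.0 / (x + x ** -3)                 # f^3 rise in the IR, 1/f fall in the UV
    return omega_peak_h2(sigma13, T_ann) * S

# example: sigma^{1/3} = 100 TeV and T_ann = 100 MeV place the peak in the nHz band
print(f_peak_nHz(0.1), omega_peak_h2(1.0e5, 0.1))
```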
To extract the GW signal from the various source of noises, we closely followed the methodologies employed by the NANOGrav <cit.> and IPTA <cit.> research groups, with additional insights from other relevant literature <cit.>. We modified the software tools known as enterprise <cit.> and enterprise_extensions <cit.> to include the spectrum from DW annihilation. The parallel-tempering Markov Chain Monte-Carlo sampler, PTMCMC <cit.>, was used to explore the posterior distribution and, see the mean values in Tab. <ref>, and GetDist tool <cit.> was used to visualize it, see Fig. <ref>. § BBN CONSTRAINTS Ref. <cit.> has shown that the DW interpretation of the PTA GW signal is in slight tension with BBN if DW annihilate into a hidden sector. We now revisit the argument and minoring slight numerical differences, we reach similar conclusions. The energy density fraction in DWs at temperature T_ ann, normalized to radiation background reads: α_ DW≡ρ_ DM/ρ_ rad = √(g_⋆(T_ ann)/10.75)(σ^1/3/100  TeV)^3( 14  MeV/T_ ann)^2, where we approximated g_*,s=g_*. The presence of extra number Δ N_ eff of relativistic degrees of freedom at BBN and CMB would change the expansion rate of the universe and impact the CMB data or the abundance of light elements <cit.>. In App. <ref>, we show that relics with energy fraction α_ DW contribute to the number of relativistic degrees of freedom by: Δ N_ eff = 7.4 α_ DW ≲  0.3, We must distinguish two scenarios. If DW annihilate into a secluded sector, then Eq. (<ref>) applies for all T_ ann. Instead if DW annihilate dominantly into Standard Model (SM) degrees of freedom, then Eq. (<ref>) relaxes for T_ ann≳ 1  MeV <cit.>. Another bounds from the possibility for DWs to dominate the energy budget of the universe. In this case the universe starts expanding as a∝ t^2 <cit.> and it is uncertain if DWs can efficiently annihilate in such a rapidly expanding universe. To be conservative, we impose that DW must disappear before dominating the universe: T_ ann ≳  T_ dom. The corresponding N_ eff and DW domination constraints are shown in purple and gray in Figs. <ref> and <ref>. § PBH CONSTRAINTS Both pure domain-wall and N>1 string-wall systems feature closed configurations. During the scaling regime, DW have a size comparable to the cosmic horizon ≃ t. Closed DW collapse into PBHs if they shrink below their Schwarzschild radius which can happen if the ratio p(t)= R_ Sch(t)/t =2G M(t)/t becomes smaller than one <cit.>. Close to the start of the annihilation process, the mass within a closed wall reads M(t)≃4/3π t^3 V_ bias + 4π t^2 σ. DW annihilate at t_ ann when the volume term in Eq. (<ref>) dominates over the surface term. Therefore, the ratio p(t) increases with time as t^2. The temperature at which PBH collapse is defined by p(T_ PBH)=1, which implies T_ PBH≃ 120  MeV𝒜^1/4 g_ eff,1^1/8(T_ ann/1  GeV)^1/2( σ^1/3/10^5  MeV)^3/4 where g_ eff,1≡ g_*^2(T_ ann)/g_*(T_ PBH), and corresponding time is t_ PBH = 1/2 H(T_ PBH). The PBH mass is M(t_ PBH)≃4 π/3V_ bias t_ PBH^3: M_ PBH≃19  M_⊙/𝒜^1/2 g_*^1/4(T_ ann) ( 1  GeV/T_ ann)( 10^5  GeV/σ^1/3)^3/2. The PBH contribution to the DM abundance is: f_ PBH =ρ_ DW(T_ PBH)/ρ_ DM(T_ PBH) = ρ_ DW(T_ PBH)/ρ_ DM(T_ ann)ρ_ DW(T_ ann)/ρ_ DM(T_ PBH). The first factor describes how fast the energy stored in the DW network disappears as annihilation proceeds which according to lattice simulations appears to follow a power-law <cit.> ρ_ DW(T)/ρ_ DW(T_ ann)=(T/T_ ann)^α . Results collected in Tab. 
VI of <cit.> suggests that α, which parameterizes how fast the network annihilates, can take values between 9 and and 28 <cit.> (though smaller values like α =7 have also been considered in the literature <cit.>). The second factor in Eq. (<ref>) can be evaluated from evolving DW, DM and radiation energy densities until today and we get f_ PBH≃ g_ eff,2^1/2 T_ PBH^(α -3)  T_ dom^2/T_0   T_ ann^(α-2)(ρ_ rad/ρ_ DM)_0, where g_ eff,2≡(g_s⋆(T_0)/g_⋆(T_0))^2g_⋆(T_ ann) g_⋆(T_ dom)/g_s⋆^2(T_ PBH). In Fig. <ref>, we show the constraints due to PBH overclosure f_ PBH≳ 1, but also from distortion of the Cosmic Microwave Background <cit.>, LIGO-Virgo-Kagra (LVK) bounds <cit.> and microlensing limits from Eros datasets <cit.>. We collect all the constraints in Fig. <ref> and vary the exponent α which sets how fast the DW network is annihilating (ρ^2_ DW∝ 1/t^α). We conclude that the DW interpretation of PTA signal is excluded by PBH overproduction. Impact of friction. We know briefly discuss the impact of friction which we have neglected in our analysis. The PBH abundance might be strongly impacted in presence of friction. However this does not relieve the DW interpretation of the PTA signal. In fact, in presence of friction the GW signal gets suppressed <cit.> so that the confidence levels (blue and orange ellipses) rendering the PTA signals will move to the DW domination region in gray in Fig. <ref>, excluding the DW interpretation of PTA signal without having to study the PBH abundance in friction-dominated DW network. § DISCUSSION AND OUTLOOK Several PTA have reported the observation of a SGWB with an energy fraction of 5 × 10^-9 at nano-Hertz frequencies. The annihilation of topological defect systems has been listed among the possible sources. In this paper, we have shown that pure domain-wall and N>1 string-wall systems is in tension with the overproduction of primordial black holes (PBHs). Parameterizing the annihilation of the DW network by a power-law ρ_ DW^2 ∝ 1/t^α, values α≲ 50 result in a tension between the amplitude and frequency of the SGWB observed in the different PTA datasets and the overproduction of PBHs. This has been missed by previous works <cit.> claiming a DW interpretation of PTA signals. To further strengthen these results, dedicated simulations of the late evolution of domain-wall and string-wall networks should be realized. Hence, we add the DW to the graveyard of early universe phenomena failing short at explaining PTA GW signal, together with global strings (see introduction) and scalar-induced GW <cit.> in the Gaussian limit <cit.>. Recent works have shown that first-order phase transition can produce PBHs abundantly in the supercooled limit <cit.>. Further studied are needed to infer whether 1stOPT interpretation of the PTA signal <cit.> is in the PBH graveyard too. Our conclusions suggest that the only viable topological-defect origin of the PTA signal is one arising from Nambu-Goto strings. § ACKNOWLEDGMENTS YG thanks Simone Blasi, Alberto Mariotti, Oriol Pujolàs and Fabrizio Rompineve for useful discussions. YG is grateful to the Azrieli Foundation for the award of an Azrieli Fellowship. EV acknowledges support by the European Research Council (ERC) under the European Union’s Horizon Europe research and innovation programme (grant agreement No. 101040019). § OLD TEXT As the annihilation happens at T_ ann when the surface energy density and the volume energy density are comparable, Eq. (<ref>) implies M(t_ ann)≃16/3π t_ ann^3 V_ bias, so that the ratio p(t) in Eq. 
(<ref>) becomes p(T_ ann)≃30/π^2V_ bias/g_⋆(T_ ann) T_ ann^4. Notice that M(t_ ann) and p(T_ ann) only depend on the parameters V_ bias and σ, and there is no explicit information on the number of minima along the orbit of vacua. Nevertheless, the annihilation process might depend on the details of the network. After t_ ann, the volume contribution becomes dominant over the surface contribution, and M(t)≃4/3π t^3 V_ bias, and p(t)≃p(T_ ann)/4(t/t_ ann)^2 . Since p(t) must be smaller than 1, we can estimate the PBH formation temperature T_ PBH as the temperature at which p(T_ PBH)=1, which in terms of T_ ann is p(T_ PBH) ≃p(T_ ann)/4g_⋆(T_ ann)/g_⋆(T_ PBH)(T_ ann/T_ PBH)^4 =1 . This relation defines T_ PBH and its corresponding time t_ PBH = 1/2 H(T_ PBH). The PBH mass is then M(t_ PBH), M_ PBH = M(t_ PBH) ≃4 π/3V_ bias t_ PBH^3 ≃2/[p(T_ ann)]^3/2 M(t_ ann). Notice that the temperature of PBH formation T_ PBH depends only on V_ bias (or, equivalently, M_ PBH) T_ PBH≃ 0.9  GeV[V_ bias/ GeV^4 g_⋆(T_ PBH)]^1/4≃0.5   GeV/[g_⋆(T_ PBH)]^1/4(M_⊙/M_ PBH)^1/2. From Eq. (<ref>) and Eq. (<ref>) one finds p(T_ ann)≃t_ ann^2M_ P^4/M_ PBH^2 = 90/32 π^31/g_⋆(T_ ann)M_ P^6/T_ ann^4 M_ PBH^2 . The PBH density at formation is given by the energy density left in the wall system when PBHs form times the probability of PBH formation, i.e. ρ_ PBH(T_ PBH) ≃ p^β(T_ PBH) ρ_ wall(T_ PBH), and the fraction f_ PBH of the total DM density ρ_ DM in PBHs is f_ PBH=ρ_ PBH(T_ PBH)/ρ_ DM(T_ PBH)≃ρ_ wall(T_ PBH)/ρ_ wall(T_ ann)ρ_ wall(T_ ann)/ρ_ DM(T_ PBH). We see that the abundance of PBHs depend only on T_ PBH and T_ ann. Simulations of the process of the string-wall system annihilation for N>1 models have been performed <cit.>, which provided measurements of the times at which the area density of walls (area per unit volume A/V) is 10% and 1% of what it would have been without a bias (i.e. if the string-wall system would still have been in the scaling regime). We call these times t(10%) and t(1%), and the corresponding temperatures T(10%) and T(1%). We can approximate the evolution of the string-wall network energy density as a power law, ρ_ wall(T)/ρ_ wall(T_ ann)= (T/T_ ann)^α , with α extracted from the mentioned simulations <cit.>. Therefore, the abundance of PBHs is f_ PBH≃( T_ PBH/T_ ann)^αρ_ wall(T_ ann)/ρ_ DM(T_ PBH). ρ_ DM(T_ PBH) is easily related by redshift to the present dark matter density. Therefore, the only quantity left to be evaluated is ρ_ wall(T_ ann). This can be related to the wall-domination temperature T_ wd (i.e., the temperature at which the string-wall energy density is comparable to the radiation energy density), T_ wd≃0.9 × 10^-9  GeV/[g_⋆(T_ wd)]^1/4(σ/ GeV^3)^1/2. Considering the redshift of the radiation density to the present, Eq. (<ref>) becomes f_ PBH≃g_s⋆(T_0)/g_⋆(T_0) [g_⋆(T_ ann) g_⋆(T_ wd)]^1/2/g_s⋆(T_ PBH) T_ PBH^(α -3)  T_ wd^2/T_0   T_ ann^(α-2)(ρ_ rad/ρ_ DM)_0 , or, using Eq. (<ref>), f_ PBH≃3.9 g_s⋆(T_0)/g_⋆(T_0) g_⋆(T_ PBH)/g_s⋆(T_ PBH) T_ PBH^(α +1)/T_0   T_ ann^α (ρ_ rad/ρ_ DM)_0 . § 1214.1em §.§ 1214.1em §.§.§ 1214)1em 1214:1em § BBN BOUND Domain walls (DW) form a component of the total energy density of the universe. As such, they contribute to increase the expansion rate of the universe which makes neutron freeze-out earlier, increase the n/p ratio which in turn increases the Helium abundance <cit.>. The presence of DW can be described in terms of an extra number of neutrino species N_ eff = 8/7( ρ_ DW/ρ_γ)( 11/4)^4/3, where ρ_γ is the photon number density. 
We introduce the DW energy fraction in units of the radiation energy density, α_ DW(T) = ρ_ DW(T)/(π^2/30  g_*(T) T^4), where T is the SM photon temperature. From Eq. (<ref>) and Eq. (<ref>), the maximal DW contribution to N_ eff occurs at the annihilation temperature, with Δ N_ eff(T) = 2.20 g_*(T) α_ DW(T). To apply the BBN bound Δ N_ eff≲ 0.3 <cit.>, the effective number of extra relativistic degrees of freedom must be evaluated below the neutrino decoupling temperature, where g_*(T< T_ dec)≡ 2+(7/8)· 6· (4/11)^4/3≃ 3.36. Hence, we obtain Δ N_ eff = 7.4 α_ DW ≲ 0.3, which is slightly different from <cit.>. As discussed in the main text, we must distinguish the scenario in which DW annihilate into dark radiation, in which case Eq. (<ref>) is the BBN constraint, from the scenario in which they annihilate into the SM, in which case Eq. (<ref>) applies only if DW annihilate below the neutrino decoupling temperature, T_ ann≲ 1  MeV.
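As a compact numerical illustration (ours) of the scaling relations quoted in the main text and in this appendix, the sketch below evaluates α_DW, Δ N_eff, T_PBH and M_PBH for assumed values of σ^1/3 and T_ann. We set 𝒜 and all g_* ratio factors to one for simplicity, and we read the σ^1/3 reference scale in the quoted T_PBH expression as 10^5 GeV, consistently with the other formulas.

```python
# Rough evaluation (ours) of the BBN and PBH formulas quoted in this paper,
# with A = 1 and all g_* ratio factors set to 1. sigma13 in GeV, T_ann in GeV.
def alpha_DW(T_ann, sigma13, g_star=10.75):
    # alpha_DW = sqrt(g_*/10.75) (sigma^{1/3}/100 TeV)^3 (14 MeV / T_ann)^2
    return (g_star / 10.75) ** 0.5 * (sigma13 / 1.0e5) ** 3 * (0.014 / T_ann) ** 2

def delta_Neff(alpha, g_star=3.36):
    # Delta N_eff = 2.20 g_*(T) alpha_DW(T); g_* = 3.36 below neutrino decoupling
    return 2.20 * g_star * alpha

def T_PBH_GeV(T_ann, sigma13):
    # T_PBH ~ 120 MeV (T_ann / 1 GeV)^(1/2) (sigma^{1/3} / 1e5 GeV)^(3/4)
    return 0.120 * (T_ann / 1.0) ** 0.5 * (sigma13 / 1.0e5) ** 0.75

def M_PBH_Msun(T_ann, sigma13):
    # M_PBH ~ 19 Msun (1 GeV / T_ann) (1e5 GeV / sigma^{1/3})^(3/2)
    return 19.0 * (1.0 / T_ann) * (1.0e5 / sigma13) ** 1.5

# example values: sigma^{1/3} = 1e5 GeV, T_ann = 100 MeV
print(alpha_DW(0.1, 1.0e5), delta_Neff(alpha_DW(0.1, 1.0e5)))
print(T_PBH_GeV(0.1, 1.0e5), M_PBH_Msun(0.1, 1.0e5))
```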
http://arxiv.org/abs/2306.09066v1
20230615114850
A Bayesian approach to uncertainty in word embedding bias estimation
[ "Alicja Dobrzeniecka", "Rafal Urbaniak" ]
cs.CL
[ "cs.CL", "cs.HC", "cs.LG", "stat.AP", "stat.ME" ]
Competitive effects between gravitational radiation and mass variation for two-body systems in circular orbits Cyril Renevey July 31, 2023 ============================================================================================================== Abstract. Multiple measures, such as WEAT or MAC, attempt to quantify the magnitude of bias present in word embeddings in terms of a single-number metric. However, such metrics and the related statistical significance calculations rely on treating pre-averaged data as individual data points and employing bootstrapping techniques with low sample sizes. We show that similar results can be easily obtained using such methods even if the data are generated by a null model lacking the intended bias. Consequently, we argue that this approach generates false confidence. To address this issue, we propose a Bayesian alternative: hierarchical Bayesian modeling, which enables a more uncertainty-sensitive inspection of bias in word embeddings at different levels of granularity. To showcase our method, we apply it to Religion, Gender, and Race word lists from the original research, together with our control neutral word lists. We deploy the method using Google, Glove, and Reddit embeddings. Further, we utilize our approach to evaluate a debiasing technique applied to the Reddit word embedding. Our findings reveal a more complex landscape than suggested by the proponents of single-number metrics. The datasets and source code for the paper are publicly available.[<https://github.com/efemeryds/Bayesian-analysis-for-NLP-bias>] introduction § INTRODUCTION It has been suggested[See for instance [1,4,5,9,12,13].] that language models can learn implicit biases that reflect harmful stereotypical thinking—for example, the (vector corresponding to the) word she might be much closer in the vector space to the word cooking than the word he. Such phenomena are undesirable at least in some downstream tasks, such as web search, recommendations, and so on. To investigate such issues, several measures of bias in word embeddings have been formulated and applied. Our goal is to use two prominent examples of such measures to argue that this approach oversimplifies the situation and to develop a Bayesian alternative. A common approach in natural language processing is to represent words by vectors of real numbers—such representations are called embeddings. One way to construct an embedding—we will focus our attention on non-contextual language models[One example of a contextualized representation is BERT. Another is GPT.]—is to use a large corpus to train a neural network to assign vectors to words in a way that optimizes for co-occurrence prediction accuracy. Such vectors can then be compared in terms of their similarity—the usual measure is cosine similarity—and the results of such comparisons can be used in downstream tasks. Roughly speaking, cosine similarity is an imperfect mathematical proxy for semantic similarity [16]. One response to the raising of the issue of bias in natural language models might be to say that there is not much point in reflecting on such biases, as they are unavoidable. This unavoidability might seem in line with the arguments to the effect that learning algorithms are always value-laden [10]: they employ inductive methods that require design-, data-, or risk-related decisions that have to be guided by extra-algorithmic considerations. 
Such choices necessarily involve value judgments and have to do, for instance, with what simplifications or risks one finds acceptable. Admittedly, algorithmic decision making cannot fulfill the value-free ideal, but this only means that even more attention needs to be paid to the values underlying different techniques and decisions, and to the values being pursued in a particular use of an algorithm. Another response might be to insist that there is no bias introduced by the use of machine learning methods here since the algorithm is simply learning to correctly predict co-occurrences based on what “reality” looks like. However, this objection overlooks the fact that we, humans, are the ones who construct this linguistic reality, which is shaped in part by the natural language processing tools we use on a massive scale. Sure, if there is unfairness and our goal is to diagnose it, we should do complete justice to learning it in the model used to study it. One example of this approach is [3], where the authors use language models to study the shape of certain biases across a century. However, if our goal is to develop downstream tools that perform tasks that we care about without further perpetuating or exacerbating harmful stereotypes, we still have good reasons to try to minimize the negative impact. Moreover, it is often not the case that the corpora mirror reality—to give a trivial example, heads are spoken of more often than kidneys, but this does not mean that kidneys occur much less often in reality than heads. To give a more relevant example, the disproportionate association of female words with female occupations in a corpus actually greatly exaggerates the actual lower disproportion in the real distribution of occupations [6]. In what follows, we focus on two popular measures of bias applicable to many existing word embeddings, such as GoogleNews,[GoogleNews-vectors-negative300, available at <https://github.com/mmihaltz/word2vec-GoogleNews-vectors>.] Glove[Available at <https://nlp.stanford.edu/projects/glove/>.] and Reddit Corpus[Reddit-L2 corpus, available at <http://cl.haifa.ac.il/projects/L2/>.]: Word Embedding Association Test () [9], and Mean Average Cosine Distance () [13]. We first explain how these measures are supposed to work. Then we argue that they are problematic for various reasons—the key one being that by pre-averaging data they manufacture false confidence, which we illustrate in terms of simulations showing that the measures often suggest the existence of bias even if by design it is non-existent in a simulated dataset. We propose to replace them with a Bayesian data analysis, which not only provides more modest and realistic assessment of the uncertainty involved, but in which hierarchical models allow for inspection at various levels of granularity. Once we introduce the method, we apply it to multiple word embeddings and results of supposed debiasing, putting forward some general observations that are not exactly in line with the usual picture painted in terms of or . Most of the problems that we point out generalize to any existing approach that focuses on chasing a single numeric metric of bias. (1) They treat the results of pre-averaging as raw data in statistical significance tests, which in this context is bound to overestimate significance. We show similar results can easily be obtained when sampling from null models with no bias. 
(2) The word list sizes and sample sizes used in the studies are usually small,[Depending on a list for [9] the range for protected words is between 13 and 100, and for attributes between 16 and 25; for [13] the range for protected words is between 14 and 18, and for attributes between 11 and 25.] (3) Many studies do not use any control predicates, such as random neutral words or neutral human predicates for comparison. On the constructive side, we develop and deploy our method, and the results are, roughly, as follows. (A) Posterior density intervals are fairly wide and the average differences between associated, different and neutral human predicates, are not very large. (B) A preliminary inspection suggests that the desirability of changes obtained by the usual debiasing methods is debatable. In Section <ref> we describe the two key measures discussed in this paper, and , explaining how they are calculated and how they are supposed to work. In Section <ref> we first argue in Subsection <ref>, that it is far from clear how results given in terms of or are to be interpreted. Second, in Subsection <ref> we explain the statistical problems that arise when one uses pre-averaged data in such contexts, as these measures do. In Section <ref> we explain the alternative Bayesian approach that we propose. In Section <ref> we elaborate on the results that it leads to, including a somewhat skeptical view of the efficiency of debiasing methods, discussed in Subsection <ref>. Finally, in Section <ref> we spend some time placing our results in the ongoing discussions.[Disclaimer: throughout the paper we will be mentioning and using word lists and stereotypes we did not formulate, which does not mean we condone any judgment made therein or underlying a given word selection. For instance, the Gender dataset does not recognize non-binary categories, and yet we use it without claiming that such categories should be ignored.] two-measures-of-bias-weat-and-mac § TWO MEASURES OF BIAS: WEAT AND MAC The underlying intuition is that if a particular harmful stereotype is learned in a particular embedding, then certain groups of words will be systematically closer to (or further from) each other. This gives rise to the idea of protected groups—for example, in guiding online search completion or recommendation, female words might require protection in that they should not be systematically closer to stereotypically female job names, such as “nurse”, “librarian”, “waitress”, and male words require protection in that they should not be systematically closer to toxic masculinity stereotypes, such as “tough”, “never complaining” or “macho”.[However, for some research-related purposes, such as the study of stereotypes across history [3], embeddings which do not protect certain classes may also be useful.] The key role in the measures to be discussed is played by the notion of cosine distance (or, symmetrically, by cosine similarity). These are defined as follows:[Here, “-” stands for point-wise difference, “·” stands for the dot product operation, and ‖ a‖ = √((a · a)).] ^ ,[Note that this terminology is slightly misleading, as mathematically cosine distance is not a distance measure, because it does not satisfy the triangle inequality, as generally 𝖼𝗈𝗌𝗂𝗇𝖾𝖣𝗂𝗌𝗍𝖺𝗇𝖼𝖾(A,C) ≰𝖼𝗈𝗌𝗂𝗇𝖾𝖣𝗂𝗌𝗍𝖺𝗇𝖼𝖾(A,B)+ 𝖼𝗈𝗌𝗂𝗇𝖾𝖣𝗂𝗌𝗍𝖺𝗇𝖼𝖾(B,C). We will keep using this mainstream terminology.] Sim𝖼𝗈𝗌𝗂𝗇𝖾𝖲𝗂𝗆𝗂𝗅𝖺𝗋𝗂𝗍𝗒(A,B) = A · B/‖ A ‖ ‖ B ‖ Distance𝖼𝗈𝗌𝗂𝗇𝖾𝖣𝗂𝗌𝗍𝖺𝗇𝖼𝖾(A,B) = 1 - 𝖼𝗈𝗌𝗂𝗇𝖾𝖲𝗂𝗆𝗂𝗅𝖺𝗋𝗂𝗍𝗒(A,B). One of the first measures of bias has been developed in [1]. 
The general idea is that a certain topic is associated with a vector of real numbers (the topic “direction”), and the bias of a word is investigated by considering the projection of its corresponding vector on this direction. For instance, in [1], the gender direction 𝗀𝖽 is obtained by taking the differences of the vectors corresponding to ten different gendered pairs (such as she - he or girl - boy), and then identifying their principal component.[Roughly, the principal component is the vector obtained by projecting the data points on their linear combination in a way that maximizes the variance of the projections.] The gender bias of a word w is then understood as w's projection on the gender direction: w⃗· gd (which, after normalizing by dividing by ‖ w ‖ ‖𝗀𝖽‖, is the same as cosine similarity). Given a list N of supposedly gender neutral words,[We follow the methodology used in the debate in assuming that there is a class of words identified as more or less neutral, such as ballpark, eat, walk, sleep, table, whose average similarity to the gender direction (or other protected words) is around 0. See our list in Appendix <ref> and a brief discussion in Subsection <ref>.] and the gender direction 𝗀𝖽, the direct gender bias is defined as the average cosine similarity of the words in N from 𝗀𝖽 (c is a parameter determining how strict we want to be): 𝖽𝗂𝗋𝖾𝖼𝗍𝖡𝗂𝖺𝗌_𝖼(𝖭,𝗀𝖽) = ∑_w∈ N|𝖼𝗈𝗌(w⃗,𝗀𝖽)|^c/| N | The use of projections in bias estimation has been criticized for instance in [5], where it is pointed out that while a higher average similarity to the gender direction might be an indicator of bias with respect to a given class of words, it is only one possible manifestation of it, and reducing the cosine similarity to such a projection may not be sufficient to eliminate bias. For instance, “math” and “delicate” might be equally similar to a pair of opposed explicitly gendered words (she, he), while being closer to quite different stereotypical attribute words (such as scientific or caring). Further, it is observed in [5] that most word pairs retain similarity under debiasing meant to minimize projection-based bias.[In [1] another method that involves analogies and their evaluations by human users on Mechanical Turk is also used. We do not discuss this method in this paper, see its criticism in [18].] A measure of bias in word embeddings that does not proceed by identifying bias directions (such as a gender vector), the Word Embedding Association Test (), has been proposed in [9]. The idea here is that the bias between two sets of target words, X and Y (we call them protected words), should be quantified in terms of the cosine similarity between the protected words and attribute words coming from two sets of stereotype attribute words, A and B (we will call them attributes). For instance, X might be a set of male names, Y a set of female names, A might contain stereotypically male-related, and B stereotypically female-related career words. The association difference for a particular word t (belonging to either X or Y) is: 𝗌(t,A,B) = ∑_a∈ A𝖼𝗈𝗌(t,a)/| A| - ∑_b∈ B𝖼𝗈𝗌(t,b)/| B| then, the association difference between A a B is: 𝗌(X,Y,A,B) = ∑_x∈ X𝗌(x,A,B) - ∑_y∈ Y𝗌(y,A,B) The intention is that large values of s scores suggest systematic differences between how X and Y are related to A and B, and therefore are indicative of the presence of bias. The authors use it as a test statistic in some tests,[Note their method assumes X and Y are of the same size.] 
and the final measure of effect size, , is constructed by taking means of these values and standardizing: 𝖶𝖤𝖠𝖳(A,B) = μ({𝗌(x,A,B)}_x∈ X) -μ({𝗌(y,A,B)}_y∈ Y) /σ({𝗌(w,A,B)}_w∈ X∪ Y) is inspired by the Implicit Association Test (IAT) [19] used in psychology, and in some applications it uses almost the same word sets, allowing for a prima facie sensible comparison with bias in humans. In [9] the authors argue that significant biases—thus measured— similar to the ones discovered by IAT can be discovered in word embeddings. In [12] the methodology is extended to a multilingual and cross-lingual setting, arguing that using Euclidean distance instead of cosine similarity does not make much difference, while the bias effects vary greatly across embedding models.[Interestingly, with social media-text trained embeddings being less biased than those based on Wikipedia.] A similar methodology is employed in [4]. The authors employ word embeddings trained on corpora from different decades to study the shifts in various biases through the century.[Strictly speaking, these authors use Euclidean distances and their differences, but the way they take averages and averages thereof is analogous, and so what we will have to say about pre-averaging leading to false confidence applies to this methodology as well.] Here is an example of calculations for Figure <ref>: s_1 = s(he,A,B) = (.6+.7)/2 - (.2+.1)/2 = .65-.15= .5 s_2 = s(man,A,B) = .3 s_3 = s(woman,A,B) = -.6 s_4 = s(she, A, B) = -.3 𝖶𝖤𝖠𝖳(A,B) = (s_1+s_2)/2 - (s_3+s_4)/2/sd({s_1,s_2,s_3,s_4})≈ 1.93 has been developed to investigate biases corresponding to a pair of supposedly opposing stereotypes, so the question arises as to how to generalize the measure to contexts in which biases with respect to more than two stereotypical groups are to be measured. Such a generalization can be found in [13]. The authors introduce Mean Average Cosine distance () as a measure of bias. Let T = {t_1, …, t_k} be a set of protected words, and let each A_j∈𝒜 be a set of attributes stereotypically associated with a protected word where 𝒜. For instance, when biases related to religion are to be investigated, they use a dataset of the format illustrated in Table <ref>. The measure is defined as follows: 𝗌(t, A_j) = 1/| A_j|∑_a∈ A_j𝖼𝗈𝗌𝗂𝗇𝖾𝖣𝗂𝗌𝗍𝖺𝗇𝖼𝖾(t,a) 𝖬𝖠𝖢(T,𝒜) = 1/| T | |𝒜|∑_t ∈ T ∑_A_j ∈𝒜𝗌(t,A_j) That is, for each protected word t∈ T, and each attribute set A_j, they first take the mean of distances for this protected word and all attributes in a given attribute class, and then take the mean of thus obtained means for all the protected words and all the protected classes.[The authors' code is available through their github repository at <https://github.com/TManzini/DebiasMulticlassWordEmbedding>.] An example of calculations for the situation depicted in Figure <ref> is as follows: s_1 = s(muslim,A_1) = 𝖼𝗈𝗌(muslim,dirty)+𝖼𝗈𝗌(muslim,terrorist)/2 s_2 = s(muslim,A_2) = 𝖼𝗈𝗌(muslim,familiar)+𝖼𝗈𝗌(muslim,conservative)/2 ⋮ 𝖬𝖠𝖢(T,A) = 𝗆𝖾𝖺𝗇({s_i | i ∈ 1, …, k}) Notably, the intuitive distinction between different attribute sets plays no real role in the calculations. Equally well one could calculate the mean distance of muslim to all the predicates, mean distance of christian to all the predicates, the mean distance of jew to all the predicates, and then to take the mean of these three means. 
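Both quantities are easy to compute directly from an embedding; the following numpy sketch (ours, with toy inputs rather than real word vectors) implements s(t,A,B), the WEAT effect size and MAC, and checks the standardisation step against the worked example above: the four s-values .5, .3, -.6, -.3 give an effect size of roughly 1.9 when the population standard deviation is used.

```python
# Sketch (ours) of WEAT and MAC computed from raw vectors; toy inputs only.
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def s_assoc(t, A, B):
    # s(t, A, B): mean similarity to the A attributes minus mean similarity to the B attributes
    return np.mean([cos(t, a) for a in A]) - np.mean([cos(t, b) for b in B])

def weat_effect_size(X, Y, A, B):
    sx = [s_assoc(x, A, B) for x in X]
    sy = [s_assoc(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)   # population sd of all s-values

def mac(T, attribute_sets):
    # mean, over protected words and attribute sets, of the mean cosine *distance*
    vals = [np.mean([1 - cos(t, a) for a in A_j]) for t in T for A_j in attribute_sets]
    return np.mean(vals)

# standardisation step checked against the worked example above
s_vals = np.array([0.5, 0.3, -0.6, -0.3])
print((s_vals[:2].mean() - s_vals[2:].mean()) / s_vals.std())   # ~1.9

# toy usage with random 50-dimensional "embeddings"
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 50)); Y = rng.normal(size=(8, 50))
A = rng.normal(size=(8, 50)); B = rng.normal(size=(8, 50))
print(weat_effect_size(X, Y, A, B), mac(X, [A, B]))
```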
Having introduced the measures, first, we will introduce a selection of general problems with this approach, and then we will move on to more specific but important problems related to the fact that the measures take averages and averages of averages. Once this is done, we move to the development of our Bayesian alternative and the presentation of its deployment. challenges-to-cosine-based-bias-metrics § CHALLENGES TO COSINE-BASED BIAS METRICS interpretability-issues §.§ Interpretability issues Table <ref> contains an example of scores (and p values, we explain how these are obtained in Subsection <ref>) before and after deploying two debiasing methods to the Reddit embedding, where the score is calculated using the Religion word lists from [13]. For our purpose the details of the debiasing method are not important: what matters is that is used in the evaluation of these methods. The first question we should ask is whether the initial values lower than 1 indeed are indicative of the presence of bias? Thinking abstractly, 1 is the ideal distance for unrelated words. But in fact, there is some variation in distances, which might lead to non-biased lists also having scores smaller than 1. How much smaller? What may attract attention is the fact that the value of cosine distance in “Biased” category is already quite high (i.e. close to 1) even before debiasing. High cosine distance indicates low cosine similarity between values. One could think that the average cosine similarity equal to approximately 0.141 is not large enough to claim the presence of a bias to start with. The authors, though, still aim to mitigate it by making the distances involved in the calculations even larger. The question is, on what basis is this small similarity still considered as proof of the presence of bias, and whether these small changes are meaningful. The problem is that the original paper did not employ any control group of neutral attributes for comparison to obtain a more realistic gauge on how to understand values. Later on, in our approach, we introduce such control word lists. One of them is a list of words we intuitively considered neutral. Moreover, it might be the case that words that have to do with human activities in general, even if unbiased, are systematically closer to the protected words than merely neutral words. This, again, casts doubt on whether comparing to the abstractly ideal value of 1 is a methodologically sound idea. For this reason, we also use a second list with intuitively non-stereotypical human attributes.[See Appendix <ref> for the word lists.] Another important observation is that calculations do not distinguish whether a given attribute is associated with a given protected word, simply averaging across all such groups. Let us use the case of religion-related stereotypes to illustrate. The full lists from [13] can be found in Appendix <ref>. In the original paper, words from all three religions were compared against all of the stereotypes. No distinction between cases in which the stereotype is associated with a given religion, as opposed to the situation in which it is associated with another one, is made. For example, the protected word jew is supposed to be stereotypically connected with the attribute greedy, while from the protected word quran the attribute greedy comes from a different stereotype, and yet the distances between these pairs contribute equally to the final score. 
This is problematic, as not all of the stereotypical words have to be considered harmful for all religions. To avoid the masking effect, one should pay attention to how protected words and attributes are paired with stereotypes. In Figures (<ref>-<ref>) we look at the empirical distributions, while paying attention to such divisions. The horizontal lines represent the values of 1 - 𝖬𝖠𝖢 (that is, we now talk in terms of cosine similarity rather than cosine distance) that the authors considered indicative of bias for stereotypes corresponding to given word lists. For instance, in religion, was .859, which was considered a sign of bias, so we plot 0± (1-.859)≈ .14 lines around similarity = 0 (that is, distance = 1). Notice that most distributions are quite wide, and the proportions of even neutral or human neutral words with similarities higher than the value of 1 - 𝖬𝖠𝖢 deserving debiasing according to the authors are quite high. Another issue to consider is the selection of attributes for bias measurement. The word lists used in the literature are often fairly small (5-50). The papers in the field do employ statistical tests to measure the uncertainty involved and do make claims of statistical significance. Yet, we will later on argue that these method are not proper for the goal at hand. By using Bayesian methods we will show that a more appropriate use of statistical methods leads to estimates of uncertainty which suggest that larger word lists would be advisable. To avoid the problem brought up in this subsection, we employ control groups and in line with Bayesian methodology, use posterior distributions and highest posterior density intervals instead of chasing single-point metrics based on pre-averaged data. Before we do so, we first explain why pre-averaging and chasing single-number metrics is a sub-optimal strategy. problems-with-pre-averaging §.§ Problems with pre-averaging The approaches we have been describing use means of mean average cosine similarities to measure similarity between protected words and attributes coming from harmful stereotypes. But once we take a look at the individual values, it turns out that the raw data variance is rather high, and there are quite a few outliers and surprisingly dissimilar words. This problem becomes transparent when we examine the visualizations of the individual cosine distances, following the idea that one of the first steps in understanding data is to look at it. Let's start with inspecting two examples of such visualizations in Figures <ref> and <ref> (we also include neutral and human predicates to make our point more transparent). Again, we emphasize that we do not condone the associations which we are about to illustrate. As transparent in Figures <ref> and <ref>, for the protected word , the most similar attributes tend to be the ones associated with it stereotypically, but then words associated with other stereotypes come closer than neutral or human predicates. For the protected word , the situation is even less as expected: the nearest attributes are human attributes, and all there seems to be no clear pattern to the distances to other attributes. The general phenomenon that makes us skeptical about running statistical tests on pre-averaged data is that raw datasets of different variance can result in the same pre-averaged data and consequently the same single-number metric. In other words, a method that proceeds this way is not very sensitive to the real sample variance. 
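A minimal numerical illustration of this insensitivity (ours, with fabricated toy numbers rather than real embedding distances): two sets of per-pair cosine distances with the same per-word means but very different spreads produce exactly the same MAC-style mean of means, so the single-number metric cannot tell them apart.

```python
# Two toy "raw" distance samples with equal row means but very different variance
# yield identical means-of-means, illustrating the information lost by pre-averaging.
import numpy as np

rng = np.random.default_rng(3)
n_words, n_attrs = 10, 25

tight  = rng.normal(loc=0.9, scale=0.01, size=(n_words, n_attrs))   # low-variance distances
spread = rng.normal(loc=0.9, scale=0.20, size=(n_words, n_attrs))   # high-variance distances
# force each word's mean distance in `spread` to match the one in `tight`
spread = spread - spread.mean(axis=1, keepdims=True) + tight.mean(axis=1, keepdims=True)

per_word_tight  = tight.mean(axis=1)    # the pre-averaged s(t, A_j) values
per_word_spread = spread.mean(axis=1)

print(per_word_tight.mean(), per_word_spread.mean())   # identical MAC-style scores
print(tight.std(), spread.std())                       # very different raw variation
```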
Let us illustrate how this problem arises in the context of WEAT. Once a particular s(X,Y,A,B) is calculated, the question arises as to whether a value that high could have arisen by chance. To address the question, each s(X,Y,A,B) is used in the original paper to generate a p-value by bootstrapping. The p-value is the frequency of how often it is the case that s(X_i,Y_i,A,B)>s(X,Y,A,B) for sampled equally sized partitions X_i, Y_i of X∪ Y. The WEAT effect size is then computed by standardizing the difference in means of means by dividing by the standard deviation of means, see equation (<ref>). The effect sizes reported by [9] for lists of words for which the embeddings are supposedly biased range from 1.81 to 2.06, and the reported p-values are in the range of 10^-7-10^-2, with one exception for Math vs Arts, where it is .018. The question is, are those results meaningful? One way to answer this question is to think in terms of null generative models. If the words actually are samples from two populations with equal means, how often would we see scores in this range? How often would we reach the p-values that the authors reported? Imagine there are two groups of protected words, each of size 8, and two groups of stereotypical attributes of the same size.[16 is the sample size used in the WEAT7 word list, which is not much different from the other sample sizes in word lists used by [9].] Each such collection of samples, as far as our question is concerned, is equivalent to a sample of 16^2 cosine distances. Further, imagine there really is no difference between these groups of words and the model is in fact null. That is, we draw the cosine distances from the 𝖭𝗈𝗋𝗆𝖺𝗅(0,.08) distribution.[.08 is approximately the empirical standard deviation observed in fairly large samples of neutral words.] In Figure <ref> we illustrate one iteration of the procedure. We draw one such sample of size 16^2. Then we list all possible ways to split the 16 words into two equal sets (each such split is one bootstrapped sample) and for each of them we calculate the test statistic and the effect size. What are the resulting distributions of scores and what p-values do they lead to? What are the resulting effect sizes for each bootstrapped sample, and how often can we get an effect size as large as the ones reported in the original paper? In the bootstrapped samples we would expect low test statistics and low effect sizes: after all, these are just random permutations of random distances all of which are drawn from the same null distribution. Let's take a look at one such bootstrapped sample. On purpose, we picked a rather unusual one: the observed test statistic is 0.39 and the effect size is 1.27. The bootstrapped distributions of the test statistics and effect sizes are illustrated in Figure <ref>, together with this particular example. Quite notably, both (two-sided) p values for our example are rather low (Figure <ref>). These facts might suggest that we ended up with a situation where “bias” is present (albeit due to random noise). The reason we picked it is that it is an example of a word list that ends up with a relatively low p-value and a relatively unusual effect size; nevertheless, closer inspection shows that even for a word list with such properties there is no clear reason to think that bias is present. At this point, we might think that we just stumbled into a bootstrapped sample that randomly happened to display strong bias.
We decide to double-check this by visual inspection, expecting exactly this: a strong, clearly visible bias (Figure <ref>). In fact, while there might be some outliers here and there, saying that there is a clear bias in which one group is systematically closer to the A attributes than the other is definitely a stretch. What happened? In these calculations, means are taken twice. The per-word values are themselves means (over attribute sets), and then means of these per-word values are compared between groups. Statistical troubles start when we run statistical tests on sets of means, for at least two reasons.

* By pre-averaging data we throw away information about sample sizes. Think about proportions: 10 out of 20 and 2 out of 4 give the same mean, but you obtain more information by making the former observation than by making the latter. And especially in this context, in which the word lists are not huge, sample sizes should matter.

* When we pre-average, we disregard variation, and therefore pre-averaging tends to manufacture false confidence. Group means display less variation than the raw data points, and the standard deviation of a set of means of sets of means is bound to be lower than the original standard deviation of the raw data. Now, if you calculate your effect size by dividing by the pre-averaged standard deviation, you are quite likely to get something that looks like a strong effect size, but the results of your calculations might not track anything interesting.

Let us think again about the question that we are ultimately interested in. Are the X terms systematically closer to (further from) the A attributes (B attributes) than the Y words? But now let's use the raw data points to try to answer these questions. To start with, let us run two quick t-tests to gauge what the raw data illustrated in Figure <ref> tell us. First, distances to A attributes for X words and Y words. Well, the result is, strictly speaking, statistically significant. The p-value is 0.02 (more than ten times higher than the p-value obtained by the bootstrapping procedure), so the sample is in some sense unusual. But the 95% confidence interval for the difference in means is [.0052, .061], clearly nothing that a reader would expect given that the calculated effect size seemed quite large. How about the distances to the B attributes? Here the p-value is .22 and the 95% confidence interval is [-0.03, .009], even less of a reason to think a bias is present. The difficulties are exacerbated by the fact that the statistical tests are based on bootstrapping from relatively small data sets, which is quite likely to underestimate the population variance. To make our point clear, let us avoid bootstrapping and work with the null generative model with 𝖭𝗈𝗋𝗆(0,.08) for both word groups. We keep the sizes the same: we have eight protected words in each group, sixteen in total, and for each we randomly draw 8 distances from hypothetical A attributes and 8 distances from hypothetical B attributes. We calculate the test statistic and the effect size the way [9] did. We do this 10000 times and look at what the distributions of these values are under the null model with a realistic, empirically motivated raw-data standard deviation. The first observation is that the supposedly large effect size we obtained is not that unusual even assuming a null model. Around 38% of samples result in a score at least as extreme. This illustrates the point that it does not constitute strong evidence of bias. (A minimal sketch of this simulation is given below.)
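The simulation just described can be sketched as follows. This is our own illustrative code, not the original authors' implementation; the group sizes and the raw standard deviation of 0.08 are taken from the text, while the exact form of the statistic and effect size is our reconstruction from the description above, so the resulting percentages need not match those quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n_words, n_attrs, sd = 8, 8, 0.08   # group sizes and raw sd taken from the text

def weat_like(dist_A, dist_B):
    """WEAT-style statistic and effect size as we reconstruct them here;
    the authors' exact implementation may differ."""
    s = dist_A.mean(axis=1) - dist_B.mean(axis=1)   # per-word association values
    s_X, s_Y = s[:n_words], s[n_words:]
    stat = s_X.sum() - s_Y.sum()
    effect = (s_X.mean() - s_Y.mean()) / s.std(ddof=1)
    return stat, effect

stats, effects = [], []
for _ in range(10_000):
    # Null model: every distance is drawn from the same Normal(0, 0.08).
    dist_A = rng.normal(0.0, sd, size=(2 * n_words, n_attrs))
    dist_B = rng.normal(0.0, sd, size=(2 * n_words, n_attrs))
    t, e = weat_like(dist_A, dist_B)
    stats.append(t)
    effects.append(e)

stats, effects = np.abs(stats), np.abs(effects)
print("null 95th percentile of |effect size|:", np.percentile(effects, 95))
print("share of null samples with |effect size| >= 1.27:", (effects >= 1.27).mean())
```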
The second observation is that the distribution of the test-statistic values is much narrower, which means that if we use it to calculate p-values, it is not too difficult to obtain a supposedly significant test statistic which nevertheless does not correspond to anything interesting happening in the data set. We have seen that seemingly high effect sizes might arise even if the underlying processes actually have the same mean. The uncertainty that results from taking the raw data-point variance into account is greater than what the low p-values obtained by treating means, or means of means, as data points would suggest. In this section we discussed the performance of the WEAT measure, but since the measure of [13] is a generalization thereof, including the method of running statistical tests on pre-averaged data, our remarks apply, mutatis mutandis. What is the alternative? As we already emphasized: focusing on the real underlying question and trying to answer it with a statistical analysis of the raw data, using meaningful control groups to ensure interpretability. Moreover, since the data sets are not too large and since multiple evaluations are to be made, we will pursue this method from the Bayesian perspective. Now we have to describe it.

§ A BAYESIAN APPROACH TO COSINE-BASED BIAS

§.§ Model construction

Bayesian data analysis takes prior probability distributions, a mathematical model structure, and the data, and returns posterior probability distributions over the parameters of interest, thus capturing our uncertainty about their actual values. One important difference between such a result and the result of a classical statistical analysis is that classical confidence intervals (CIs) have a rather complicated and somewhat confusing interpretation, which has little to do with the posterior probability distribution.[Here are a few usual problems. CIs are often mistakenly interpreted as providing the probability that a resulting confidence interval contains the true value of a parameter. CIs also bring confusion with regard to precision: it is a common mistake to interpret narrow intervals as corresponding to more precise knowledge. Another fallacy is to associate CIs with likelihood and to state that values within a given interval are more probable than the ones outside it. The theory of confidence intervals does not support the above interpretations. CIs should be plainly interpreted as the result of a certain procedure (there are many ways to obtain CIs from a given set of data) which, in the long run, produces intervals containing the true value a fixed proportion of the time. For a nice survey and explanation of these misinterpretations, see [17]. For a psychological study of the occurrence of such misinterpretations, see [8]. In this study, 120 researchers and 442 students were asked to assess the truth value of six false statements involving different interpretations of a CI. Both researchers and students endorsed, on average, more than three of these statements.] In fact, Bayesian highest posterior density intervals (HPDIs, the narrowest intervals containing a certain ratio of the area under the curve) and CIs end up being numerically the same only if the prior probabilities are uniform. This illustrates that (1) classical analysis is unable to incorporate non-trivial priors, and (2) it is therefore more susceptible to over-fitting, unless regularization (equivalent to a more straightforward Bayesian approach) is used.
In contrast with CIs, posterior distributions are easily interpretable and have direct relevance to the question at hand. Moreover, Bayesian data analysis is better at handling hierarchical models and small datasets, which is exactly what we will be dealing with. In standard Bayesian analysis, the first step is to understand the data, think hard about the underlying process, and select potential predictors and the outcome variable. The next step is to formulate a mathematical description of the generative model of the relationships between the predictors and the outcome variable. Prior distributions must then be chosen for the parameters used in the model. Next, Bayesian inference must be applied to find posterior distributions over the possible parameter values. Finally, we need to check how well the posterior predictions reflect the data with a posterior predictive check. In our analysis, the outcome variable is the cosine distance between protected words and attribute words. The predictor is a factor determining whether a given attribute word is a neutral word, a human predicate, an attribute stereotypically associated with the protected word, or an attribute coming from a different stereotype connected with another protected word. The idea is really straightforward: if bias is present in the embedding, distances to associated attribute words should be systematically lower than distances to other attribute words. Furthermore, conceptually there are two levels of analysis in our approach (see Figure <ref>). On the one hand, we are interested in the general question of whether related attributes are systematically closer across the dataset. On the other hand, we are interested in a more fine-grained picture of the role of the predictor for particular protected words. Learning in hierarchical Bayesian models involves using Bayesian inference to update the parameters of the model. This update is based on the observed data, and estimates are made at different levels of the data hierarchy. We use hierarchical Bayesian models in which we simultaneously estimate parameters at the protected-word level and at the global level, assuming that all lower-level parameters are drawn from global distributions. Such models can be thought of as incorporating adaptive regularization, which avoids overfitting and leads to improved estimates for unbalanced datasets (and the datasets we need to use are unbalanced). To be more specific, the underlying mathematical model is as follows. First, we assume that the distances are normally distributed: 𝖽𝗂𝗌𝗍𝖺𝗇𝖼𝖾_𝗂 ∼𝖭𝗈𝗋𝗆(μ_i,σ_i) Second, for each particular protected word 𝗉𝗐 there are four parameters to be estimated: its mean distance to associated stereotypes, a[pw]; its mean distance to attributes coming from different stereotypes, d[pw]; its mean distance to human attributes, h[pw]; and its mean distance to neutral attributes, n[pw]: μ_i = d_𝗉𝗐[𝗂]×𝖽𝗂𝖿𝖿𝖾𝗋𝖾𝗇𝗍_i + a_𝗉𝗐[𝗂]×𝖺𝗌𝗌𝗈𝖼𝗂𝖺𝗍𝖾𝖽_i + h_𝗉𝗐[𝗂]×𝗁𝗎𝗆𝖺𝗇_i + n_𝗉𝗐[𝗂]×𝗇𝖾𝗎𝗍𝗋𝖺𝗅_i where 𝖽𝗂𝖿𝖿𝖾𝗋𝖾𝗇𝗍, 𝖺𝗌𝗌𝗈𝖼𝗂𝖺𝗍𝖾𝖽, 𝗁𝗎𝗆𝖺𝗇 and 𝗇𝖾𝗎𝗍𝗋𝖺𝗅 are binary indicator variables. This completes our description of the simple underlying process that we would like to investigate. Now the priors and the hierarchy. We assume that all the a parameters come from one distribution, normal around a higher-level parameter a̅, and similarly for the other three groups of parameters (a compact code sketch of the full model is given below).
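For concreteness, here is a minimal sketch of this hierarchical model written with the PyMC probabilistic-programming library. This is our own illustration rather than the code actually used in the analysis; the data arrays are placeholders, the variable names are ours, the residual standard deviation is a simplification on our part, and the prior values are the ones introduced in the next paragraph.

```python
import numpy as np
import pymc as pm

# Hypothetical data layout (ours): one row per (protected word, attribute) pair.
pw_idx     = np.array([0, 0, 1, 1])              # index of the protected word
dist       = np.array([0.85, 1.02, 0.97, 1.05])  # observed cosine distances
associated = np.array([1, 0, 0, 0])              # 0/1 indicators of attribute type
different  = np.array([0, 1, 0, 0])
human      = np.array([0, 0, 1, 0])
neutral    = np.array([0, 0, 0, 1])
n_pw = 2

with pm.Model():
    # Global means and spreads for the four attribute types (priors from the text).
    a_bar = pm.Normal("a_bar", mu=1.0, sigma=0.3)
    d_bar = pm.Normal("d_bar", mu=1.0, sigma=0.3)
    h_bar = pm.Normal("h_bar", mu=1.0, sigma=0.3)
    n_bar = pm.Normal("n_bar", mu=1.0, sigma=0.3)
    sd_a = pm.Exponential("sd_a", lam=2.0)
    sd_d = pm.Exponential("sd_d", lam=2.0)
    sd_h = pm.Exponential("sd_h", lam=2.0)
    sd_n = pm.Exponential("sd_n", lam=2.0)
    # Protected-word-level coefficients, partially pooled towards the global means.
    a = pm.Normal("a", mu=a_bar, sigma=sd_a, shape=n_pw)
    d = pm.Normal("d", mu=d_bar, sigma=sd_d, shape=n_pw)
    h = pm.Normal("h", mu=h_bar, sigma=sd_h, shape=n_pw)
    n = pm.Normal("n", mu=n_bar, sigma=sd_n, shape=n_pw)
    mu = (a[pw_idx] * associated + d[pw_idx] * different
          + h[pw_idx] * human + n[pw_idx] * neutral)
    sigma = pm.Exponential("sigma", lam=2.0)  # residual sd (a simplifying assumption)
    pm.Normal("distance", mu=mu, sigma=sigma, observed=dist)
    idata = pm.sample()  # posterior draws; HPDIs can then be summarised with ArviZ
```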
Spelling this out, a_𝗉𝗐[𝗂] is the average distance of a given particular protected word to attributes stereotypically associated with it, while a̅ is the overall average distance of protected words to attributes associated with them.[For a thorough introduction to the concepts we are using, see [11,15].] d_𝗉𝗐[𝗂]∼𝖭𝗈𝗋𝗆(d̅, σ_d) a_𝗉𝗐[𝗂]∼𝖭𝗈𝗋𝗆(a̅, σ_a) h_𝗉𝗐[𝗂]∼𝖭𝗈𝗋𝗆(h̅, σ_h) n_𝗉𝗐[𝗂]∼𝖭𝗈𝗋𝗆(n̅, σ_n) According to our priors, the group means a̅, d̅, h̅ and n̅ all come from one normal distribution with mean equal to 1 and standard deviation equal to .3. The standard deviations σ_a, σ_d, σ_h and σ_n, which are also estimated, come, according to our prior, from one exponential distribution with rate parameter equal to 2. Our priors are slightly skeptical. They reflect our knowledge and intuitions about the probable distribution of the cosine distances in the data. We know that cosine distances lie in the range 0-2, and we expect two randomly chosen vectors from the embedding to have rather small similarity, so we expect the distances to be centered around 1. However, we use a rather wide standard deviation (.3) to easily account for cases where there is actually much higher similarity between two vectors (especially in cases where the embedding is supposed to be biased). Our priors for the standard deviations are also fairly weak. d̅, a̅, h̅, n̅ ∼𝖭𝗈𝗋𝗆(1, .3) σ_d, σ_a, σ_h, σ_n ∼𝖤𝗑𝗉(2)

§.§ Posterior predictive check

A posterior predictive check is a technique used to evaluate the fit of a Bayesian model by comparing its predictions with observed data. The underlying principle is to generate simulated data from the posterior distribution of the model parameters and compare them with the observed data. If the model is a good fit to the data, the simulated data should resemble the observed data. In Figure <ref> we illustrate a posterior predictive check for one corpus (Reddit) and one word list. The remaining posterior predictive checks are in section <ref>.

§ RESULTS AND DISCUSSION

§.§ Observations

In brief, despite one-number metrics suggesting otherwise, our Bayesian analysis reveals that, insofar as the short word lists usually used in related research are concerned, there usually are no strong reasons to claim the presence of systematic bias. Moreover, comparison between the groups (including control word lists) leads to the conclusion that the effect sizes (that is, the absolute differences between cosine distances between groups) tend to be rather small, with few exceptions. Further, the choice of protected words is crucial, as there is a lot of variance at the level of individual protected words. In a bit more detail, the visualizations in Appendix <ref> show that the situation is more complicated than merely looking at one-number summaries might suggest. Note that the axes sometimes use different scales to increase visibility. To start with, let us look at the association-type level coefficients (illustrated in the top parts of the plots). Depending on the corpus used and the word class, the posterior densities vary considerably. Well aware that this is a crude approximation, let us compare the HPDIs and whether they overlap for different attribute groups.

* In Weat 7 (Reddit) there is no reason to think there are systematic differences between cosine distances (recall that words from Weat 7 were mostly not available in other embeddings).
* In Weat 1 (Google, Glove and Reddit) associated words are somewhat closer, but the cosine-distance differences from neutral words are very low, and, surprisingly, it is human attributes, not neutral predicates, that are systematically the furthest.

* In Religion (Google, Glove, Reddit) and Race (Google, Glove), the associated attributes are not systematically closer than attributes belonging to different stereotypes, and the difference between neutral and human predicates is rather low, if noticeable at all. The situation is interestingly different in Race (Reddit), where both human and neutral predicates are systematically further away than associated and different attributes; but even then, there is no clear difference between associated and different attributes.

* For Gender (Google, Glove), despite the superficially binary nature of the category, associated and opposite attributes tend to lie at more or less the same distances, much closer than neutral words (but not closer than human predicates in Glove). Reddit is an extreme example: both associated and opposite attributes are much closer than neutral and human ones (around .6 vs. .9), but even then, there seems to be no reason to think that cosine distances to associated predicates are much different from distances to opposite predicates.

Moreover, when we look at particular protected words, the situation is even less straightforward. We will just go over a few notable examples, leaving the visual inspection of the results for other protected words to the reader. One general phenomenon is that, as we already pointed out, the word lists are quite short, which contributes to the large uncertainty involved in some cases.

* For some protected words the different attributes are somewhat closer than the associated attributes.

* For some protected words, associated and different attributes are closer than neutral attributes, but so are human attributes.

* In some cases, associated attributes are closer, but so are neutral and human predicates, which illustrates that comparing the average cosine similarity to the theoretically expected value of 1, instead of comparing to neutral and human attributes, is misleading.

* The only group of protected words where differences are noticeable at the protected-word level is the Gender-related words, as in Gender (Google) and Gender (Reddit); note, however, that in the latter, for some words, the opposite attributes seem to be a little closer than the associated ones.

§.§ Rethinking debiasing

Bayesian analyses and visualizations thereof can also be handy when it comes to investigating the effect that debiasing has on the embedding space. In Figures <ref> and <ref> we see two visualizations depicting the differences in means with 89% highest posterior density intervals before and after applying debiasing (the remaining visualizations are in the Appendix).

* In Gender (Reddit), the minor differences between different and associated predicates end up being smaller. However, this is not achieved by any major change in the relative positions of associated and different predicates with respect to protected words, but rather by shifting them jointly together. The only protected word for which a major difference is noticeable is hers.
* In Religion (Reddit), debiasing aligns the general coefficients for all groups, all of them getting closer to where neutral words were prior to debiasing (this is true also for human predicates in general, which intuitively did not require debiasing). For some protected words, such as christian and jew, the proximity ordering between associated and different predicates has been reversed, and most of the distances shifted a bit towards 1 (sometimes even beyond, as for predicates associated with the word quran), but for most protected words the relative differences between the coefficients did not change much (for instance, there is no change in the way the protected word muslim is mistreated).

* For Race (Reddit), the general coefficients for different and associated predicates became aligned. However, most of the changes roughly preserve the structure of bias for particular protected words, with minor exceptions, such as making the proximities of different predicates for the protected words asian and asia much lower than those of associated predicates, which is the main factor responsible for the alignment of the general-level coefficients.

In general, debiasing might lead to smaller differences between the general-level coefficients for associated and different attributes. But that usually happens without any major change to the structure of the coefficients for protected words, with sporadic extreme and undesirable changes for some protected words, and usually with the side-effect of changing what happens with neutral and human predicates. We would not even be able to notice these phenomena had we restricted our attention to 𝖬𝖠𝖢 or WEAT scores only. To be able to diagnose and remove biases at the right level of granularity, we need to go beyond single-metric chasing. In Figures <ref>-<ref> we inspect the empirical distributions for the debiased embeddings. Comparing the results to the original embedding, one may notice that for the Religion group the neutral and human distributions have changed slightly. Before debiasing, 56% of the neutral and 55% of the human word lists were within the “correct” cosine-similarity boundaries. After debiasing, the values changed to 59% (for neutral) and 59% (for human). The different and associated word lists were more strongly influenced. The general shape of both distributions is less stretched. Before debiasing, 43% of the different word lists and 35% of the associated word lists were within the accepted boundaries. After the embedding manipulation, the percentage increased for both lists to 63%. The visualization for the Gender group shows almost no change for the neutral and human word lists before and after debiasing. The values for different and associated word lists are also barely affected by the embedding modification. In the Race group, the percentage within the boundaries for neutral and associated word lists has increased. The opposite happened for human and different word lists, where the percentage of “correct” cosine similarity dropped from 67% to 55% (human) and from 39% to 36% (different).

§ RELATED WORKS AND CONCLUSIONS

There are a few related papers, whose detailed discussion goes beyond the scope of this paper:

* [21] employ Bayesian methods to estimate uncertainty in NLP tasks, but they apply their Bayesian Neural Network-based method to sentiment analysis and named entity recognition, not to bias.
* [2] correctly argues that a bias estimate should not be expressed as a single number without taking into account that the estimate is made using a sample of data and therefore has intrinsic uncertainty. The author suggests using Bernstein bounds to gauge the uncertainty in terms of confidence intervals. We do not discuss this approach extensively, as we think that confidence intervals are quite problematic for several reasons, among them their confusing interpretation. We do not think that Bernstein bounds provide the best solution to the problem. Applying this method to the popular WinoBias dataset leads to the conclusion that more than 11903 samples are needed to claim a 95% confidence interval for a bias estimate. This number vastly exceeds the size of the existing word lists for bias estimation. We propose a more realistic Bayesian method. Our conclusion is still that the word lists are sometimes too small, but at least they allow for gauging uncertainty as we go on to improve our methodology and extend the lists gradually.

* [20] criticize some existing cosine-based bias metrics on the grounds that they do not satisfy some general formal principles, such as magnitude-comparability, and they propose a modification.

* [14] develop a generalization of WEAT meant to apply to sets of sentences, which basically applies the WEAT method to vector representations of sentences. The authors, however, still pre-average and play the game of finding a single-number metric, so our remarks apply.

* [7] introduce the Contextualized Embedding Association Test, meant to apply to dynamic (contextualized) word embeddings, and, importantly, develop methods for intersectional bias detection. The measure is a generalization of the WEAT method. The authors do inspect a distribution of effect sizes that arises from the consideration of various possible contexts, but they continue to standardize the difference in averaged means and use a single-number summary: the weighted mean of the effect sizes thus understood. The method, admittedly, deserves further evaluation, which goes beyond the scope of this paper.

To summarize, a Bayesian data analysis with hierarchical models of the cosine distances between protected words and both control-group words and stereotypical attributes provides a more modest and realistic assessment of the uncertainty involved. It reveals the complexity that is hidden when one instead chases the single-number bias metrics present in the literature. After introducing the method, we applied it to multiple word embeddings and to the results of supposed debiasing, putting forward some general observations that are not exactly in line with the usual picture painted in terms of single-number scores (and the problem generalizes to any approach that focuses on chasing a single numeric metric): the word-list sizes and sample sizes used in the studies are usually small; posterior density intervals are fairly wide; often the differences between associated, different, neutral, and human predicates are not very impressive. Also, a preliminary inspection suggests that the desirability of the changes obtained by the usual debiasing methods is debatable. The tools that we propose, however, allow for a more fine-grained and multi-level evaluation of bias and debiasing in language models without losing modesty about the uncertainties involved. The short, general, and somewhat disappointing lesson here is this: things are complicated. Instead of chasing single-number metrics, we should rather devote attention to more nuanced analysis.
§ REFERENCES

[1] Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. CoRR abs/1607.06520. Retrieved from <http://arxiv.org/abs/1607.06520>

[2] Kawin Ethayarajh. 2020. Is your classifier actually biased? Measuring fairness under uncertainty with Bernstein bounds. CoRR abs/2004.12332. Retrieved from <https://arxiv.org/abs/2004.12332>

[3] Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2017. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences 115 (November 2017). DOI: https://doi.org/10.1073/pnas.1720347115

[4] Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences 115, 16 (April 2018), E3635–E3644. DOI: https://doi.org/10.1073/pnas.1720347115

[5] Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, 609–614. DOI: https://doi.org/10.18653/v1/N19-1061

[6] Jonathan Gordon and Benjamin Durme. 2013. Reporting bias and knowledge acquisition. In AKBC 2013 - Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, Co-located with CIKM 2013, 25–30. DOI: https://doi.org/10.1145/2509558.2509563

[7] Wei Guo and Aylin Caliskan. 2021. Detecting emergent intersectional biases: Contextualized word embeddings contain a distribution of human-like biases. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, ACM. DOI: https://doi.org/10.1145/3461702.3462536

[8] Rink Hoekstra, Richard D. Morey, Jeffrey N. Rouder, and Eric-Jan Wagenmakers. 2014. Robust misinterpretation of confidence intervals. Psychonomic Bulletin & Review 21, 5 (October 2014), 1157–1164. DOI: https://doi.org/10.3758/s13423-013-0572-3

[9] Aylin Caliskan Islam, Joanna J. Bryson, and Arvind Narayanan. 2016. Semantics derived automatically from language corpora necessarily contain human biases. CoRR abs/1608.07187. Retrieved from <http://arxiv.org/abs/1608.07187>

[10] Gabbrielle Johnson. Forthcoming. Are algorithms value-free? Feminist theoretical virtues in machine learning. Journal of Moral Philosophy.

[11] John Kruschke. 2015. Doing Bayesian Data Analysis (second edition). Academic Press, Boston.

[12] Anne Lauscher and Goran Glavas. 2019. Are we consistently biased? Multidimensional analysis of biases in distributional word vectors. CoRR abs/1904.11783. Retrieved from <http://arxiv.org/abs/1904.11783>

[13] Thomas Manzini, Yao Chong Lim, Yulia Tsvetkov, and Alan W Black. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. Retrieved from <https://arxiv.org/abs/1904.04047>

[14] Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, 622–628. DOI: https://doi.org/10.18653/v1/N19-1063

[15] Richard McElreath. 2020. Statistical Rethinking: A Bayesian Course with Examples in R and Stan (2nd ed.). CRC Press. Retrieved from <http://xcelab.net/rm/statistical-rethinking/>

[16] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. DOI: https://doi.org/10.48550/ARXIV.1301.3781

[17] Richard Morey, Rink Hoekstra, Jeffrey Rouder, Michael Lee, and EJ Wagenmakers. 2015. The fallacy of placing confidence in confidence intervals. Psychonomic Bulletin & Review (September 2015).

[18] Malvina Nissim, Rik van Noord, and Rob van der Goot. 2020. Fair is better than sensational: Man is to doctor as woman is to doctor. Computational Linguistics 46, 2 (June 2020), 487–497. DOI: https://doi.org/10.1162/coli_a_00379

[19] Brian A. Nosek, Mahzarin R. Banaji, and Anthony G. Greenwald. 2002. Harvesting implicit group attitudes and beliefs from a demonstration web site. Group Dynamics: Theory, Research, and Practice 6, 1 (2002), 101–115. DOI: https://doi.org/10.1037/1089-2699.6.1.101

[20] Sarah Schröder, Alexander Schulz, Philip Kenneweg, Robert Feldhans, Fabian Hinder, and Barbara Hammer. 2021. Evaluating metrics for bias in word embeddings. Retrieved from <https://arxiv.org/abs/2111.07864>

[21] Yijun Xiao and William Yang Wang. 2018. Quantifying uncertainties in natural language processing tasks. CoRR abs/1811.07253. Retrieved from <http://arxiv.org/abs/1811.07253>
§ APPENDIX

§.§ Visualizations

(18 figures omitted.)

§.§ Posterior predictive checks

(4 figures omitted.)

§.§ Word lists

§.§.§ Lists used in previous research

The lists from [13] are available here:

* Gender: <https://github.com/TManzini/DebiasMulticlassWordEmbedding/blob/master/Debiasing/data/vocab/gender_attributes_optm.json>

* Race: <https://github.com/TManzini/DebiasMulticlassWordEmbedding/blob/master/Debiasing/data/vocab/race_attributes_optm.json>

* Religion: <https://github.com/TManzini/DebiasMulticlassWordEmbedding/blob/master/Debiasing/data/vocab/religion_attributes_optm.json>

The lists are:

* Gender: Pairs of protected attributes: [“he”, “she”], [“his”, “hers”], [“son”, “daughter”], [“father”, “mother”], [“male”, “female”], [“boy”, “girl”], [“uncle”, “aunt”] Lists of stereotypes per gender: man: [“manager”, “executive”, “doctor”, “lawyer”, “programmer”, “scientist”, “soldier”, “supervisor”, “rancher”, “janitor”, “firefighter”, “officer”], woman: [“secretary”, “nurse”, “clerk”, “artist”, “homemaker”, “dancer”, “singer”, “librarian”, “maid”, “hairdresser”, “stylist”, “receptionist”, “counselor”]

* Race: Sets of protected attributes: [“black”, “caucasian”, “asian”], [“african”, “caucasian”, “asian”], [“black”, “white”, “asian”], [“africa”, “america”, “asia”], [“africa”, “america”, “china”], [“africa”, “europe”, “asia”] Lists of stereotypes per race: “caucasian”: [“manager”, “executive”, “redneck”, “hillbilly”, “leader”, “farmer”], “asian”: [“doctor”, “engineer”, “laborer”, “teacher”], “black”: [“slave”, “musician”, “runner”, “criminal”, “homeless”]

* Religion: Sets of protected attributes: [“judaism”, “christianity”, “islam”], [“jew”, “christian”, “muslim”], [“synagogue”, “church”, “mosque”], [“torah”, “bible”, “quran”], [“rabbi”, “priest”, “imam”] Lists of stereotypes per religion: “jew”: [“greedy”, “cheap”, “hairy”, “liberal”], “christian”: [“judgemental”, “conservative”, “familial”], “muslim”: [“violent”, “terrorist”, “dirty”, “uneducated”]

§.§.§ Custom lists used in this paper

* Neutral: [`ballpark', `glitchy', `billy', `dallas', `rip', `called', `outlooks', `floater', `rattlesnake', `exports', `recursion', `shortfall', `corrected', `solutions', `diagnostic', `patently', `flops', `approx', `percents', `lox', `hamburger', `engulfed', `households', `north', `playtest', `replayability', `glottal', `parable', `gingers', `anachronism', `organizing', `reach', `shtick', `eleventh', `cpu', `ranked', `irreversibly', `ponce', `velociraptor', `defects', `puzzle', `smasher', `northside', `heft', `observation', `rectum', `mystical', `telltale', `remnants', `inquiry', `indisputable', `boatload', `lessening', `uselessness', `observes', `fictitious', `repatriation', `duh', `attic', `schilling', `charges', `chatter', `pad', `smurfing', `worthiness', `definitive', `neat', `homogenized', `lexicon', `nationalized', `earpiece', `specializations', `lapse', `concludes', `weaving', `apprentices', `fri', `militias', `inscriptions', `gouda',
`lift', `laboring', `adaptive', `lecture', `hogging', `thorne', `fud', `skews', `epistles', `tagging', `crud', `two', `rebalanced', `payroll', `damned', `approve', `reason', `formally', `releasing', `muddled', `mineral', `shied', `capital', `nodded', `escrow', `disconnecting', `marshals', `winamp', `forceful', `lowes', `sip', `pencils', `stomachs', `goff', `cg', `backyard', `uprooting', `merging', `helpful', `eid', `trenchcoat', `airlift', `frothing', `pulls', `volta', `guinness', `viewership', `eruption', `peeves', `goat', `goofy', `disbanding', `relented', `ratings', `disputed', `vitamins', `singled', `hydroxide', `telegraphed', `mercantile', `headache', `muppets', `petal', `arrange', `donovan', `scrutinized', `spoil', `examiner', `ironed', `maia', `condensation', `receipt', `solider', `tattooing', `encoded', `compartmentalize', `lain', `gov', `printers', `hiked', `resentment', `revisionism', `tavern', `backpacking', `pestering', `acknowledges', `testimonies', `parlance', `hallucinate', `speeches', `engaging', `solder', `perceptive', `microbiology', `reconnaissance', `garlic', `neutrals', `width', `literaly', `guild', `despicable', `dion', `option', `transistors', `chiropractic', `tattered', `consolidating', `olds', `garmin', `shift', `granted', `intramural', `allie', `cylinders', `wishlist', `crank', `wrongly', `workshop', `yesterday', `wooden', `without', `wheel', `weather', `watch', `version', `usually', `twice', `tomato', `ticket', `text', `switch', `studio', `stick', `soup', `sometimes', `signal', `prior', `plant', `photo', `path', `park', `near', `menu', `latter', `grass', `clock'] * Human-related: [`wear', `walk', `visitor', `toy', `tissue', `throw', `talk', `sleep', `eye', `enjoy', `blogger', `character', `candidate', `breakfast', `supper', `dinner', `eat', `drink', “carry”, “run”, “cast”, “ask”, “awake”, “ear”, “nose”, “lunch”, “coalition”, “policies”, “restaurant”, “stood”, “assumed”, “attend”, “swimming”, “trip”, “door”, “determine”, “gets”, “leg”, “arrival”, “translated”, “eyes”, “step”, “whilst”, “translation”, “practices”, “measure”, “storage”, “window”, “journey”, “interested”, “tries”, “suggests”, “allied”, “cinema”, “finding”, “restoration”, “expression”,“visitors”, “tell”, “visiting”, “appointment”, “adults”, “bringing”, “camera”, “deaths”, “filmed”, “annually”, “plane”, “speak”, “meetings”, “arm”, “speaking”, “touring”, “weekend”, “accept”, “describe”, “everyone”, “ready”, “recovered”, “birthday”, “seeing”, “steps”, “indicate”, “anyone”, “youtube”]
http://arxiv.org/abs/2306.10028v1
20230605070434
Graph Based Long-Term And Short-Term Interest Model for Click-Through Rate Prediction
[ "Huinan Sun", "Guangliang Yu", "Pengye Zhang", "Bo Zhang", "Xingxing Wang", "Dong Wang" ]
cs.IR
[ "cs.IR", "cs.LG" ]
Huinan Sun and Guangliang Yu contributed equally to this research.

Click-through rate (CTR) prediction aims to predict the probability that a user will click an item, and has been one of the key tasks in online recommender and advertising systems. In such systems, rich user behavior (viz. long- and short-term) has proved to be of great value in capturing user interests. Both industry and academia have paid much attention to this topic and have proposed different approaches to modeling long-term and short-term user behavior data. But there are still some unresolved issues. More specifically, (1) rule- and truncation-based methods for extracting information from long-term behavior easily cause information loss, and (2) using a single type of feedback behavior, regardless of scenario, to extract information from short-term behavior leads to information confusion and noise. To fill this gap, we propose a Graph based Long-term and Short-term interest Model, termed GLSM. It consists of a multi-interest graph structure for capturing long-term user behavior, a multi-scenario heterogeneous sequence model for modeling short-term information, and an adaptive fusion mechanism to fuse information from long-term and short-term behaviors. In comprehensive experiments on real-world datasets, GLSM achieved state-of-the-art results on offline metrics. At the same time, the GLSM algorithm has been deployed in our industrial application, bringing a 4.9% CTR and 4.3% GMV lift, which is significant to the business.

§ INTRODUCTION

Recommender systems (RS) play a key role in online services. A recommender system includes three modules: matching, strategy, and click-through rate (CTR) prediction. The matching stage deploys simple but effective recommendation algorithms (such as collaborative filtering<cit.>) to pick out a small subset of relevant items from all items. The CTR stage predicts the probability of the user clicking on an item, and is the core module of the recommender system<cit.>.
With the rapid growth of user historical behavior data, learning representations of user interest from historical behavior has been widely introduced into CTR prediction models. According to our statistics, about 70% of users' behavior sequences in our app are longer than two hundred. Yet, the massive amount of user behavior information brings not only information gain but also a new problem: how to efficiently process user historical behavior sequences (including long-term and short-term behavior). The traditional approach<cit.> is to truncate the user's historical behavior sequence and only retain a certain number of behaviors to meet online latency requirements. This solution is simple but crude: although it solves the performance problem of the online serving system, it causes information loss and reduces prediction accuracy. In view of the shortcomings of truncation, various solutions have been proposed in the industry. One idea is to compress the user's historical behavior sequence as much as possible without losing information. The representative work is MIMN<cit.>. MIMN uses the UIC structure to compress and store the user's lifelong behavior sequence. UIC embeds the user's different interest increments into a fixed-size memory matrix that is updated with each new behavior. In this way, the computation of user modeling is decoupled from CTR prediction, avoiding online serving delays. However, encoding all of a user's historical behaviors into a fixed-size memory matrix introduces a lot of noise into the memory cells, which cannot accurately capture user interests. Another idea is to build a two-stage indexing scheme that filters sub-behavior sequences in real time from a user's full historical behavior. The representative work is SIM<cit.>, which adopts a category-based retrieval method, using category attributes to select relevant behaviors from the user behavior sequence. Retrieving user historical behaviors based only on category attributes will inevitably miss behavioral items that have different attributes but are related to the current candidate, resulting in a loss of user information. In addition, users' recent behaviors carry their recent interests<cit.>. Industrial models use different neural network architectures such as CNN, RNN<cit.>, Transformer<cit.>, Capsule<cit.>, and attention-based models<cit.>. These models are often applied to a single type of behavior sequence, such as click sequences. However, the user's decision-making process generates various types of behaviors, such as clicks, loads, and searches. Extracting only certain types of behaviors splits the user's sequential actions and is not conducive to capturing the user's full intent. At the same time, user behavior is not consistent across scenarios. For example, food click preferences for breakfast and lunch are completely different. When the behavior information of all scenarios is mixed together, signals from different scenarios interfere with each other and introduce noise. In this paper, in order to maximize the use of user behavior information for CTR prediction, we propose the GLSM algorithm, which efficiently and accurately retrieves information from users' long-term historical behavior, uses scenario-based modeling for short-term behavior, and fuses users' long-term and short-term interests through a proposed fusion scheme.
The main contributions of this paper are as follows:

* Long-term behavior retrieval: In view of the large amount of long-term behavior data, a new long-term behavior retrieval scheme is proposed, which utilizes graph connectivity and multi-interest center nodes to achieve efficient multi-interest soft retrieval.

* Short-term interest extraction: Multi-scenario heterogeneous sequence modeling is used to extract users' short-term interests in different scenarios.

* Combination of long-term and short-term interests: An interest fusion network is designed to combine long-term and short-term interests according to the user's own characteristics.

§ RELATED WORK

Rich user behavior data has proven valuable in CTR prediction, mainly because user historical behavior (long-term and short-term) reflects user interests. If all user behavior data were added to the model, the performance requirements of the online CTR prediction service could not be met due to the large data volume. The industry has proposed several information extraction schemes for long-term behavior sequences (such as MIMN<cit.> and SIM<cit.>). For short-term behavior sequences, models such as DIN<cit.>, DIEN<cit.>, BST<cit.>, and CapsNet<cit.> have been proposed to extract users' short-term interests.

§.§ Long-Term User Interest

MIMN showed that considering long-term behavior sequences in the user interest model can significantly improve the performance of the CTR model. Although long-term behavior sequences bring useful information for user interest modeling, there are also two disadvantages: they greatly increase the latency and storage burden of online serving systems, and the sequences contain a lot of noise. Researchers have proposed many approaches to address the challenge of modeling long-term user behavior sequences. MIMN uses a fixed additional storage module, NTM<cit.>, to store the user's long-term interest vectors in a compressed form, which solves the problem of storing a large amount of user behavior data. The user's long-term interest vectors are updated offline in an asynchronous manner. Since there is no inference time limit offline, MIMN can theoretically model any sequence length. Yet, MIMN cannot learn different user interest vectors for different target items, resulting in information loss. SIM proposes an online two-stage retrieval method. It retrieves relevant behaviors from users' long-term behaviors based on features of the current candidate item, such as its category. However, retrieving only behaviors that share the same feature as the candidate easily causes information loss.

§.§ Short-Term User Interest

Classical CTR models mainly focus on extracting user interests from short-term user behavior sequences, and various neural network architectures have been proposed, such as DIN<cit.>, DIEN<cit.>, MIND<cit.>, and BST<cit.>. DIN emphasizes that user interests are diverse and that the user's current behavior is only related to part of the historical behavior, so an attention mechanism is introduced to capture the user's different interests with respect to different target items. DIEN points out the temporal relationship between historical behaviors and models the evolution of user interests. MIND introduces the dynamic routing method of capsule networks to learn multiple interest points from user behavior. Furthermore, inspired by self-attention in the NLP domain, BST introduces the Transformer, which extracts deeper representations for each item and models the user's behavior sequence.
To address these issues, we propose a comprehensive model of users' historical behavior: a graph-retrieval-based long-term and short-term interest fusion model. In the following sections, we first introduce the GLSM algorithm framework, then briefly introduce the deployment of GLSM in industry, and finally compare GLSM with classic CTR prediction methods in the experimental section; some open discussions are given at the end of this article.

§ GRAPH BASED LONG-TERM AND SHORT-TERM INTEREST MODEL

The overall workflow of GLSM is shown in Figure 1. GLSM consists of three parts: a graph-based long-term interest retrieval module, a short-term multi-intent interest recognition module, and a long-term and short-term interest fusion module. In the following, we briefly introduce the CTR paradigm, and then introduce the role of each module of GLSM in CTR prediction.

§.§ Click-Through Rate Prediction

For the CTR prediction task, there are M users in U = { u_1, u_2, ...... , u_M } and N items in V = {v_1, v_2, ...... , v_N}. We define a user behavior event as b=(u,t,v,c,q): in context c, user u performs action q on item v at time t. The context c includes category, time, location, etc. The behavior sequence of user u is S_u = (b_u,1,b_u,2,b_u,3,......,b_u,n), where b_u,i represents the i-th behavior of the user. The CTR prediction of user u clicking on a target item v is calculated via:

y = DNN(F_e(S_u), G_e(u_p, v_p))

where F_e(S_u) extracts the related behaviors from S_u and converts them into an embedding representation, and G_e(u_p, v_p) extracts relevant auxiliary information from the user profile (u_p) and item profile (v_p) as embeddings. Generally speaking, F_e is the core of the model, because S_u reflects the user's intrinsic and multi-faceted interests. It is generally accepted in the industry that long-term behavior tends to reflect the user's stable interests, while short-term behavior tends to capture the user's exploratory and changeable intentions<cit.> <cit.>. Therefore, this paper models long-term and short-term behavior as two parts, so Eqn (1) becomes:

y = DNN(F_e^l(S_u^l), F_e^s(S_u^s), M_e(S_u^l, S_u^s), G_e(u_p, v_p))

S_u^l and S_u^s represent the long-term and short-term parts of S_u respectively, F_e^l and F_e^s are the corresponding embedding functions, and M_e represents the fusion of long-term and short-term user behavior. In the following sections, we discuss in detail the implementation of F_e^l, F_e^s, and M_e in GLSM.

§.§ Graph-based Long-Term Interest Retrieval Module

The user's long-term behavior contains both valuable and noisy information. If it is added to the model without distinction, the irrelevant information can interfere with model estimation. Therefore, the most critical capability of F_e^l is to efficiently and accurately retrieve relevant information from a large amount of information. Assuming there are N behaviors in the user's long-term behavior sequence, checking one by one whether each of the N behaviors is related has time complexity O(N). This is too time-consuming, so the complexity needs to be reduced further. To tackle this challenge, we propose a graph-based retrieval structure (GRS) as the core module of F_e^l, as shown in Figure 2.
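Before detailing the GRS, here is a minimal sketch (ours, in PyTorch-style code with made-up layer sizes and class names) of how the four terms in Eqn (2) could be combined into the final prediction. It is only an illustration of the paradigm, not the production architecture:

```python
import torch
import torch.nn as nn

class GLSMHead(nn.Module):
    """Toy prediction head combining the terms of Eqn (2).
    The four inputs are assumed to be precomputed embeddings: long-term interest
    F_e^l, short-term interest F_e^s, the fused term M_e, and the profile term G_e."""
    def __init__(self, d_long=64, d_short=64, d_fuse=128, d_profile=32):
        super().__init__()
        d_in = d_long + d_short + d_fuse + d_profile
        self.mlp = nn.Sequential(
            nn.Linear(d_in, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, e_long, e_short, e_fused, e_profile):
        x = torch.cat([e_long, e_short, e_fused, e_profile], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)  # predicted click probability

# Usage with random embeddings for a batch of 4 user-item pairs.
head = GLSMHead()
y = head(torch.randn(4, 64), torch.randn(4, 64), torch.randn(4, 128), torch.randn(4, 32))
```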
In the GRS, designed around graph connectivity, efficiency, and relevance, the target item given as input is extended via the center nodes to its one-hop (or multi-hop) neighbors, and these neighbors are output as the related behaviors. The construction and retrieval processes of the GRS are described below.

§.§.§ Graph-based retrieval structure - Construction

(Global graph) G_g(global-graph) = (V, E). V and E denote the node set and edge set. For V, each node represents an item. For E, if in any S_u the items v_i and v_j appear in consecutive behaviors b_u,i and b_u,i+1, then v_i and v_j form an edge. In this way, we can build a homogeneous global graph G_g based on the S_u of all users.

(Local graph) G_l(local-graph) = (V_u, E_u). Compared with the global graph, a local graph is the special case of the global graph restricted to a single user: both V_u and E_u are constructed within the scope of a single S_u.

(Center nodes) V_c. In G_l, the center nodes are the key representative nodes of the graph. Each user has its own GRS, which consists of G_l and the center nodes. In the GRS, the target item passes through the center nodes to efficiently find its related nodes. To find the center nodes in G_l, we measure each node's local importance (l_im) and global importance (g_im). For l_im, in G_l we measure the local importance with degree centrality<cit.>:

l_im = C_deg(v) = d_v/(|N_s|-1)

where N_s is the set of nodes in G_l and d_v is the degree of node v. In G_l, the higher the degree of a node, the more it reflects the user's main interests. The global importance g_im refers to the correlation between a node and the user's interests at the global level, so node and user-interest representations at the global level are required. To calculate g_im, we first construct G_g using the S_u of all users. In G_g, we use the graph algorithm GraphSAGE<cit.> to generate an embedding for each item v. Based on the global graph embeddings, K-means is used to cluster each user's behaviors into multiple interest clusters. The reciprocal of the distance between each item v and its K-means cluster center is taken as the global importance g_im of node v. After normalizing the global importance and the local importance, their sum is taken as the node importance:

union_im = l_im + g_im

Finally, we select the top N nodes V_c={ v_1, v_2, ......, v_n } of the user graph G_l as center nodes according to union_im. Algorithm 1 describes this process.

§.§.§ Graph-based retrieval structure - Retrieval

The center nodes V_c are a subset of the nodes in the user graph G_l. We first find, among the center nodes, the top K nodes most relevant to the target item; these top K center nodes are then extended to their neighbor nodes V_next_hop_nodes={ v_c,1, v_c,2, v_c,3, ...... } according to the connectivity of the graph. Further, each element of V_next_hop_nodes can be expanded in the same way to obtain second-hop neighbors. We can expand as many hops as we want; in this study, we use the second-hop neighbors of the center nodes. Algorithm 2 describes this process (a code sketch of the selection and retrieval steps is given below).

§.§.§ Long-term user behavior interest aggregation unit

Each center node represents a user interest cluster, and its neighbor nodes give a precise description of that cluster, so we aggregate all the neighbor nodes of each center node through an attention mechanism.
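Before describing the aggregation in detail, here is a rough sketch of the center-node selection and retrieval steps (Algorithms 1 and 2) referred to above. This is our own illustration, not the paper's actual implementation: it assumes the local graph is a networkx graph, that global-importance scores and item embeddings have already been computed, and that cosine similarity stands in for the relevance function between a node and the target item.

```python
import numpy as np
import networkx as nx

def select_center_nodes(G_l: nx.Graph, g_im: dict, top_n: int):
    """Algorithm 1 (sketch): rank nodes by normalized degree centrality
    plus normalized global importance and keep the top N as center nodes."""
    l_im = nx.degree_centrality(G_l)                 # d_v / (|N_s| - 1)
    def normalize(scores):
        vals = np.array(list(scores.values()), dtype=float)
        lo, hi = vals.min(), vals.max()
        return {k: (v - lo) / (hi - lo + 1e-9) for k, v in scores.items()}
    l_n, g_n = normalize(l_im), normalize(g_im)
    union_im = {v: l_n[v] + g_n.get(v, 0.0) for v in G_l.nodes}
    return sorted(union_im, key=union_im.get, reverse=True)[:top_n]

def retrieve(G_l, centers, emb, target_emb, top_k: int, hops: int = 2):
    """Algorithm 2 (sketch): pick the top K centers most similar to the target
    item, then expand them to their neighbors up to `hops` hops away."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    ranked = sorted(centers, key=lambda v: cos(emb[v], target_emb), reverse=True)[:top_k]
    frontier, related = set(ranked), set()
    for _ in range(hops):
        frontier = {nb for v in frontier for nb in G_l.neighbors(v)} - related
        related |= frontier
    return related
```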
Specifically, during aggregation we consider not only the embedding of each neighbor node itself but also the influence of the node's behavioral side information:

E_v_i = E_v_i + E_v_i, sideinfo

E_v_i, sideinfo = E_cate_i + E_behaviortype_i + E_discrete_time_i

The side information includes category, behavior type, and time. Categories and behavior types are mapped to embeddings by an embedding lookup. Because time is continuous, we discretize it before the lookup:

E_discrete_time_i = E(int(log(t_now - t_v)))

where t_now is the current time and t_v is the behavior time of the current node; the time difference captures how the node's influence decays over time.

Different neighbors E_v_i influence the target item differently. For example, when predicting a user's preference for a barbecue restaurant (the target item), information about barbecue, beer, etc. is more important. We therefore compute an influence weight for each E_v_i:

α_i = attention(E_v_i, E_v_t) = σ (W_1(W_2([E_v_i, E_v_t])))

where σ is the sigmoid activation, E_v_t is the embedding of the target item, and [A, B] denotes the concatenation of two vectors. The representation of a center node is then obtained by aggregating its neighbor nodes; since the neighbors already carry personalized weights, we use weighted sum pooling:

E_center = ∑_i=0^n α_i * E_v_i

§.§.§ Long-term user behavior interest activation unit

Having obtained the basic representation of each interest cluster, we still need to distinguish how different clusters influence the target item, so we use an attention mechanism to compute the contribution of each interest cluster to CTR prediction. Analogously to Eqn (7) and Eqn (8), we compute a weight β_j for each center node:

β_j = attention(E_center_j, E_v_t) = σ (W_1(W_2([E_center_j, E_v_t])))

and aggregate the center nodes by these weights:

E_long = ∑_j = 0^k β_j * E_center_j

The complete model diagram is shown in Figure 3.

§.§ Short-term multi-intent recognition module

User decision-making is a continuous process within a scene <cit.>, involving clicks, add-to-cart, favorite, order, and other actions. Extracting only a specific type of subsequence, such as S_u(click) or S_u(cart), splits this decision-making process and hinders capturing the user's full intent. Conversely, if the model mixes behaviors from all scenes, information from different scenes interferes and introduces noise. We therefore model short-term behavior as shown in Figure 4.

§.§.§ Short-term interest representation unit

Scenes are the key to short-term, scenario-based modeling. Time and location are generally good criteria for scene division. For example, on a catering platform user behavior is strongly affected by meal segments, and behaviors in different meal segments differ substantially. We divide the user's short-term behaviors S_u^s into multiple segments:

S_u^s(Breakfast) = (b_breakfast_click, b_breakfast_order)

S_u^s(Lunch) = (b_lunch_click, b_lunch_add_cart, b_lunch_click)

S_u^s(Supper) = (b_supper_click, b_supper_add_cart, b_supper_click)

Since we want to capture temporal changes in behavior while keeping the model efficient, we model the user's short-term interest within each scene using an attention-based GRU over the scene's consecutive behaviors.
E_v_i = E_v_i + E_v_i,sideinfo

z_t = σ (W_z[E_v_i - 1, E_v_i])

r_t = σ (W_r[E_v_i - 1, E_v_i])

h_t = tanh(W[r_t ⊙ E_v_i - 1, E_v_i])

E_v_i = (1 - z_t) ⊙ E_v_i - 1 + z_t ⊙ h_t

where ⊙ denotes element-wise multiplication. The user's decision-making behavior is continuous within a scene. If we only kept the final GRU state of a scene sequence, we would capture the user's final state but lose the intermediate states. We therefore retain the state of every action and aggregate these states into the scene representation:

E_scene = ∑_i^n E_v_i

§.§.§ Short-term interest activation unit

The user's interest distribution in different scenes affects the current target item differently. For example, interest in breakfast (soy milk) has little effect on the user's desire to eat barbecue at night. We therefore use the candidate item as the target to activate each short-term intent E_scene through an attention mechanism, so that the user's short-term intents in different scenes enter the model with different importance:

γ_i = attention(E_scene_i, E_v_t) = σ (W_1(W_2([E_scene_i, E_v_t])))

After computing the embedding of each scene, we aggregate the scene embeddings into the short-term representation:

E_short = ∑_i^m γ_i * E_scene_i

§.§ Long-Term and Short-Term Interests Fusion Module

The relative effect of long-term and short-term behavior on user decisions is user-specific. For users with stable interests, long-term behavior has a larger impact on the interest representation, while for trend-seeking (early-adopter) users, short-term behavior better reflects their current interests. We therefore propose a user-personalized network to fuse long-term and short-term interests, as shown in Figure 5.

§.§.§ User personalized gate network

The user profile is the input to the gating network Gate_u, which produces the gate embedding E_gate_u through a multi-layer transformation:

E_gate_u = W_1 (σ(W_2 (E_user profile)))

To balance the weight of each interest component, E_gate_u is normalized with a softmax over its dimensions:

E_gate_u^i = exp(E_gate_u^i) / ∑_j^d exp(E_gate_u^j)

The dimension of E_gate_u equals that of the long-term and short-term interest embeddings. Note that the embeddings of the user-profile features fed into Gate_u do not receive back-propagated gradients from Gate_u; this limits the influence of Gate_u on the convergence of the existing feature embeddings. Gate_u then adds a personalized bias to the input of the subsequent network layer by computing

E_u = concat(E_gate_u ⊙ E_long, (1-E_gate_u) ⊙ E_short)

where ⊙ is element-wise multiplication. With the personalized E_u, the model's prediction of the target can be improved.
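A minimal sketch of the user-personalized gate described above is given below, assuming PyTorch and placeholder dimensions. The detach on the profile embedding mirrors the note that the profile feature embeddings should not receive gradients from Gate_u.

import torch
import torch.nn as nn

class UserPersonalizedGate(nn.Module):
    """Sketch of Gate_u: a softmax-normalized gate mixing long- and short-term interests."""

    def __init__(self, d_profile=32, d_interest=64, d_hidden=64):
        super().__init__()
        self.w2 = nn.Linear(d_profile, d_hidden)
        self.w1 = nn.Linear(d_hidden, d_interest)

    def forward(self, e_profile, e_long, e_short):
        # Stop gradients from the gate flowing back into the profile embeddings.
        gate = self.w1(torch.sigmoid(self.w2(e_profile.detach())))
        gate = torch.softmax(gate, dim=-1)          # normalize each gate dimension
        # E_u = concat(gate ⊙ E_long, (1 - gate) ⊙ E_short)
        return torch.cat([gate * e_long, (1.0 - gate) * e_short], dim=-1)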
§.§ Implementation Scheme of Online Service System Based on Graph Retrieval Offline storage part: To reduce the load pressure of online feature acquisition, we pre-convert users' long-term behaviors into a subgraph structure, which centered on the center nodes, and store them offline. In this way, during online retrieval, nodes retrieval can be performed directly, which saves time for construction. Architecture optimization:As shown in Figure 6, in order to further improve the retrieval efficiency of users' long-term behavior, we prepend the retrieval process before the regular CTR module, and run in parallel with some other intermediate processes (material service, category filtering, etc.) In CTR module, only some other general features (user portraits, etc.) need to be obtained here. § EXPERIMENTS In this section, we conduct experiments with the aim of answering the following three research questions: * RQ1: Does our GLSM model outperform the baseline model? * RQ2: How does each part of our GLSM model work? * RQ3: What is the impact of the different components in GLSM? Before presenting the evaluation results, we first introduce the dataset, baseline model, metrics, and experimental setup. §.§ DataSet We adopt public datasets and industrial datasets to comprehensively compare GLSM models and baseline models. The statistics of the datasets are shown in Table 1. Taobao dataset: This dataset was first released by Alibaba-Taobao and is widely used as a common benchmark in CTR estimation tasks. It is a user behavior log for Taobao mobile application, including click, purchase, add-to-cart and favorite behaviors. There are approximately 101 actions per user, and an average of 24 actions per item, bringing the total number of actions to 100 million. We choose the closest 30 behavior as the short-term user behavior sequence, the others as long-term behavior. Industrial dataset: This dataset is an industrial dataset collected by our own App which is one of the top-tier mobile Apps in our country. It is much larger than the Taobao public data set, and the maximum user behavior sequence can reach 5000. In our business, taking into account the characteristics of the business, we use the behavior of users in the past two weeks as short-term behavior, no more than 80, and the rest are long-term behavior. §.§ Baselines and Metrics §.§.§ Baseline : We evaluate the performance of GLSM against the following state-of-the-art CTR methods. * DIN<cit.>: Based on the attention mechanism, the user behavior sequence is given different weights through the relevance of the behavior to the target advertisement. * DIEN<cit.>: Based on DIN, an interest extraction layer is designed to obtain timely interests from user behavior sequence. Meanwhile, an interest evolution layer is proposed, which uses GRU with attention update gates to simulate the interest evolution process related to the target item. * SIM<cit.>: SIM proposes a two-stage retrieval model for long-term user behavior. It retrieves the user's long-term behavior according to the category id, selects the top K historical behaviors of the same category as the candidate, and then adds it to the model through attention mechanism. In this study, we choose the model served online in SIM paper as our baseline. §.§.§ Metrics : We evaluate the CTR prediction performance with three widely used metrics. The first one is area under ROC curve (AUC) which reflects the pairwise ranking performance between click and non-click samples. 
In addition, within a recommender system the ranking of items for the same user is an even more important indicator, so we also report GAUC<cit.>, which measures the model's ability to rank different items for the same user. The last metric is log loss, which measures the overall likelihood of the test data and is widely used for classification tasks. We also deploy the models online in a real industrial system and use CTR as the evaluation metric in our A/B tests.

§.§ RQ1: Does our GLSM model outperform the baseline model?

We evaluate GLSM and the baseline models on the two datasets. From Table 2, we make the following observations:

Long-term behavior analysis. GLSM+long outperforms the other two models. Long-term behavior sequences contain a large number of actions that are unrelated to the target item, and these behaviors introduce noise into the model. DIN/DIEN+long does not explicitly handle this noise, so its performance is the lowest among the three models. SIM+long filters noise by retrieving only items of the same category as the target item, but this hard-match filter removes too much: for example, barbecue and beer are commonly paired categories, and hard category filtering discards such cross-category matches. A complete retrieval scheme should therefore filter noise from users' long-term behavior while still recovering the useful information. The results show that GLSM+long outperforms SIM+long, indicating that graph-based retrieval over long-term behavior removes redundant noise while increasing the information gain, which improves prediction.

Short-term multi-intent recognition. Comparing DIN+short, DIEN+short and GLSM+short, we observe DIN < DIEN < GLSM. DIEN captures interest drift through sequence models such as GRU, which improves performance, but it does not address the mixing of multiple action types and multiple intents in short-term behavior sequences. GLSM not only captures interest drift through its multi-intent GRU modules but also separates the user's intents into clearer components, further improving the results. In our business, better modeling of short-term behavior effectively captures users' recent interests and improves user experience.

Online performance. We deployed the SIM and GLSM models on a real industrial recommender system. As Figure 7 shows, the most important online metrics, CTR (CTR = #click / #pages) and GMV (GMV = 1000 * #pay amount / #pages), improve by 4.9% and 4.3% on average, respectively, which are significant gains on our platform.

§.§ RQ2: How does each part of our GLSM model work?

§.§.§ A. Graph-based long-term interest retrieval module

Number of clusters: Computing g_im involves clustering to capture users' long-term interest clusters. The number of interest clusters affects the g_im weight of each node, which in turn affects the selection of center nodes. We therefore tried several cluster counts on users' long-term behavior sequences and used the Silhouette Coefficient

s = (b - a) / max(a, b)

(a: the mean distance between a sample and all other points in the same cluster; b: the mean distance between a sample and all points in the nearest neighboring cluster) to measure clustering quality.
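The cluster-count search just described can be reproduced with a sketch like the following, assuming scikit-learn and per-user item embeddings as input; the candidate range is an illustrative assumption.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def pick_num_clusters(behavior_embeddings, candidates=range(4, 41, 4)):
    """Choose the cluster count with the highest mean silhouette coefficient.

    behavior_embeddings: (n_behaviors, dim) array of item embeddings for one
    user's long-term behaviors (e.g. GraphSAGE embeddings).
    """
    best_k, best_s = None, -1.0
    for k in candidates:
        if k >= len(behavior_embeddings):
            continue
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(behavior_embeddings)
        s = silhouette_score(behavior_embeddings, labels)  # mean of (b - a) / max(a, b)
        if s > best_s:
            best_k, best_s = k, s
    return best_k, best_s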
From figure 8 we observed that when the number of clusters is 28, the clustering quality is the highest, so we select 28 as the number of clusters of long-term user behavior in the industrial data set. Clustering Visualization:In Figure 9, in order to observe the clustering effect in GRS, we show the dimensionality reduction of three cluster centers, each of which represents a user's interest. We found that each interest contains a certain amount of behavior, and the differences between interests are more obvious. Multiple Interest Weights: The importance of the association between target items and user interests is not the same. GLSM dynamically activates multiple interests through the interest activation unit. It can be seen from Figure 10 that during long-term interest matching, the weights of different interests have obvious personalized distributions.This matches people's common sense perception. For example, when the target item is a T-shirt, the relevance of clothing interests should be much higher than that of food interests. §.§.§ B. Short-term multi-intent recognition The short-term behavior represents the user's recent interest distribution. Meanwhile, users’ short-term intentions usually include multiple scenarios (such as home, company, morning, noon), which consist of users’ ongoing behaviors. Our proposed short-term multi-intent is only able to capture the multi-scene interests of users. As can be seen from Figure 11, GLSM decreased exposure to soy milk (usually at breakfast) at lunchtime and increased exposure to braised chicken rice (usually at lunch) compared to the base. It can be seen from this data that GLSM can better fit the actual interests and preferences of users in different dining scenarios. §.§ RQ3: Ablation and Hyperparameter Studies §.§.§ A.Numbers of topk center nodes in long-term module For the long-term interest module of the GLSM model, we have analyzed the clustering quality for different numbers of clusters in the previous section. Here, we analyze the impact of choosing different topk center nodes on model performance. It can be seen from Figure 12 that with the increase of the number of TopK center nodes, the model effect shows a trend of first rising and then decreasing. Therefore, we set topk = 15 which means to select 15 center nodes from the center node set. §.§.§ B.Long-term and short-term interests fusion In GLSM, we propose User Personalized Gate Network to fuse users' long-term and short-term interests. And we also compare it with several common fusion methods. Table 3 shows the long-term and short-term fusion method we designed improves AUC by over 0.002 compared with add, weight, multiply and concat. In conclusion, we have conducted extensive experiments and multiple experimental comparisons, our proposed GLSM model shows significant results both offline and online. § CONCLUSIONS In this paper, we focus on modeling users' long-term and short-term behavior. For long-term behaviors, we propose to build a graph retrieval structure to extract user interests and retrieve relevant long-term behaviors through center nodes. Compared with the SOTA baseline, we can extract various interests of users more personalized and reduce the information loss of long-term behavior. At the same time, graph retrieval can run in parallel with other processes to meet the online performance. For short-term behaviors, we split behaviors by scene to reduce confounding effects between scenes. 
In a single scene, a GRU extracts the evolution of user interests, and across scenes an attention mechanism selects the user interests that match the target item. For long- and short-term fusion, the fusion network infers the influence of long-term and short-term behaviors individually based on user characteristics. Finally, we deployed the GLSM model on our platform, where it has brought significant business improvement and now serves mainstream traffic.
http://arxiv.org/abs/2306.10292v2
20230617082355
A new look at the theory of point interactions
[ "R. Figari", "H. Saberbaghi", "A. Teta" ]
math-ph
[ "math-ph", "math.MP", "quant-ph" ]
http://arxiv.org/abs/2306.17722v1
20230630150829
Impact of the phonon environment on the nonlinear quantum-dot-cavity QED. I. Path-integral approach
[ "L. S. Sirkina", "E. A. Muljarov" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "quant-ph" ]
http://arxiv.org/abs/2306.03272v1
20230605214520
Better Write Amplification for Streaming Data Processing
[ "Andrei Chulkov", "Maxim Akhmedov" ]
cs.DC
[ "cs.DC" ]
Federal State Autonomous Educational Institution for Higher Education National Research University Higher School of Economics, Faculty of Computer Science, Applied Mathematics and Information Science. BACHELOR'S THESIS: “Better Write Amplification for Streaming Data Processing”. Submitted by Chulkov Andrey Sergeevich, student of group 175, 4th year of study. Approved by Supervisor: Akhmedov Maxim Basirovich. Moscow 2021

§ ABSTRACT

Many current applications have to perform data processing in a streaming fashion. Doing so at a large scale requires a parallel system that must be equipped to handle straggling workers and different kinds of failures. YT is the main driver behind distributed systems at Yandex, home to its distributed file system, lock service, key-value storage, and internal MapReduce platform. We implement a new component of this system designed for performing streaming MapReduce operations, utilizing different core YT solutions to achieve fault-tolerance and exactly-once semantics while maintaining efficiency and low write amplification factors.

§ KEYWORDS

streaming data processing, map-reduce, write amplification, fault-tolerance, exactly-once, distributed systems

§ INTRODUCTION

In the modern world, many large-scale data processing problems require handling a continuous and quickly shifting stream of data. This is especially true at a company as prominent as Yandex, which handles exabytes of data from web search and its other services. Classic batch approaches involve running many parallel computations on a static and usually large pool of data. Batch processing systems have been flourishing since the introduction of the MapReduce paradigm <cit.> by Google. However, real-time streaming conditions impose a number of limitations for which the batch programming model is not a fit. The foremost consideration is that real-time processing on a stream of data demands much lower latencies than batch executions, which can typically take many hours. This implies that persisting data is now much more costly relative to the execution time but still needed due to potential failures. It is thus an important task to reduce write amplification, the phenomenon of the same data being written to storage multiple times. Consequently, it is essential to keep as much data in memory as possible, so all of the consumers of a stream have to move along at a similar speed. This makes straggling workers even more of a challenge than before. We implemented a streaming version of MapReduce optimized to handle the aforementioned problems. It is part of Yandex's YT system and heavily utilizes its other components. Though there have been many developments in this field over the past decade, we think that taking advantage of Yandex's unique infrastructure will benefit internal users and external services greatly, while also bringing something new to the table of streaming data processing. This thesis is structured as follows: a broad synopsis is given in the subsections below, chapter <ref> discusses related work in this area, chapter <ref> gives a short overview of the YT components used, chapter <ref> describes the system architecture, chapter <ref> evaluates the results, chapter <ref> proposes designs for several future enhancements and chapter <ref> concludes and summarizes the work.
§.§ Relevance and significance

Big companies like Google, Yandex, Facebook, Amazon, Twitter and others deal with enormous amounts of data, currently on the order of billions of gigabytes. Even smaller companies need to analyze and act on different kinds of logs and metrics from their services, often using cloud services which provide tools for efficient data processing, such as Amazon Web Services or the Google Cloud Platform. In all of the cases above, doing the processing on a single machine becomes economically infeasible. This has led to distributed storage and processing systems running on commodity hardware flourishing since the early 2000s, with great influence from Google's GFS <cit.> and MapReduce papers, which stood at the foundation of the Apache Hadoop ecosystem. Since then, dozens of products have emerged in this field, improved and adapted for different needs. One such need is reliably processing large streams of data in real time, which is especially crucial considering the vast amount of internet traffic nowadays <cit.>. For example, a video streaming service could want to analyze which videos are most popular in specific regions at any given moment so that it could automatically cache them on nearby servers for better loading times. Even though there are quite a few existing distributed streaming processing systems, they usually have their own caveats and inefficiencies, which will be discussed in further chapters. It must also be noted that integrating an open-source solution into an existing infrastructure is often problematic and ineffective. Devising an efficient streaming processing algorithm and building a flexible system around it could benefit a lot of internal teams at Yandex and improve its services, which is directly related to expenses and revenue. Meanwhile, the need to tolerate inevitable machine failures in large clusters and to provide various consistency guarantees adds algorithmic complexity and academic merit to this work.

§.§ Problem overview, goals and achieved results

Within the scope of this thesis we work in the computation model described in the paragraphs below and displayed in figure <ref>. The input is given as a stream of rows consisting of multiple partitions. Similar to Kafka <cit.>, each of the partitions constitutes a queue of rows. Producers can append rows to the end of these queues and consumers can read the partitions at their own pace. There are multiple services used throughout Yandex that efficiently support this interface. As in classic MapReduce, each of the partitions is read by a mapper worker executing a user-provided function. For each produced row, an index of the reducer worker that is supposed to handle it can be computed. We call this the shuffle function, which is required to be deterministic. Reducers execute a user-provided function, which will typically interact transactionally with some reliably stored internal state. Even though there are more expressive models and systems supporting large computation graphs, designing such a system was out of scope of this project. Conversely, we focused on an efficient and flexible implementation of the aforementioned map-to-reduce data exchange, usually called the shuffle stage, which is often a weak point of the larger products. More specifically, we worked to reduce write amplification and devise a solution with a low overhead on persistent storage, which is a somewhat novel approach in this field.
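To make this computation model concrete, the following toy sketch traces the intended data flow on a single machine: partitioned input queues, a user map function, a deterministic shuffle function assigning each produced row to a reducer, and reducers that apply their effects per batch. All names here are hypothetical illustrations rather than the actual system API, and fault tolerance is deliberately omitted.

from collections import defaultdict

def run_once(partitions, map_fn, shuffle_fn, reduce_fn, num_reducers):
    """Single-machine toy of the map -> shuffle -> reduce model (no fault tolerance).

    partitions: list of lists of input rows (one list per partition queue).
    map_fn: row -> list of produced rows.
    shuffle_fn: produced row -> reducer index; must be deterministic.
    reduce_fn: (reducer_index, batch of rows) -> None, applies effects per batch.
    """
    buckets = defaultdict(list)
    for partition in partitions:          # each partition is read by one "mapper"
        for row in partition:
            for produced in map_fn(row):
                buckets[shuffle_fn(produced) % num_reducers].append(produced)
    for reducer_index in range(num_reducers):
        reduce_fn(reducer_index, buckets[reducer_index])   # one batch per reducer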
Thus, the aim of this project was to design and implement the underlying infrastructure for a system operating within the interface described above and satisfying the following requirements: * Exactly-once semantics: the effect of processing each row should only be observed once, as part of a successful transaction commit to the reducer's internal state. * Fault-tolerance in regard to failures of any workers in the system. * The ability of healthy reducers to continue working successfully amidst failures of others. * The ability of the system to continue working successfully amidst slowdowns and failures of individual partitions. * General CPU and memory efficiency, including low write amplification factors. As a result, we have implemented a solution fulfills all of the above conditions, except being able to tolerate overly lengthy downtimes of reducers. Even so, we propose a design that would rectify this problem, along with other potential enhancements, in chapter <ref>. Our system can process gigabytes of streaming data per second and perform real-time analysis on it with sub-second latencies. § RELATED WORK As mentioned in the previous chapter, the work on real-time data processing systems has been very fruitful in the last ten years. Many open-source and commercial solutions are available on the market. Some of them, either prominent or specifically related to our planned approach, are described in more detail in the subsections below. Even though our approach is arguably quite novel, it is important to remember that due to the tremendous overheads and limitations of applying an existing open-source solution at a company as large as Yandex some similarities with existing systems are expected and even beneficial. We try to leverage the best ideas from state of the art systems with our own propositions and Yandex's unique infrastructure. §.§ MapReduce Google's MapReduce and its open-source implementation within Apache Hadoop are in no way equipped to handle streaming processing. However, their approach is at the foundation of our system and many others, so it is important to provide an overview of the paradigm and highlight its limitations. MapReduce handles input in the form of key-value pairs stored in a distributed file system as a large number of small-sized splits. During the map phase, workers each read a designated number of splits and execute a user-defined function on the collected key-value pairs. The produced results are partitioned by key and stored on local disks by the mappers. Each partition is assigned to a separate specific worker. These workers collect their corresponding partitions during the reduce phase, combine and sort them by key, and execute another user-provided function on the result. The output is written to a file in the distributed file system. It is ensured by the partitioning that pairs with the same key are handled by the same reducer. In case of failures, workers can be simply restarted. As with many other applications, this approach is heavily bounded by the latencies of I/O operations, such as reading and writing files to and from the distributed file system and local disks. It is also not easy to provide any reasonable consistency guarantees if reducers were to modify a global state or try to produce partial outputs. Thus, we take a different approach for delivering data between phases to accommodate for streaming processing needs, only borrowing the general model, which has proven to be adaptable to an abundance of diverse tasks. 
§.§ MapReduce Online MapReduce Online <cit.> is an attempt to specifically tackle the issues mentioned above to improve and expand the abilities of Hadoop MapReduce, including the option to run continuous jobs on a stream of data. The main enhancement comes in the form of pipelining. Instead of always writing data to disk, mappers now collect batches of key-value pairs and send them to the appropriate reducers before the whole input is mapped. To preserve fault-tolerance guarantees, these batches are still written to storage. However, they are typically retrieved soon after that, when they are presumably still resident in cache. Combined with periodic writes of reducer outputs to HDFS, an open-source implementation of the Google File System, this approach can be used to process data in a streaming fashion. Even so, the solution in question has many faults. It still persists data during both MapReduce phases, incurring high factors of write amplification. Moreover, it is not equipped to handle straggling reducers, which would cause unsent batches to build up, hindering the benefits of the approach above. It is also important to note that the delivery semantics of the proposed solution are unclear, and it does not seem to guarantee exactly-once processing without manual handling. We aim to build on the general idea of sending reasonably small batches of data from mappers to reducers as soon as possible and provide better write amplification factors and stronger processing guarantees. §.§ Apache Spark Streaming Spark Streaming <cit.> proposes the notion of discretized streams, which structure streaming computations as a series of stateless deterministic batch computations. The underlying units of these computations are stored in Resilient Distributed Datasets <cit.>, a data structure already utilized in regular Apache Spark <cit.>. RDDs are kept in memory and achieve fault-tolerance by storing the sequence of transformations that needs to be applied to the original data in order for the RDDs to reach their current state. In case of failure, these computations can simply be replayed. RDDs are internally stored as multiple partitions, which allows to perform data transformations and recovery in parallel across multiple nodes. Moreover, recovery can also be conducted independently across different RDDs. Spark Streaming supports map and reduce transformations among others and is well integrated with regular Spark batch processing. However, when performing reduce-like operations Spark employs a shuffle algorithm similar to the classic MapReduce implementation, collecting inputs for reducer tasks on disk, which is a big overhead and known weak point of the system. Additionally, while it is possible to achieve end-to-end exactly-once guarantees with Spark Streaming when using the Kafka Direct API for input, it requires manually implementing transactional outputs. While designing as large a system is out of the scope of this project, we capitalize on the idea of only persisting computation meta-state and keeping most of the handled data in memory, even during the shuffle operation. We also provide more out-of-the-box options for end-to-end exactly-once processing and atomic interactions with consecutive batches of data, which are heavily limited in Spark Streaming. §.§ Apache Storm Storm <cit.> is a distributed fault-tolerant real-time data processing system that was developed and open-sourced by Twitter. It allows to process streams of tuples flowing through computational graphs, called topologies. 
Topology nodes are subdivided into two categories: spouts, which represent data sources, and bolts, which represent processing operators. Storm supports parallelism in both of these kinds of tasks and is able to read from partitioned queues, such as Apache Kafka. Storm bolts can perform a variety of grouping operations, which essentially perform a shuffle and send data to different receiving instances of the same bolt. Internally, in-flight tuples are stored using in-memory queues and only sent to their corresponding receivers over the network without being persisted to disk, similar to the approach we take in our system. Storm seems to store its persistent state using Apache ZooKeeper <cit.>, even though it is not exactly clear from the paper cited above how exactly it is used to guarantee fault-tolerance. However, the biggest disadvantage of Storm comes with its weak message processing guarantees, only supporting either at-least-once or at-most-once semantics. Moreover, the mechanism to achieve at-least-once delivery is quite complex, implemented by tracking linage of each tuple and requiring the user to manually add these dependencies for a tuple and send acknowledgement events once it is processed. We use a similar tactic for performing shuffle operations without unnecessary write overheads, at the same time employing a novel approach for providing strong exactly-once processing guarantees without any additional implementation hassles for our system's clients. §.§ Apache Flink Flink <cit.> is a distributed fault-tolerant streaming and batch data processing system, with its creators and many committers employed at Ververica, formerly called data Artisans. It is one of the few systems, along with Spark Streaming, to provide a specialized API for both variations of data processing within one all-encompassing system. Internally, Flink express pipelines as dataflow graphs, consisting of sources, sinks and potentially stateful data operators, as well as data stream nodes that represent records produced by an operator or input source and available for consumption by other operators. Parallel execution is performed by splitting streams into multiple partitions and executing operators on each of them in different concurrent subtasks. This underlying representation is shared by both the batch DataSet API and the streaming DataStream API. As with many of the systems described, durable message queues like Apache Kafka are prominent and popular data sources for Flink. Flink provides several common processing abstractions that require the inputting data stream node to perform a shuffling operation between different partitions. In the case of batch processing, the shuffle is performed in a classical MapReduce fashion, with intermediate data designated for a certain partition being persisted to disk. For streaming processing, however, Flink utilizes a transient network-only shuffle approach. To guarantee fault-tolerance a unique method of checkpointing operator state to persistent storage like Apache Hadoop HDFS is implemented, called Asynchronous Barrier Snapshotting <cit.>. The novelty lies in the fact that for acyclic execution graphs no actual data records have to be stored in the persisted checkpoint. Flink data sources regularly insert special control barrier-records into streams, which trigger an asynchronous snapshotting operation when encountered by executing nodes. 
The system is then able to restart from these snapshots in case of failures and replay the processing with exactly-once guarantees. Our solution shares a lot of similarities with Flink, providing an efficient network-shuffle implementation that is also fault-tolerant and only commits external state updates exactly once per record. However, we take a somewhat different approach to achieving this, which makes our persisted state more compact, especially in cases of potential windowed aggregation, where Flink does end up storing in-flight records in the checkpointed state. We also utilize YT's own persistent store, which is more robust than HDFS. Additionally, our system provides a wider range of transactional interactions with the output state, whereas Flink can only guarantee that effects will be applied exactly once if a Kafka transactional producer is used as a sink.

§ ABOUT YT

The resulting effort is part of Yandex YT, which is the main driver behind all kinds of distributed systems at Yandex. In order to achieve the desired fault-tolerance and delivery semantics outlined in subsection <ref>, our algorithm takes advantage of a few other services offered by the YT ecosystem. In this subsection we provide some details about these products, which are necessary for understanding the proposed design. Most important for our case are YT's dynamic tables. They are architecturally similar to BigTable <cit.> and HBase <cit.> and guarantee fault-tolerance and consistency using Hydra, Yandex's original consensus protocol similar to Raft <cit.>. There are two types of dynamic tables offering different functionalities and interfaces. Ordered tables behave similarly to Kafka's topics, which were touched upon in subsection <ref>. These tables will be covered in more detail in later subsections. Sorted tables provide a typical row-based, strictly schematized storage supporting fine-grained reads and writes. Users can interact with these tables atomically by creating transactions, which can span multiple rows and both kinds of tables. Transactions are implemented using two-phase commits, similar to the approach in Google's Spanner <cit.>. Additionally, we use Cypress, a filesystem-like metainformation store, which can also keep an attribute mapping in its nodes and supports transactions and locks. This allows it to be used similarly to Apache ZooKeeper. Internally it utilizes the same Hydra consensus algorithm mentioned above.

§ SYSTEM ARCHITECTURE

In very general terms, a single streaming task, which we call a streaming processor, consists of endlessly running mapper and reducer jobs. Mappers read their corresponding partitions and keep a rolling window of mapped rows in memory. These rows are split into small batches and re-read and re-mapped in case of failure. Mappers also compute the shuffle function for every row and store the results alongside these batches. Reducers, in turn, pull the corresponding rows from the mappers and process these rows using the specified function. The user-provided code can open a transaction while processing a batch of rows and modify a dynamic table of its choice. The system will then commit the required internal meta-state changes in the same transaction, guaranteeing that the effect of processing a batch of rows is applied exactly once. Mappers update their own persistent meta-state and move their window forward once rows are successfully processed by reducers. In the following subsections we will delve deeper into the design and components of the proposed solution.
§.§ User API

The whole system operates within a schematized key-value row-based data model, encapsulated in the class. It is stored as an array of strictly-typed data values, with a separate object used to map the array's indexes to the corresponding key strings. An object stores an array of objects along with a instance. This is the main abstraction users can interact with. To run their own streaming processor, users have to provide C++ implementations of the two following interfaces.

§.§.§ Mapper

struct PartitionedRowset
{
    UnversionedRowsetPtr Rowset;
    std::vector<int> PartitionIndexes;
};

class IMapper
{
public:
    virtual PartitionedRowset Map(UnversionedRowsetPtr rows) = 0;
};

IMapperPtr CreateMapper(
    INodePtr configNode,
    IClientPtr client,
    TableSchemaPtr schema,
    MapperSpecPtr spec);

The function receives a batch of rows and has to return a new, possibly empty, object along with a vector of the same size, indicating to which reducer each produced row should be sent. The returned batch can have a different schema and contain more or fewer rows than the input; in other words, it represents a one-to-many mapping for each single input row. The function must be deterministic, otherwise exactly-once processing cannot be guaranteed. The function should create an instance of the user's derived class, given the user's own specified configuration (see subsection <ref>), a YT client that can be used to perform operations with other YT components, the schema of the input, as well as the specification of this mapper within the streaming processor. The last of these parameters contains the number of reducers, which is often useful for the user's partitioning logic.

§.§.§ Reducer

class IReducer
{
public:
    virtual ITransactionPtr Reduce(UnversionedRowsetPtr rowset) = 0;
};

IReducerPtr CreateReducer(
    INodePtr configNode,
    IClientPtr client,
    ReducerSpecPtr spec);

The function receives a batch of rows designated to this reducer and can perform arbitrary user-defined processing. If the processing includes modifications of YT tables, the user can start a new transaction, act upon some table rows and return this transaction without committing. The reducer instance will modify the internal meta-state in the returned transaction and then try to commit both changesets atomically, guaranteeing exactly-once semantics for reducer processing. The user may also return null, in which case the reducer instance will start a transaction itself. More details on this will follow in subsections <ref> and <ref>. The function should create an instance of the user's derived class, given the user's own specified configuration (see subsection <ref>), a YT client that can be used to start transactions and perform operations with other YT components, as well as the specification of this reducer within the streaming processor.

§.§ Input model

As mentioned earlier, our system accepts inputs presented in a Kafka-like fashion as multiple queues (partitions) organized into one stream (topic), which are assumed to be stored reliably. To be more exact, a viable input source needs to implement two methods of the interface, objects of which are responsible for retrieving the data from a single input partition. The first method, , takes two integers , and a more freely defined as parameters and should return the next batch of rows, along with a continuation token pointing to the next position in the stream. The parameter indicates a starting position in the input partition, from which this batch of rows should be read.
The returned rows will be assigned indexes starting with in the corresponding mapper's input numbering. Thus it is essential that this method returns rows in a deterministic order, otherwise there is no way for our system to guarantee any reasonable delivery semantics. The parameter serves as a hint on the number of rows to read. The second method, , takes an integer and the aforementioned as parameters and should, in some way, mark entries before the or with index lower than as committed and thus safe to delete. Logically, this method must be idempotent. It is also allowed to perform this action asynchronously at some later time. The can be of any serializable type specific to the input source, however, it must be noted that it will be stored within the mapper's persistent state. Currently, our system supports the following two internal data delivery services: * Reading from an ordered dynamic table. It is internally divided into queue-like partitions called tablets. Each tablet is indexed from zero in an absolute fashion and can be read from and trimmed using these indexes. This makes it easy to use with the arguments in the methods above. * Reading from a LogBroker topic. It is internally divided into partitions. These partitions have their own offsets, which increase monotonically, but are not guaranteed to be sequential. Thus, it is necessary to use the argument to specify the next offset to read from in each cluster. The complexity of this design is largely due to the order in which the input sources listed above were supported. It will be somewhat remodeled and simplified in the future. §.§ Mapper workflow Below we will lay out the runtime of a single mapper instance. §.§.§ Internal state Mappers maintain two kind of absolute numberings that increase sequentially as the streaming processor is working: * The input numbering pertains to rows read by the mapper's partition reader instance. * The shuffle numbering pertains to rows produced by the user-provided function applied to the rows above. The following entities stored by a mapper instance are vital to the execution of our proposed algorithm: * An instance of the interface, described in subsection <ref>. It encapsulates all interactions with the input stream. * A queue of objects, which hold information about batches of read and mapped rows. These entries are indexed sequentially within the lifetime of the instance, thus we also store the absolute index of the first entry in the queue. * An array of objects, one for every reducer, which hold a queue of shuffle row indexes that will need to be shipped to said reducer, along with the window entry index in which the first of these rows is to be found. * , a local copy of the persistent state (see <ref>), which serves as a lower bound on yet unread row indexes. * , the current version of the persistent state as seen by this mapper. Each window entry also stores a bucket pointer count, which tallies the number of buckets for which this entry holds the first row in their queue. §.§.§ Persistent state The persistent state is stored in a sorted dynamic table shared by all mapper instances. Mappers are indexed starting from 0, and every mapper knows its index from its configuration, which will be later discussed in subsection <ref>. Each mapper only interacts with its single corresponding row of the table and doesn't interfere with other running mappers. The state table contains the following columns: * : the key column. 
* : the index in the input numbering of the first row that was not yet successfully processed and committed by its corresponding reducer.
* : same as above, but in regards to the shuffle numbering.
* : same as above, but in terms of the partition reader's continuation token.

This state is used to guarantee consistency and exactly-once semantics in case of failures, which will be discussed in more detail in subsection <ref>.

§.§.§ Input ingestion procedure

This procedure starts as soon as the mapper is alive. Initially, it fetches its corresponding row from the state table. As mentioned in <ref>, a local copy is stored in both the and field. This state is then read into variables , and . Afterwards, the following cycle is repeated continuously while the instance is working:
* Wait for a configuration-defined amount of time if the previous iteration of this cycle didn't finish with appending a non-empty batch of rows to the internal state.
* Wait for the next batch of rows from the mapper's partition reader instance.
* Fetch the current remote persistent state, skipping to the next iteration in case of errors. If the result differs from the state stored in , we are in a split-brain situation and the mapper waits out a configurable delay, after which the internal state is dropped and the whole input ingestion procedure described here in <ref> is restarted.
* If the batch is empty, skip to the next iteration of the cycle. Otherwise, the resulting rows now have sequential indexes starting with .
* Run the user-provided function on the batch of rows and build a instance, which contains the returned rowset, the corresponding index ranges in both numberings, the continuation token returned from the partition reader, as well as additional information mentioned in <ref>.
* Increment the memory usage semaphore and push the built window entry to the mapper's . Iterate over the mapped rows and push their (shuffle) indexes to the corresponding reducer buckets, incrementing the entry's bucket pointer count when adding the first element to a bucket. In the latter case, the bucket's first window entry index is set to the index of the current entry.
* Update the and variables accordingly.
* If the memory limit is exceeded, block on the semaphore, waiting for the usage to be below the threshold again.

§.§.§ RPC methods

Concurrently with the actions described above, each mapper responds to remote procedure calls from reducers. This method is described by the following protobuf schema:

message TReqGetRows
{
    optional int64 count = 1;
    optional int64 reducer_index = 2;
    optional int64 committed_row_index = 3;
    optional string mapper_id = 4;
}

message TRspGetRows
{
    optional int64 row_count = 1;
    optional int64 last_shuffle_row_index = 2;
}

With this call the reducer with index reducer_index requests count of its assigned rows. The committed_row_index parameter denotes the shuffle index of the last row successfully processed and committed by this reducer. The mapper_id parameter is used to discard incorrect requests due to stale discovery information (see subsection <ref>). The actual rows are returned as attachments in a binary format. As a result, row_count rows are returned, with last_shuffle_row_index indicating the shuffle index of the last of these rows. The latter response field is needed because the rows assigned to each reducer don't necessarily have sequential indexes. The number of returned rows can be smaller than the requested count or even zero.
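The mapper-side bookkeeping described above (window entries, per-reducer buckets, and bucket pointer counts) can be pictured with the following simplified in-memory model. The names are hypothetical and the sketch omits locking, the memory semaphore, and RPC; it only mirrors the push step of the ingestion cycle.

from collections import deque
from dataclasses import dataclass, field

@dataclass
class WindowEntry:
    rows: list                     # mapped rows of this batch
    first_shuffle_index: int       # absolute shuffle index of rows[0]
    continuation_token: object     # reader position after this batch
    bucket_pointer_count: int = 0  # buckets whose queue starts in this entry

@dataclass
class ReducerBucket:
    pending_shuffle_indexes: deque = field(default_factory=deque)
    first_window_entry_index: int = 0

class MapperWindow:
    """Rolling window of mapped batches plus per-reducer shuffle-index queues."""

    def __init__(self, num_reducers):
        self.entries = deque()         # queue of WindowEntry objects
        self.first_entry_index = 0     # absolute index of entries[0]
        self.buckets = [ReducerBucket() for _ in range(num_reducers)]

    def push_batch(self, entry, shuffle_targets):
        # shuffle_targets: list of (shuffle_index, reducer_index) pairs for entry.rows
        entry_index = self.first_entry_index + len(self.entries)
        self.entries.append(entry)
        for shuffle_index, reducer_index in shuffle_targets:
            bucket = self.buckets[reducer_index]
            if not bucket.pending_shuffle_indexes:
                # This entry now holds the first row of the bucket's queue.
                bucket.first_window_entry_index = entry_index
                entry.bucket_pointer_count += 1
            bucket.pending_shuffle_indexes.append(shuffle_index)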
The execution of this procedure by the mapper boils down to the following steps: * If differs from the mapper's own GUID return an error response. * Pop from the reducer's corresponding while the first index in the queue is less than or equal to . Iterate across the beginning of the window entry queue to update the bucket's first window entry index and the windows' bucket pointer counts as necessary. * Schedule trimming operations if necessary, see <ref>. * Serialize rows, or as many as are available, from the beginning of the bucket's queue and return them as attachments along with the appropriate response fields. It is important to note that these rows are not deleted from the queue. §.§.§ Trimming In order to make progress a mapper needs to free up memory used by rows that were successfully processed by their corresponding reducers. This is done by the method, which pops window entries with bucket pointer counts equal to zero from the front of the window queue and increments the absolute index of the first window in the queue accordingly. The after-the-end indexes and continuation token of the last popped window entry, if there was one, are used to update the field. To support end-to-end exactly-once scenarios a mapper has to manually ensure that its input partition moves along as rows are processed by reducers, which also requires updating its persistent state so that the mapper doesn't try to read already-deleted rows. Timely updating persistent state is also necessary to reduce the number of already-processed rows that will be reread by the mapper when it restarts after a possible failure. This is implemented by the method. It opens a dynamic table transaction and fetches the current committed persistent state in it. If it is equal to the state stored in and is further along than the committed state, the method tries to update the remote state with within the same transaction. If the transaction was committed successfully, the method updates with the committed result and calls on the partition reader (see subsection <ref>), passing the input index and continuation token from the local state as arguments. By using the field we were able to separate trimming actions into two methods that can be executed independently, which allows for a more efficient asynchronous implementation. We call the first method when a call causes a bucket pointer count to become zero. We schedule the second method, which is more costly due to its transactional interactions with dynamic tables, to be called regularly with a configuration-defined period, usually on the order of a few seconds. §.§ Reducer workflow Below we will describe the runtime of a single reducer instance, which is arguably more straightforward than the life of the mappers. §.§.§ Persistent state The persistent state is stored in a sorted dynamic table shared by all reducer instances. Reducers are indexed starting from 0, and every reducer knows its index from its configuration, which will be later discussed in subsection <ref>. Each reducer only interacts with its single corresponding row of the table and doesn't interfere with other running reducers. The state table contains the following columns: * : the key column. * : a list of shuffle row indices, one for each mapper, indicating that all rows up to said index were reliably processed by the reducer. This state is used to guarantee consistency and exactly-once semantics in case of failures, which will be discussed in more details in subsection <ref>. 
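The exactly-once guarantee ultimately rests on committing the user's table modifications and the reducer's committed-row indices in one dynamic-table transaction, which is what the main procedure of the next subsection does. The sketch below shows this pattern against a hypothetical transactional client; the method names (start_transaction, lookup_row, write_row, get_rows) are illustrative assumptions, not the real YT or GetRows API.

def reduce_and_commit(client, reducer_index, mappers, user_reduce_fn, state_table):
    """One iteration of exactly-once reduce, sketched with a hypothetical client API."""
    old_state = client.lookup_row(state_table, key=reducer_index)  # committed indices per mapper
    batch, new_state = [], list(old_state)
    for mapper_index, mapper in enumerate(mappers):
        rows, last_index = mapper.get_rows(
            reducer_index=reducer_index,
            committed_row_index=old_state[mapper_index])
        if rows:
            batch.extend(rows)
            new_state[mapper_index] = last_index

    if not batch:
        return

    tx = user_reduce_fn(batch)               # the user may return an open transaction
    tx = tx or client.start_transaction()
    # Abort on split-brain: someone else advanced this reducer's state concurrently.
    if client.lookup_row(state_table, key=reducer_index, tx=tx) != old_state:
        tx.abort()
        return
    client.write_row(state_table, key=reducer_index, value=new_state, tx=tx)
    tx.commit()                               # user effects + meta-state committed atomically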
§.§.§ Main procedure

This procedure starts as soon as the reducer is alive, and performs the following cycle continuously while the instance is working:
* Wait for a configuration-defined amount of time if the previous iteration of this cycle didn't finish with a successful persistent state update.
* Fetch the current persistent state into .
* Fetch a list of mappers from discovery (see subsection <ref>). Build asynchronous RPC requests to these mappers, one per mapper index, passing the corresponding value from as . Wait for all of these requests to complete. Only one request per mapper index is made.
* Create as a copy of with each array element set to the value returned by the corresponding mapper. If a mapper with a certain index returned an empty batch of rows, returned an error or was missing in discovery and wasn't polled, its entry is left unchanged. If all variables are equal to zero, the next steps are skipped.
* Deserialize the attachments in the RPC responses to the requests above into rows and run the user-provided function on all of these rows combined into one batch.
* If the call returned null (see subsection <ref>), start a new transaction to commit updates to the persistent state. Otherwise, the actions in the following steps are performed within the transaction returned by .
* Fetch the persistent state again within the transaction. If it differs from , we are in a split-brain situation and skip to the next iteration of the cycle without committing anything.
* Write to the reducer's corresponding state row and try to commit the resulting transaction.

Some of these steps can produce errors, such as a failed state fetch or transaction commit. If that happens, we just skip forward to the next iteration and wait out the back-off delay in step 1.

§.§ Configuration, discovery and control

The system is configured using YT's own JSON-like format, called YSON. There are quite a few parameters of the algorithm described above which can be tweaked by the user, which, however, are too minor to be discussed in this paper. Additionally, users can define their own similar configuration classes, which they can use to specify parameters for their own mapper and reducer implementations. Each mapper and reducer is also passed a separate system-generated specification file which contains the GUID of the streaming processor, the path of the corresponding state table, the worker's index and GUID, as well as the number of reducers or mappers respectively. We utilize an existing YT component for performing discovery, which is required for reducers to be able to resolve the mappers' addresses. Internally, it uses Cypress, described in chapter <ref>. Participants of a discovery group create and take a lock on key-named nodes in a shared Cypress directory, storing any necessary information in the node's attributes. The directory's name, therefore, represents the name of this discovery group. Other clients can fetch a list of nodes in this directory and retrieve the relevant attributes. Mappers all join the same discovery group, providing their GUIDs as keys, and store their address, RPC port and index as node attributes. Reducers also join a similar group, sharing only their index. It is important to understand that in case of failures, or even on startup, the information in these discovery groups can be stale and take some time to update. For example, a failed mapper and its newly-alive replacement could temporarily both appear in discovery.
Thus, we have to take the additional precautions in the main reducer procedure which were described in <ref>. The whole streaming processor is executed as a YT “vanilla” operation, which allows running user-specified binaries on a number of nodes, automatically restarting them in case of failures. Currently, there is a manual script that sets up such an operation given the appropriate configuration files. In the future, however, a controller currently under development will be used to start and monitor a streaming processor, correctly restarting the whole operation if it spuriously fails. §.§ Fault tolerance and exactly-once delivery When examining our system we assume that any worker can fail spontaneously. Moreover, since failed workers are automatically restarted (see subsection <ref>), we can temporarily end up with multiple instances of the same mapper or reducer if network partitions occur, producing a so-called split-brain scenario. The proposed solution can maintain correctness and exactly-once delivery semantics in the scenarios described above. This is guaranteed by the following simple conclusions: * A mapper's state is only advanced past a row once all of the rows that were produced from it by the function have been successfully processed by their designated reducers. * An input row is only trimmed once the corresponding mapper's state has advanced past it. Thus, rows will not be trimmed unless they were at least once. * A produced row is only sent to its designated reducer if the corresponding mapper's state was not modified by some other worker while the originating input row was being read. This, along with the function being deterministic, ensures that rows receive the same input and shuffle indices even in split-brain scenarios. * Reducers always process rows and modify their persistent state atomically, thus a row is guaranteed to be processed at most once even in split-brain scenarios. Even though we mostly talk about split-brain scenarios in the points above, simple failures are essentially a subset of these scenarios and usually more straightforward. If a mapper or reducer is restarted it loses some progress, but correctness is maintained for the same reasons as mentioned above. Besides being able to handle failures correctly, another important aspect is that healthy workers still make progress even when others have failed. In our current solution this is true for mappers: a failed or unavailable mapper or input partition is simply ignored by reducers until it comes back online, so the streaming processor is not hindered at all. When reducers fail, however, the mappers won't be able to trim rows designated to the failed reducer, causing their row windows to build up and hit the memory limit. This would eventually cause the whole processor to stagnate. An improvement able to overcome this problem has already been designed and is described in chapter <ref>. § EVALUATION As a result of this thesis we implemented the design described above within YT. The core of the system itself was written in C++, and Python was used for tests. The current implementation contains more than 6000 lines of code, about 4000 of which are in C++. In the subsections below we describe how we have tested our solution to assess its correctness, performance and practicality. §.§ Local integration tests We check the system's correct behaviour in various testing scenarios by using a Python environment that sets up its own small local YT cluster and LogBroker installation. 
To allow for more intricate checks, we implemented mappers and reducers that interpret control strings within the stream being processed and either halt their execution for a specified amount of time or use Cypress nodes to halt and wait for an external signal to continue. Altogether, this allowed us to write integration tests that verify the correctness of the intermediate state during and after simulated job failures and automatic restarts. §.§ Performance testing To analyze performance we deployed a streaming processor on a production-like testing cluster to perform somewhat realistic analysis on real-time high-velocity stream of data. We chose to experiment with LogBroker as an input provider since it is the more widespread data delivery solution at Yandex. Our input source was a topic fed by logs from YT's replicated master nodes, which are responsible for almost all crucial internal YT coordination within a single cluster. The topic in question has 90 partitions, each actually representing 5 distinct partitions across different clusters, which makes a total of 450 independent unique input partitions. The write rate to the topic is steady at around 3.5 gigabytes of uncompressed data per second, with messages consisting of batched and joined master node log entries. Since every source partition has to be read by a designated mapper, our processor consisted of 450 mapper jobs which were pulling decompressed messages from the LogBroker topic. The mappers' implementation split each read message back into individual log messages. These messages were then parsed and hash partitioned by their respective user and cluster fields. Log messages that didn't have a user field were simply ignored, which eliminated around 80 to 90 percent of all messages. The remainder was processed by 10 reducer workers, which grouped messages by user and cluster, writing the timestamp of the user's last access to the cluster and a tally of the number of corresponding messages in the batch to a sorted dynamic table shared by all reducers. Since many of the incoming messages were dropped by mappers, the resulting input flow into the reducers was around 400 megabytes per second. This last number might not seem too big, but it must be noted that our setup shares a lot of characteristics and caveats with real-life processing tasks: the write rate into individual partitions varies with time and even more across clusters, mappers perform significant filtering work and the distribution of keys is uneven, with and a few other system users appearing in overwhelmingly more messages than regular users. As can be seen in figure <ref> above, reducers are able to process up to around 95 megabytes per second. The maximum input ingestion speed by reducers is the relevant metric here since we know that the data is quite uneven, causing the most loaded reducers to become bottlenecks for the whole processor. Another important metric to watch is the read lag, defined as the elapsed time between a message being written to the LogBroker topic and the moment when the message was read by the corresponding mapper. In our experiment mappers were able to work with a steady read lag of a few hundred milliseconds, which can be seen for ten mappers in figure <ref> on the next page. We chose these mappers evenly across partitions from different clusters since the graph with all 450 mappers is completely illegible. Nonetheless, the maximum average read lag for all mappers is about 400 milliseconds. 
To test how well the system recuperates from failures, we have enacted a few manual failure scenarios on the streaming processor discussed above. First, we paused a single mapper for around 10 minutes and killed it at the end of this period, allowing the controller to restart the job. As expected, this didn't cause any reducers to slow down. The important metric here was how fast this mapper could catch up with the stream. As it can be seen from figure <ref> on the next page, the read lag dropped to the same level as it was before the failure in around 15 seconds. This was made possible by the mapper's internal buffer, which temporarily grew to around 1.5 gigabytes out of its 8 gigabyte memory limit, as illustrated in figure <ref>. It took around 15 minutes for the mapper to shrink its buffer back to its pre-failure state. In a production scenario one would use more reducers so that their peak processing rate is more substantially higher than the input topic's throughput. The second scenario we tested was a 10 minute pause and later failure of a single reducer. As discussed previously, our system is not yet well-equipped to handle these kind of downtimes efficiently. Currently, a single unavailable reducer almost completely prevents mappers from trimming their internal buffers. A possible remedy for this drawback is outlined in section <ref>. Nonetheless, it is important to assess the current effect reducer halts have on a streaming processor. Predictably, the mappers' buffers grew while the reducer was out, as can be seen in figure <ref>. Again, we only present results from 10 evenly selected mappers so that the graph is legible. Once the reducer was back online the mappers quickly recuperated and their windows began shrinking back to their previous sizes in a matter of minutes. Thanks to the mappers' buffers, no other performance metrics were impacted during this test. From these scenarios it is clear that our system does, in fact, sustain real worker failures and downtimes while maintaining solid processing throughput. The next logical step would have been to test it within a real production workflow, which, sadly, didn't end up fitting in our timeline. § FUTURE WORK There are many directions in which this work can be continued, and some features that will likely be implemented soon. In this chapter we also want to outline the potential designs for some of the proposed functionality, as the ideas behind them are sophisticated enough to be worth mentioning. Most importantly, to deal with straggling workers, mappers will flush batches and advance their windows when most, but not necessarily all, reducers have processed the rows in these batches. When that happens, rows that are still needed by some reducers will be spilled to a designated table. By configuring thresholds in this approach we will be able to leverage low write amplification factors with sufficient straggler tolerance. Another goal is to allow a single mapper to read multiple input partitions. It would enable the system to use fewer resources when an input topic has many low-throughput partitions, which causes mappers to be underutilized. The challenge lies in the fact that the order in which data is delivered from distinct partitions is not deterministic. Two batches of rows might be read from two different partitions in one sequence, partially sent off and processed by reducers and then reread in the reverse order if the mapper fails. 
This would inevitably cause some rows to be lost and others to be processed more than once. To overcome this issue, mappers will read data in one of two modes. In the advancing mode a mapper will collect data from its multiple assigned partitions and persist the order and size of the received batches to a tablet of an ordered dynamic table. In the catch up mode a mapper will read rows from this tablet and wait to receive the same amount of rows from the corresponding partitions, returning them in exactly the same order. The latter mode will be used when a mapper finds that its state is behind the offsets stored in its designated tablet. Altogether, this would allow us to guarantee that data from multiple partitions will be received by a mapper in a deterministic order, solving the issue in question. A further limitation of the current system is the reducer interface, which permits working transactionally with only one batch of rows at a time. In this model one cannot perform windowed aggregation while maintaining true exactly-once guarantees. We would like to move to a persistent queue interface in which users can request batches of rows as needed, carry out their desired computations and invoke a commit method on batches that have been successfully processed. This method would, naturally, allow a transaction to be passed along, which would be used to update the reducer's persistent state and commit the user's processing side-effects atomically. To improve processing efficiency it is also possible to modify both the mapper and reducer workflows to run in a pipelined fashion. For example, a single cycle of the reducer's main procedure can be subdivided into three consecutive stages: fetch, process (combine row batches and run ) and commit. Thus, we can perform stages within different cycles concurrently, as long as executions of each individual stage are well-ordered. This is a generalization of instruction pipelining utilized in modern processors. Continuing with client-side features, not all tasks demand strict exactly-once guarantees. For example, jobs calculating generic statistics on a stream of data can usually handle minor losses of rows by reducers and small amounts of data duplication. Thus, we could also want to provide the ability to lift some of the requirements in favor of better processing times. There is also a long way to go on the usability front. Improvements could include providing an easier way of dealing with s and implementing some common functionality, such as hash partitioning, within designated base classes. In the long run, our design has the theoretical ability to support snapshotting the state of the system and restarting from said snapshots. It could also be integrated more deeply into a solution which would allow users to run streaming pipelines consisting of several streaming processors. § CONCLUSION Streaming data processing problems arise regularly at IT companies of all scales. It is a vast field of work, with many different solutions available in local and cloud environments. However, there is a lack of suitable systems at Yandex internally. Open-source solutions are usually locked to corresponding underlying infrastructures and cannot be adapted without tremendous overheads. As a result, we have used Yandex's native distributed infrastructure to build an efficient distributed fault-tolerant streaming processing system that achieves low write amplification factors and provides end-to-end exactly-once guarantees. [heading=bibintoc]
http://arxiv.org/abs/2306.02031v1
20230603071748
DOS: Diverse Outlier Sampling for Out-of-Distribution Detection
[ "Wenyu Jiang", "Hao Cheng", "Mingcai Chen", "Chongjun Wang", "Hongxin Wei" ]
cs.LG
[ "cs.LG" ]
Modern neural networks are known to give overconfident predictions for out-of-distribution inputs when deployed in the open world. It is common practice to leverage a surrogate outlier dataset to regularize the model during training, and recent studies emphasize the role of uncertainty in designing the sampling strategy for the outlier dataset. However, the OOD samples selected solely based on predictive uncertainty can be biased towards certain types, which may fail to capture the full outlier distribution. In this work, we empirically show that diversity is critical in sampling outliers for OOD detection performance. Motivated by this observation, we propose a straightforward and novel sampling strategy named DOS (Diverse Outlier Sampling) to select diverse and informative outliers. Specifically, we cluster the normalized features at each iteration, and the most informative outlier from each cluster is selected for model training with absent category loss. With DOS, the sampled outliers efficiently shape a globally compact decision boundary between ID and OOD data. Extensive experiments demonstrate the superiority of DOS, reducing the average FPR95 by up to 25.79% on CIFAR-100 with TI-300K. § INTRODUCTION Modern machine learning systems deployed in the open world often fail silently when encountering out-of-distribution (OOD) inputs <cit.> – an unknown distribution different from the in-distribution (ID) training data, whose samples should therefore not be predicted with high confidence. A reliable classifier should not only accurately classify known ID samples, but also identify as "unknown" any OOD input. This emphasizes the importance of OOD detection, which determines whether an input is ID or OOD and allows the model to raise an alert for safe handling. To alleviate this issue, it is popular to assume access to a large auxiliary OOD dataset during training. A series of methods have been proposed to regularize the model to produce lower confidence <cit.> or higher energy <cit.> on the randomly selected data from the auxiliary dataset. Despite the superior performance over those methods without auxiliary OOD training data, the random sampling strategy yields a large portion of uninformative outliers that do not benefit the differentiation of ID and OOD data <cit.>, as shown in Figures <ref> & <ref>. To efficiently utilize the auxiliary OOD training dataset, recent works <cit.> design greedy sampling strategies that select hard negative examples, i.e., outliers with the lowest predictive uncertainty. Their intuition is that incorporating hard negative examples may result in a more stringent decision boundary, thereby improving the detection of OOD instances. However, the OOD samples selected solely based on uncertainty can be biased towards certain classes or domains, which may fail to capture the full distribution of the auxiliary OOD dataset. As shown in Figure <ref>, the concentration of sampled outliers in specific regions will result in imbalanced OOD detection performance across the feature space (see Section <ref> for more details). 
This motivates us to explore the importance of diversity in designing sampling strategies. In this work, we empirically show that diversity is critical in designing sampling strategies, by the observation that outlier subset comprising data from more clusters results in better OOD detection performance. It is noteworthy that the diverse outlier pool without considering the cost of development <cit.> might not directly transfer to the outlier subset, due to the deficient sampling strategy. Therefore, the sampling strategy should improve the diversity of selected hard negative samples, for a globally compact decision boundary as shown in Figure <ref>. Specifically, we propose a straightforward and novel sampling strategy named DOS (Diverse Outlier Sampling), which first clusters the candidate OOD samples, and then selects the most informative outlier from each cluster, without dependency on external label information or pre-trained model. For efficient and diverse clustering, we utilize the normalized latent representation in each iteration with K-means algorithm <cit.>. Trained with absent category loss, the most informative outlier can be selected from each cluster based on the absent category probability. In this way, diverse and informative outlier subset efficiently unlocks the potential of auxiliary OOD training dataset. To verify the efficacy of our sampling strategy, we conduct extensive experiments on common and large-scale OOD detection benchmarks, including CIFAR-100 <cit.> and ImageNet-1K <cit.> datasets. To the best of our knowledge, we are the first to evaluate outlier exposure methods under large-scale scenario. Empirical results show that our method establishes state-of-the-art performance over existing methods for OOD detection. For example, using CIFAR-100 dataset as ID and a limited TI-300K <cit.> as auxiliary OOD training dataset, our approach reduces the FPR95 averaged over a comprehensive collection of OOD test datasets from 50.15% to 24.36% – a 25.79% improvement over the competitive outlier exposure method NTOM using greedy sampling strategy <cit.>. Moreover, we show that our sampling strategy keeps consistent superiority over other sampling strategies across different auxiliary OOD training dataset and regularization term, such as energy loss <cit.>. Additionally, we contrast with alternative representations and varying number of clustering centers, which demonstrate the importance of normalized feature and diversity. Overall, the proposed sampling strategy DOS further boosts the performance of outlier exposure methods while maintaining the classification accuracy on ID data. Obviously, diverse, clean, and overwhelming auxiliary outliers would be extremely hard to develop in real-world scenarios, which hinders application of outlier exposure methods for OOD detection. However, our method shows superiority with a much limited outlier pool, and thus can be easily adopted in practice. § PRELIMINARIES §.§ Background Setup. In this paper, we consider the setting of supervised multi-class image classification. Let 𝒳 = ℝ^d denote the input space and 𝒴 = {1,...,K} denote the corresponding label space. The training dataset 𝒟^train_in = {(x_i, y_i)}^N_i=1 is drawn i.i.d from the joint data distribution ℙ_𝒳×𝒴. We use ℙ_𝒳^in to denote the marginal probability distribution on 𝒳, which represents the in-distribution (ID). Given the training dataset, we learn a classifier f_θ : 𝒳↦ℝ^|𝒴| with learnable parameter θ∈ℝ^p, to correctly predict label y of input x. 
Let z denote the intermediate feature of x from f_θ. Problem statement. During the deployment stage, the classifier in the wild can encounter inputs from unknown distribution, whose label set has no intersection with 𝒴. We term the unknown distribution out-of-distribution (OOD), denoted by ℙ_𝒳^out over 𝒳. The OOD detection task can be formulated as a binary-classification problem: determining whether an input x is from ℙ_𝒳^in or not (ℙ_𝒳^out). OOD detection can be performed by a level-set estimation: g(x)= in, if S(x) ≥τ out, if S(x) < τ where S(x) denotes a scoring function and τ is a threshold, which is commonly chosen so that a high fraction (e.g., 95%) of ID data is correctly distinguished. By convention, samples with higher scores are classified as ID and vice versa. Auxiliary OOD training dataset. To make the classifier distinguish ID from OOD data, it is popular to assume access to an auxiliary unlabeled OOD training dataset 𝒟^aux_out = {x}^M_i=1 from ℙ_𝒳^𝕠𝕦𝕥 at training stage (M ≫ N). In particular, the auxiliary dataset 𝒟^aux_out is typically selected independently of the specific test-time OOD datasets denoted by 𝒟^test_out. Several methods <cit.> propose to regularize the classifier to produce lower score on the randomly selected outliers from 𝒟^aux_out; Formally, the objective can be formulated as follows: ℒ = 𝔼_(x, y) ∼𝒟_in^train[ℒ(f(x), y)+λ𝔼_x∼𝒟_out^aux[ℒ_OE(f(x), y)]] However, the random sampling strategy yields a large portion of uninformative outliers that do not benefit the OOD detection <cit.>. Recent works <cit.> further design greedy strategies to sample outliers with the lowest predictive uncertainty, and thus resulting in a more stringent decision boundary. For terminology clarity, we refer to training-time OOD data as outlier and exclusively use OOD data to refer to test-time unknown inputs. Despite the superior performance of greedy strategies over those methods without auxiliary OOD training data, the OOD samples selected solely based on uncertainty can be biased towards certain classes or domains, which may fail to capture the full distribution of the auxiliary OOD dataset. In the following section, we empirically show the bias of the greedy sampling strategy, and reveal the importance of diversity in designing sampling strategies. §.§ Motivation To demonstrate the inherent bias of the greedy sampling strategy, we divide the auxiliary dataset into multiple groups based on semantic information. In particular, we adopt K-means <cit.> method to group similar outliers with their intermediate features, extracted by a pre-trained model. For the greedy sampling, we simply select outliers with the highest predictive confidence following ATOM <cit.>. For comparison, we provide two additional sampling strategies: uniform sampling that uniformly samples outliers from different groups and biased sampling that selects outliers from only a group. We construct three subsets with the same size using the three sampling strategies, respectively. In this part, we perform standard training with DenseNet-101 <cit.>, using CIFAR-100 <cit.> as ID dataset and TI-300K <cit.> as outlier pool. For evaluation, we use the commonly used six OOD test datasets (see <ref> for more details). To extract features for clustering, we use the pretrained WRN-40-2 <cit.> model <cit.>. For the clustering, we set the number of clusters as 6. The sampling bias of greedy strategy. Figure <ref> presents the clustering label distribution of outliers sampled by the greedy and uniform strategies. 
The x-axis denotes the ID of different clusters. The results show that the greedy strategy leads to a biased sampling, which exhibits an imbalanced distribution of outliers over the six clusters. For example, the number of outliers from the cluster C1 is nearly twice that of the cluster C6. With imbalanced distribution, the biased outliers from greedy sampling may fail to capture the full distribution of the auxiliary OOD training dataset, which degrades the performance of OOD detection. The importance of diversity in designing sampling strategies. To verify the effect of diversity in outlier sampling, we compare the OOD detection performance of models trained with the biased and uniform strategies, presenting in Figure <ref>. Here, we use the inverse of absent category probability as scoring function. Recall that the biased strategy is an extreme example that select outliers from only a cluster, the uniform strategy maximizes the diversity by uniformly selecting outliers from the six clusters. The results show that the uniform strategy with max diversity achieves much lower FPR95 than the biased strategy, which demonstrates the critical role of diversity in sampling strategies. To understand how the diversity of outlier affects OOD detection performance, we compare the score distribution of the biased and uniform strategies in Figure <ref>. We can observe that the biased sampling produce more OOD examples with high scores that are close to ID examples, making it challenging to differentiate the ID and OOD examples. This phenomenon aligns with the locally compact decision boundary shown in Figure <ref>. In contrast, diverse outliers selected by the uniform strategy result in smooth score distribution, and thus better differentiation of ID and OOD data. In this way, we show that the diversity of outlier is a critical factor in designing sampling strategies. § METHOD: DIVERSE OUTLIER SAMPLING From our previous analysis, we show that training with outliers that are sufficiently diverse, the neural network can achieve consistent performance of OOD detection across the feature space. For a compact boundary between ID and OOD examples, the selected outliers should be also informative, i.e., close to ID examples <cit.>. Inspired by the insights, our key idea in this work is to select the most informative outliers from multiple distinct regions. In this way, the selected outlier could contain sufficient information for differentiating between ID and OOD examples while maintaining the advantage of diversity. To obtain distinct regions in the feature space of outliers, a natural solution is to utilize the semantic labels of the auxiliary dataset. However, it is prohibitively expensive to obtain annotations for such large-scale datasets, making it challenging to involve human knowledge in the process of division. To circumvent the issue, we present a novel sampling strategy termed Diverse Outlier Sampling (DOS), which partitions outliers into different clusters by measuring the distance to the prototype of each cluster. In the following, we proceed by introducing the details of our proposed algorithms. Clustering with normalized features To maintain the diversity of selected outliers, we employ a non-parametric clustering algorithm - K-means, which partitions the outliers from the auxiliary dataset into k clusters 𝐂 = {C_1, C_2, …, C_k} so as to minimize the within-cluster sum of squares. 
Formally, the objective of the vanilla K-means algorithm is to find: min_𝐂∑_i=1^k ∑_𝐱∈ C_i‖𝐳 - μ_i‖^2, where μ_i is the centroid of outliers from the cluster C_i. Nevertheless, adopting the vanilla K-means algorithm will introduce bias towards features with larger scales, i.e., examples with confident predictions. In other words, those features with large scales may have a greater impact on the clustering process, which degrades the performance of outlier clustering. To address this issue, we propose to normalize the features before the clustering, thereby mitigating the negative effect of the feature scale. We provide an ablation study in Section <ref> to validate the effect of the normalization in clustering. In particular, the new objective of the normalized K-means algorithm is: min_𝐂∑_i=1^k ∑_𝐱∈ C_i‖𝐳/‖𝐳‖ - μ_i‖^2 Now, we can partition outliers from the auxiliary dataset into k clusters with the normalized K-means algorithm. By uniformly sampling from these clusters, the diversity of the selected outliers can be easily upper bounded. In Section <ref>, we provide an ablation study to show the effect of the number of clustering centers. Active sampling in each cluster Although using diverse outliers can promote a balanced OOD detection performance across the feature space, the selected outliers might be too easy for the detection task and thus not benefit the differentiation of ID and OOD data. Therefore, it is important to select the informative outliers from each cluster. Following the principle of greedy sampling, we select the hard negative examples which are close to the decision boundary. Practically, we use the inverse absent category probability as the scoring function and select the outlier with the highest score in each cluster. For cluster C_i, the selected outlier is given by arg max_j [1.0 - p(K+1|x_j)] over x_j ∈ C_i. With the diverse and informative outliers, the model can shape a globally compact decision boundary between ID and OOD data, enhancing the OOD detection performance. Mini-batch scheme Previous works <cit.> typically sample the outliers from the candidate pool at the epoch level. However, the overwhelmingly large pool heavily slows down the clustering process. For efficient sampling, we design a mini-batch scheme by splitting the full candidate pool into small groups sequentially. In each iteration, we select outliers by the proposed sampling strategy to regularize the model. In Figure <ref>, we explicitly show the differences between the greedy strategy and our DOS method in diversity and uncertainty. As shown in Figure <ref>, the outliers selected by our sampling strategy indeed achieve much larger diversity than those of the greedy strategy. The results in Figure <ref> show that our strategy can obtain outliers with OOD scores comparable to those of the greedy strategy, which demonstrates the informativeness of the selected outliers. Training objective In each iteration, we use the mixed training set comprising labeled ID data and unlabeled outliers for training the neural network. Concretely, the classifier is trained to optimize θ by minimizing the following cross-entropy loss function: ℒ = 𝔼_(x,y)∼𝒟^train_in[-log p(y|x)] + 𝔼_x∼𝒟^sam_out[-log p(K+1|x)] Extension to other regularization terms. The details of DOS are presented in Algorithm <ref>. It is worth noting that our sampling strategy is a general method, orthogonal to different regularization terms, and can be easily incorporated into existing loss functions with an auxiliary OOD dataset, e.g., the energy loss. 
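To complement Algorithm <ref>, a single DOS sampling step might look as follows. This is an illustrative sketch under our own assumptions rather than the released implementation: it presumes that penultimate-layer features and the absent-category probabilities p(K+1|x) for the current candidate batch have already been computed, and it uses scikit-learn's K-means in place of whatever clustering routine the training loop actually calls.

```python
import numpy as np
from sklearn.cluster import KMeans

def dos_sample(features: np.ndarray, p_absent: np.ndarray, n_clusters: int, seed: int = 0) -> np.ndarray:
    """Select one informative outlier per cluster from a candidate batch.

    features : (M, d) penultimate-layer features of the M candidate outliers.
    p_absent : (M,) predicted probability of the absent (K+1)-th category.
    Returns the indices of the selected outliers (at most n_clusters of them).
    """
    # 1) L2-normalize features so that clustering is not dominated by feature scale.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True).clip(min=1e-12)

    # 2) Cluster the candidates into n_clusters groups (diversity).
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(normed)

    # 3) In each cluster, pick the candidate with the highest score 1 - p(K+1 | x),
    #    i.e., the outlier the model is least sure is "absent" (the hard negative).
    scores = 1.0 - p_absent
    selected = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        selected.append(members[np.argmax(scores[members])])
    return np.asarray(selected)

# Toy usage with random candidates:
rng = np.random.default_rng(0)
feats = rng.normal(size=(256, 64))
p_abs = rng.uniform(size=256)
idx = dos_sample(feats, p_abs, n_clusters=8)
print(idx.shape)  # (8,)
```

In the training loop, the returned indices would determine which outliers enter the current mini-batch alongside the labeled ID data.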
The corresponding score function can be adopted for better OOD detection performance. (see Section <ref> for loss function ablation) § EXPERIMENTS In this section, we assess the efficacy of DOS in comprehensive benchmarks. To gain further insights into the proposed sampling strategy, ablation studies are conducted for exploration and analysis. Code is available at the supplementary material. §.§ Setup Datasets. For the common benchmark, we use more challenging CIFAR-100 <cit.> as ID dataset. A down-sampled version of ImageNet (ImageNet-RC) <cit.> is utilized as auxiliary OOD training dataset. Additionally, we use the 300K random Tiny Images subset (TI-300K)[https://github.com/hendrycks/outlier-exposure] as alternative OOD training dataset, due to the unavailability of the original 80 Million Tiny Images[The original dataset contains offensive contents and is permanently downgraded.] in previous work <cit.>. The methods are evaluated on six OOD test datasets: SVHN <cit.>, cropped/resized LSUN (LSUN-C/R) <cit.>, Textures <cit.>, Places365 <cit.>, and iSUN <cit.>. To build the large-scale OOD detection benchmark, we split the ImageNet-1K <cit.> into ImageNet-10 <cit.> and ImageNet-990. Specifically, the ImageNet-10 is ID training dataset, and ImageNet-990 serves as the auxiliary OOD training data. The OOD test datasets comprise (subsets of) iNaturalist <cit.>, SUN <cit.>, Places <cit.>, and Textures <cit.>. Training details. For main results, the size of sampled OOD training samples is the same as ID training dataset, which is a common setting in prior work <cit.>. The DenseNet101/121 is adopted as the backbone for common/large-scale benchmark. The model is trained for 100 epochs using SGD with a momentum of 0.9, a weight decay of 0.0001, and a batch size of 64 for both ID and OOD training data. The initial learning rate is set as 0.1 and decay by a factor of 10 at 75 and 90 epochs. The above settings are the same for all methods trained with auxiliary outliers. We keep default size of clustering center the same as batch size without tuning for diverse sampling.All the experiments are conducted on NVIDIA V100 and all methods are implemented with default parameters using PyTorch. Comparison methods. According to dependency on auxiliary OOD training dataset, we divide the comparison methods into (1) Post-hoc methods: MSP <cit.>, ODIN <cit.>, Maha <cit.>, Energy_score <cit.>, GradNorm <cit.>, ReACT <cit.>, and DICE <cit.>,and (2) Outlier exposure methods: SOFL <cit.>, OE <cit.>, ACET <cit.>, CCU <cit.>, ROWL <cit.>, Energy_loss <cit.>, NTOM <cit.>, Share <cit.>, and POEM <cit.>. Evaluation metrics. We evaluate the performance of OOD detection by measuring the following metrics: (1) the false positive rate (FPR95) of OOD examples when the true positive rate of in-distribution examples is 95%; (2) the area under the receiver operating characteristic curve (AUROC). §.§ Results §.§.§ Evaluation on Common Benchmark On the CIFAR-100 with INRC benchmark, our method outperforms existing competitive methods, establishing state-of-the-art performance. In Table <ref>, we show the OOD detection performance for each OOD test dataset and the average over the six datasets. It is obvious that the outlier exposure methods normally perform better than those post-hoc methods. Despite the overwhelming INRC dataset resulting in saturated performance of several OOD test datasets, our approach further reduces the average FPR95 by 3.7%, compared to the competitive energy-regularized method POEM. 
Trained with the same absent category loss, the proposed sampling strategy DOS shows superiority over random (Share) and greedy (NTOM) strategies, with a 4.4% and 4.8% improvement on the average FPR95 respectively. At the same time, we maintain comparable classification accuracy on the ID data. To verify the performance of sampling strategy across different auxiliary OOD training datasets, we treat TI-300K as alternative auxiliary outliers. Unexpectedly, the limited auxiliary OOD training dataset severely deteriorates the performance of outlier exposure methods. For example, the FPR95 of POEM degrades from 6.9% to 55.7%. However, our method shows consistent superiority over other methods. Specifically, the average FPR95 is reduced from 50.15% to 24.36% –- a 25.79% improvement over the method NTOM using greedy sampling strategy. Due to the space constraint, the average FPR95 standard deviation of our methods is 1.36, 0.15 for the CIFAR100-TI300K and CIFAR100-INRC respectively. §.§.§ Evaluation on Large-Scale Benchmark We further evaluate our method on a large-scale OOD detection benchmark, which is more relevant to real-world applications. In Table <ref>, we present the OOD detection results for each OOD test dataset and the average over the four datasets. We can observe that outlier exposure methods still demonstrate better differentiation between ID and OOD data than the post-hoc methods, and our method achieves the best performance, surpassing the competitive OE by 3.07% in average FPR95. §.§ Ablation Studies To provide a comprehensive understanding of the proposed method, we conduct a set of analysis in this section. Considering the efficiency, we select common benchmark CIFAR-100 with TI-300K, as ablation study object. The impact of different representations. In this section, we investigate the effect of different clustering representations, including raw latent feature, probability output, and latent feature after dimensionality reduction. As shown in Table <ref>, the normalized feature achieves the best OOD detection performance. Compared with the output space, more semantic information in latent space results in better clustering. To further isolate the effect of predicted confidence, we normalize the raw latent embedding. The number of clustering centers. In this section, we emphasize the importance of diversity by adjusting the number of clustering centers. By default, the candidate OOD data pool is clustered into K centers in each iteration, which is equal to batch size of ID samples. As shown in the Figure <ref> & <ref>, the OOD detection performance deteriorates as the number of clustering centers decreases. It is noting that the number of clustering centers is not a hyper-parameter. Transfer sampling strategy to different loss function. To investigate the performance of proposed sampling strategy under different regularization, we replace the original absent category loss with common energy loss. As shown in Table <ref>, we find that the proposed sampling strategy is consistently effective, establishing state-of-the-art performance over other sampling strategies. § RELATED WORK OOD detection is critical for deployment of model in the open world. 
A popular line of research aims to design effective scoring functions for OOD detection, such as OpenMax score <cit.>, maximum softmax probability <cit.>, ODIN score <cit.>, Mahalanobis distance-based score <cit.>, cosine similarity score <cit.>, Energy-based score <cit.>, GradNorm score <cit.>, and non-parametric KNN-based score <cit.>. However, the model can be overconfident to the unknown inputs <cit.>. In order to investigate the fundamental cause of overconfidence, ReAct <cit.> proposes to rectify the extremely high activation. BATS <cit.> further rectifies the feature into typical sets to achieve reliable uncertainty estimation. DICE <cit.> exploits sparsification to eliminate the noisy neurons. LogitNorm <cit.> finds that the increasing norm of the logit leads to overconfident output, thereby a constant norm is enforced on the logit vector. Another direction of works address the OOD detection problem by training-time regularization with auxiliary OOD training dataset <cit.>. For example, models are encouraged to give predictions with uniform distribution <cit.> or higher energies <cit.> for outliers. To efficiently utilize the auxiliary OOD training dataset, recent works <cit.> design greedy sampling strategies that select hard negative examples for stringent decision boundary. POEM <cit.> achieves a better exploration-exploitation trade-off by maintaining a posterior distribution over models. In this work, we propose a straightforward and novel sampling strategy named DOS (Diverse Outlier Sampling), which first clusters the outliers, and then selects the most informative outlier from each cluster, resulting in a globally compact decision boundary. § CONCLUSION AND DISCUSSION In this paper, we propose Diverse Outlier Sampling (DOS), a straightforward and novel sampling strategy. Based on the normalized feature clustering, we select the most informative outlier from each cluster, thereby resulting in a globally compact decision boundary between ID and OOD data. We conduct extensive experiments on common and large-scale OOD detection benchmarks, and the results show that our method establishes state-of-the-art performance for OOD detection with a limited auxiliary dataset. This method can be easily adopted in practical settings. We hope that our insights inspire future research to further explore sampling strategy design for OOD detection. One limitation of our method is the training efficiency, due to the clustering process. We can adopt early-stopping strategy to alleviate this issue. plainnat
http://arxiv.org/abs/2306.11468v1
20230620114507
Empirical prior distributions for Bayesian meta-analyses of binary and time to event outcomes
[ "František Bartoš", "Willem M. Otte", "Quentin F. Gronau", "Bram Timmers", "Alexander Ly", "Eric-Jan Wagenmakers" ]
stat.ME
[ "stat.ME", "62F15", "G.3" ]
Bayesian model-averaged meta-analysis allows quantification of evidence for both treatment effectiveness μ and across-study heterogeneity τ. We use the Cochrane Database of Systematic Reviews to develop discipline-wide empirical prior distributions for μ and τ for meta-analyses of binary and time-to-event clinical trial outcomes. First, we use 50% of the database to estimate parameters of different required parametric families. Second, we use the remaining 50% of the database to select the best-performing parametric families and explore essential assumptions about the presence or absence of the treatment effectiveness and across-study heterogeneity in real data. We find that most meta-analyses of binary outcomes are more consistent with the absence of the meta-analytic effect or heterogeneity while meta-analyses of time-to-event outcomes are more consistent with the presence of the meta-analytic effect or heterogeneity. Finally, we use the complete database - with close to half a million trial outcomes - to propose specific empirical prior distributions, both for the field in general and for specific medical subdisciplines. An example from acute respiratory infections demonstrates how the proposed prior distributions can be used to conduct a Bayesian model-averaged meta-analysis in the open-source software and JASP. § INTRODUCTION Clinical trials are essential for testing whether novel therapeutic treatments and interventions are indeed beneficial and not harmful.<cit.> Before novel therapeutic interventions can be implemented, multiple related clinical trials are required to accumulate evidence for the presence of a beneficial treatment effect. This process is usually achieved by meta-analytical techniques that allow researchers to pool estimates from individual clinical trials and obtain an aggregated estimate of the treatment effectiveness and its uncertainty.<cit.> If the overall uncertainty is too large, additional data collection might be required to obtain a more precise estimate. Aggregating data from multiple clinical trials poses a non-trivial analysis problem as the selection of a meta-analysis model affects the treatment effect point estimate and uncertainty jointly.<cit.> Uncertainty may originate from heterogeneity within and between trial data. For example, a fixed-effects meta-analysis model for the joint analysis of multiple clinical trials assumes no between-trial uncertainty, whereas a random-effects model takes both within and between-trial heterogeneity into account.<cit.> In the case of only a few clinical trials, it is difficult to establish whether a fixed-effects or random-effects model is most appropriate for the data at hand. Furthermore, the traditional approaches struggle with obtaining a reliable between-trial heterogeneity estimate.<cit.> The required choice between a fixed- or a random-effects meta-analysis model is of limited interest to the researcher pooling multiple clinical trials, as the concern is with obtaining the pooled effect estimate rather than the most appropriate model. 
In a previous article,<cit.> we introduced Bayesian model-averaged (BMA) meta-analysis<cit.> as a coherent way of combining meta-analytic inference across fixed- and random-effect models, and developed empirical prior distributions for the treatment effectiveness δ and across-study heterogeneity τ parameters for Cohen's d measured continuous outcomes. The BMA meta-analysis allows researchers to combine inferences based on a series of competing models by taking each of them into account according to their prior predictive performance and draw inferences that incorporate uncertainty about the data-generating process. Furthermore, the empirical prior distributions improve parameter estimates under small sample sizes (also see <cit.>) and specify Bayes factors tests for the presence vs. absence of the treatment effectiveness and between-study heterogeneity.<cit.> In this article, we extend the previous continuous-outcome-only work by proposing empirical prior distributions for both the treatment effectiveness and across-study heterogeneity for binary and time-to-event outcomes based on medical data obtained from the Cochrane Database of Systematic Reviews (CDSR). We use the binomial-normal model for log odds ratios and a normal-normal model for log risk ratios, risk differences, and log hazard ratios[This is also an extension to Pullenayegum<cit.> and Turner and colleagues<cit.> who developed prior distributions for the across-study heterogeneity τ based on an earlier version of the database.] Furthermore, we summarize information about the evidence in favor of the effect and across-study heterogeneity in the CDSR, implement the empirical prior distributions in the open-source software <cit.> and JASP,<cit.> and illustrate the methodology on an acute respiratory infections example. § META-ANALYSIS OF BINARY AND TIME TO EVENT OUTCOMES §.§ Effect size measures We examine meta-analytic models of binary and time-to-event outcomes (see Bartoš and colleagues<cit.> for treatment of continuous outcomes). While both outcome types can be, in essence, addressed in the same way as continuous data, there are additional specificities when dealing with binary outcomes. By binary outcomes, we refer to studies whose endpoint of interest is either a presence or absence of an event (e.g., death or recurrence). Consequently, results of individual trials can be summarized by a 2 × 2 table such as Table <ref>. The rows in Table <ref> correspond to the study groups (e.g., treatment vs control arm), the first column denotes the number of observed events, the second row denotes the number of event-free observations, and the third column denotes the number of observation in each group. There are multiple ways to quantify the differences between the two groups (i.e., effect sizes), the most common being odds ratios (OR), risk ratios (RR), and risk differences (RD). Each of the measures has its merits and the selection of the most appropriate measure needs to be based on clinical considerations. The most popular choices are the relative effect size measures (i.e., OR and RR) which are less sensitive to the baseline event rate; however, it has been argued that risk differences are better suited for convening the clinical impact.<cit.> OR and RR are ratios (as suggested by the name) – they are asymmetric and can attain positive values only. Therefore, log OR and log RR are usually used which leads to symmetric and unbounded response variables that can be modeled via a normal likelihood. 
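To make these measures concrete, the sketch below computes log OR, log RR, and RD together with their usual large-sample standard errors from a single 2 × 2 table (a and b denote events and non-events in the treatment group, c and d in the control group). It illustrates the textbook formulas rather than our analysis pipeline, and it deliberately applies no continuity correction, so it fails for zero cells exactly as discussed next.

```python
import math

def binary_effect_sizes(a: int, b: int, c: int, d: int) -> dict:
    """Large-sample effect sizes and standard errors from a 2x2 table.

    a, b: events / non-events in the treatment group (n1 = a + b)
    c, d: events / non-events in the control group   (n2 = c + d)
    """
    n1, n2 = a + b, c + d

    log_or = math.log((a * d) / (b * c))
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

    log_rr = math.log((a / n1) / (c / n2))
    se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)

    rd = a / n1 - c / n2
    se_rd = math.sqrt(a * b / n1 ** 3 + c * d / n2 ** 3)

    return {"logOR": (log_or, se_log_or),
            "logRR": (log_rr, se_log_rr),
            "RD": (rd, se_rd)}

# Example: 10/100 events in the treatment arm vs. 20/100 in the control arm.
print(binary_effect_sizes(a=10, b=90, c=20, d=80))
```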
Table <ref> summarizes log OR, log RR, and RD and their standard errors. Table <ref> also highlights an additional issue with the usage of log OR and log RR – both are undefined when the number of events in either of the groups is zero. This happens suprisingly often; 22.7% of binary outcomes meta-analyses included in CDSR contain at least one cell with zero events. One way of dealing with the zero-cell issue is continuity corrections<cit.> which add a small positive number to cells in Table <ref> or the Mantel-Haenszel method<cit.> or binomial-normal models for OR. For time-to-event outcomes, it is natural to use the log hazard ratios (log HR), which take into account the number of events, the timing of events, and the time until the last follow-up for each trial participant without an event (i.e., right censoring).<cit.> §.§ Normal-normal model Subsequently, the log OR, log RR, RD, or log HR, y_i, and their standard errors, se_i, can be modeled using a normal-normal meta-analytic model, y_i ∼Normal(γ_i, se_i), γ_i ∼Normal(μ, τ), where Normal(x, y)” denotes a normal distribution with mean x and standard deviation y. The model parameters, μ and τ correspond to the mean effect size and between study standard deviation (heterogeneity) and γ_i correspond to normally distributed true study effects. If we assume the absence of heterogeneity, i.e., a fixed effect model, τ is set to zero, and all γ_i are equal to μ. §.§ Binomial-normal model The continuity corrections with a normal-normal meta-analytic model can lead to bias, especially in unbalanced designs.<cit.> Another, more elegant, solution in the case of log OR is a binomial-normal logistic model that naturally deals with zero events,<cit.> a_i ∼Binomial(π_1,i, n_1,i), c_i ∼Binomial(π_2,i, n_2,i), logit(π_1,i) = β_i + γ_i/2, logit(π_2,i) = β_i - γ_i/2, γ_i ∼Normal(μ, τ), where Binomial(π, n) denotes a binomial distribution with the probability of an event π and the number of observations n. The parameter β_i correspond to study specific base-rate probabilities (on logistic scale), and parameters μ, τ, and γ_i have the same interpretation as in Equation <ref>. § BAYESIAN MODEL-AVERAGED META-ANALYSIS When considering the normal-normal or binomial-normal meta-analytic models, we need to specify prior distributions for the μ and τ parameters defining the competing hypotheses ℋ_· (and β_i in the case of the binomial-normal). Different prior distribution specifications/assumptions about either the presence or absence of the μ and τ parameters lead to four qualitatively different models; * the fixed-effect null hypothesis ℋ_0^f : μ = 0 , τ = 0, * the fixed-effect alternative hypothesis ℋ_1^f : μ∼ g(·) , τ = 0, * the random-effects null hypothesis ℋ_0^r : μ = 0, τ∼ h(·), * the random-effects alternative hypothesis ℋ_1^r : μ∼ g(·) , τ∼ h(·), where g(·) denotes a prior distribution on the mean effect size parameter assuming the presence of an effect, and h(·) denotes a prior distribution on the heterogeneity parameter assuming the presence of heterogeneity. The goals of Bayesian model-averaged meta-analysis (BMA) are twofold: 1) to evaluate the evidence in favor/against the specified models and hypotheses; and 2) to combine parameter estimates from the specified models. BMA provides a coherent way of combining results from multiple competing models (see Gronau and colleagues<cit.> and Bartoš and colleagues<cit.> for detailed treatment). 
This is in a stark difference to the classical inference that usually proceeds by selecting one of the models (2 or 4) on either theoretical or quantitative grounds (by the very so often under-powered test for residual heterogeneity) and completely commits to the single selected model. In short, in BMA, evidence in favor of the effect is quantified via an inclusion Bayes factor which measures the change from prior to posterior inclusion odds for the effect, BF_10_Inclusion Bayes factor for effect = p(ℋ_1^f |data) + p(ℋ_1^r |data)/p(ℋ_0^f |data) + p(ℋ_0^r |data)_Posterior inclusion odds for effect / p(ℋ_1^f) + p(ℋ_1^r)/p(ℋ_0^f) + p(ℋ_0^r)_Prior inclusion odds for effect, where p(ℋ_·) denotes prior model probabilities and p(ℋ_·|data) denotes posterior model probabilities. Similarly, the inclusion Bayes factor for heterogeneity is given by the change from prior to posterior inclusion odds for the heterogeneity, BF_rf_Inclusion Bayes factor for heterogeneity = p(ℋ_0^r |data) + p(ℋ_1^r |data)/p(ℋ_0^f |data) + p(ℋ_1^f |data)_Posterior inclusion odds for heterogeneity / p(ℋ_0^r) + p(ℋ_1^r)/p(ℋ_0^f) + p(ℋ_1^f)_Prior inclusion odds for heterogeneity. The posterior BMA distribution for the effect, assuming it is present, is defined as a mixture of posterior distributions under the fixed-effect alternative hypothesis model weighted by its posterior probability and the random-effect alternative hypothesis model weighted by its posterior probability, p(μ|data, ℋ_1) ∝ p(μ|ℋ_1^f, data) × p(ℋ_1^f |data) + p(μ|ℋ_1^r, data) × p(ℋ_1^r |data). Similarly, the posterior BMA distribution for the heterogeneity, assuming it is present, is defined as a mixture of posterior distributions under the random-effect null hypothesis model weighted by its posterior probability and the random-effect alternative hypothesis model weighted by its posterior probability, p(τ|data, ℋ^r) ∝ p(τ|ℋ_0^r, data) × p(ℋ_0^r |data) + p(τ|ℋ_1^r, data) × p(ℋ_1^r |data). § PRIOR DISTRIBUTIONS We followed the same procedure as in the previous study where we developed prior distributions for continuous outcomes<cit.>; we downloaded data from the Cochrane Database of Systematic Reviews (CDSR), we split the data equally into a training and testing data set (50/50), we used the training set to obtain prior distributions and the test set to select the best performing prior distributions. Finally. we combined the data to propose prior distributions based on the complete data set. One exception was that in the present work we no longer consider the Cauchy prior distribution for the meta-analytic mean parameter and the Uniform prior distribution for the heterogeneity parameter as they were dominated by the remaining prior distributions. §.§ Data set As previously, we used trial data obtained from the CDSR which is the leading journal and database for systematic reviews in health care. We identified all systematic reviews in the CDSR through PubMed with the NCBI's EUtils API (query: “Cochrane Database Syst Rev”[journal] AND (“2000/01/01”[PDAT]: “2021/01/31”[PDAT]). We downloaded the XML meta‐analysis table file (rm5‐format) associated with the review’s latest version. We obtained a total of 28,579 reviews, containing 163,249 comparisons, with a grand total of 788,883 estimates. We removed estimates without the control or the treatment group and split the data according to the outcome type. This left 97,500 comparisons containing 484,128 binary trial outcomes and 1,515 comparisons with 8,149 time-to-event outcomes. 
We randomly split the binary and time-to-event data into a training and testing data set. We used the highest level of grouping – the review level – for the 50/50 training and testing data set split to prevent information leakage and overfitting. We used the training data set to estimate the prior distributions and the testing data set to assess their predictive performance. §.§ Estimating prior distributions on the training data set Figure <ref> visualizes the data processing steps applied to the training data set. We excluded comparisons with fewer than 10 estimates to ensure that the training set yields reliable estimates of the μ and τ parameters. Then, we used the package<cit.> to re-estimate all comparisons using a frequentist random-effects meta-analytic model. We used the generalized linear mixed-effects model with fixed study effects for log OR implemented in the function,[I.e., model 4 described in Jackson and colleagues<cit.> that closely resembles the Bayesian binomial-normal model in Equation <ref>.] and the restricted maximum likelihood random effects model for log RR, RD, and log HR. The last row of Figure <ref> summarizes the number of converged comparisons (and the corresponding number of estimates). We used the frequentist meta-analytic estimate obtained on the training data set to estimate prior parameter distributions for μ and τ parameters for the Bayesian meta-analytic models (Equations <ref> and <ref>; we used independent uniform distributions for the logit(β_i) parameter as this is inconsequential for hypotheses regarding the mean effect and heterogeneity—as the prior distribution is common across the specified models).[As previously, we assumed that τ estimates lower than 0.01 are representative of ℋ^f. Therefore these estimates were not used to determine candidate prior distributions for τ.] The first row of Figure <ref> visualizes the histogram of mean meta-analytic estimates μ of log OR and heterogeneity τ, and the second row of Figure <ref> visualizes the histogram of mean meta-analytic estimates μ of log HR and heterogeneity τ. See Figure <ref> in Appendix <ref> for the corresponding visualization of log RR and RD. We considered two distributional forms for the mean effect size parameter μ: a Normal distribution centered at zero with free standard deviation parameter, and a Student's t-distribution centered at zero with free scale and degrees of freedom parameters. Moreover, we considered three distributions for the heterogeneity parameter τ: a Half-Normal prior distribution centered at zero with free standard deviation parameter, an Inverse-Gamma distribution with free shape and scale parameters, and a Gamma distribution with free scale and shape parameters. We used the package<cit.> to estimate the free parameters via maximum likelihood. The estimated free parameters of the prior distributions of log OR and log HR are summarized in Table <ref>. The first prior distribution for μ and τ of log OR, in gray-colored text, corresponds to the transformation of prior distributions obtained for Cohen's d measured continuous effect sizes from Bartoš and colleagues.<cit.> See Table <ref> in Appendix <ref> for a corresponding summary of prior distributions of log RR and RD. We observe a notable difference in the widths of the prior distributions estimated on the different effect size measures obtained from the training data set. 
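As an illustration of this estimation step, the sketch below fits the candidate parametric families by maximum likelihood to a simulated vector of training-set estimates. It uses generic SciPy fitting as a stand-in for the package we actually used, and the input vectors are simulated, so the printed numbers are not the ones reported in Table <ref>.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated stand-ins for the frequentist estimates obtained on the training set:
mu_hat = 0.4 * rng.standard_t(df=5, size=2000)       # mean effect size estimates
tau_hat = 0.3 * np.abs(rng.standard_normal(2000))    # heterogeneity estimates
tau_hat = tau_hat[tau_hat > 0.01]                    # estimates below 0.01 treated as fixed effect

# Candidate families for the mean effect size mu (location fixed at zero):
sd_normal = stats.norm.fit(mu_hat, floc=0)[1]                 # Normal(0, sd)
df_t, _, scale_t = stats.t.fit(mu_hat, floc=0)                # Student-t(0, scale, df)

# Candidate families for the heterogeneity tau:
sd_halfnorm = stats.halfnorm.fit(tau_hat, floc=0)[1]          # Half-Normal(0, sd)
shape_g, _, scale_g = stats.gamma.fit(tau_hat, floc=0)        # Gamma(shape, scale)
shape_ig, _, scale_ig = stats.invgamma.fit(tau_hat, floc=0)   # Inverse-Gamma(shape, scale)

print(f"Normal sd={sd_normal:.3f}, Student-t df={df_t:.1f} scale={scale_t:.3f}")
print(f"Half-Normal sd={sd_halfnorm:.3f}, Gamma({shape_g:.2f}, {scale_g:.3f}), "
      f"Inv-Gamma({shape_ig:.2f}, {scale_ig:.3f})")
```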
Log OR clearly has the widest prior distributions, log RR prior distributions span approximately half of the log OR width, while the prior distributions of log HR and RD are much more narrowly concentrated around zero. This is, of course, not surprising, especially in the case of RD, as risk differences are necessarily bounded to the [-1, 1] range. §.§ Assessing prior distributions on the test data set Figure <ref> visualizes the data processing steps applied to the test data set. In contrast to the training data set, we included all comparisons with at least three estimates. We specified Bayesian binomial-normal meta-analytic models for log OR and Bayesian normal-normal meta-analytic models for log RR, RD, and log HR. For each effect size, we created models corresponding to all combinations of the considered prior distributions for effect size and heterogeneity (including the null hypothesis models of no effect and no heterogeneity). For instance, we estimated 4 × 5 meta-analytic models for log OR (a μ = 0 prior distribution assuming no effect + three informed distributions for the μ parameter) × (a τ = 0 prior distribution assuming no heterogeneity + four informed prior distributions for the τ parameter), as depicted in the first part of Table <ref>. We implemented the binomial-normal model in the function and we estimated the normal-normal model via the function in the package.<cit.> The package provides implementations for a wide range of highly modifiable Bayesian meta-analytic models estimated via MCMC using ,<cit.> computes marginal likelihoods via bridge sampling using the package,<cit.> and combines the models via Bayesian model-averaging tools implemented in the package.<cit.> Note that only three Bayesian binomial-normal meta-analytic models of log OR and ten normal-normal meta-analytic models of log HR did not converge. §.§.§ Predictive performance of the competing prior distributions First, we investigated the predictive performance of the competing prior distributions. We compared the specified prior distributions for each parameter separately, averaging over the specified prior distributions for the remaining parameter. For instance, when considering log OR and comparing the predictive performance of the three competing prior distributions for the μ parameter, for each specified prior distribution we averaged its performance across all four prior distributions for the τ parameter. Table <ref> summarizes the predictive performance of the competing prior distributions for log OR and log HR in terms of ranks and the average posterior model probability (i.e., the posterior model probabilities averaged across comparisons), and Figure <ref> visualizes the posterior model probabilities for each prior configuration across the comparisons for log OR and log HR. See Table <ref> and Figure <ref> in Appendix <ref> for the corresponding summary of log RR and RD. Across all effect size measures, we find that the more informed Student's t-distribution for the mean effect size parameter μ outranks the less informed normal prior distribution (most of the best ranks in Tables <ref> and <ref>). The dominance of the Student's t-distribution is most notable for binary outcomes, especially RD, with a smaller difference in performance for the time-to-event outcomes. The same pattern is also visible in the visualization of the posterior model probabilities, where the Student's t-distribution is painted in a slightly darker color, signaling higher posterior model probabilities.
In the case of the heterogeneity parameter τ, we find that the results are more mixed. The more informed Inverse-Gamma and Gamma distributions attain more of the best but also more of the worst ranks, with essentially indistinguishable average posterior model probabilities. The similar performance of the Inverse-Gamma and Gamma distributions is not unexpected given their similar shapes, with the Gamma distribution slightly less peaked at small values of the heterogeneity parameter τ (Figures <ref> and <ref>). Furthermore, we find that the Half-Normal distribution outperforms the Gamma and Inverse-Gamma distributions for RD, most likely due to the shorter tail in the range-restricted effect size measure. Finally, we find that the informed prior distributions for log OR transformed from Cohen's d measured continuous outcomes slightly outperform the natively estimated informed prior distributions. This might be due to inherent differences between binary- and continuous-outcome studies as well as the larger training data set presented in the current study. §.§.§ Predictive performance of model types Second, we investigated the predictive performance of the competing model types. Since all models but the fixed effect null hypothesis model ℋ_0^f could use different (combinations of) prior distributions, we averaged the predictive performance across all employed prior distribution specifications. For instance, when considering the random effects alternative hypothesis ℋ_1^r for log OR, we averaged the predictive performance across the twelve possible prior distribution specifications (three prior distributions for the μ parameter and four prior distributions for the τ parameter). Table <ref> summarizes the predictive performance of the competing model types for log OR and log HR in terms of ranks and the average posterior model probability, and Figure <ref> visualizes the posterior model probabilities for each model type across the comparisons. Across all effect size measures, we find that the random effects alternative hypothesis ℋ_1^r attains the highest posterior model probability. However, the fixed effect null hypothesis model ℋ_0^f dominates in terms of the best rank in all effect size measures derived from binary outcomes. The generally worst performing model is the random effects null hypothesis ℋ_0^r, which receives the lowest posterior model probability in all effect size measures but log HR. See Table <ref> and Figure <ref> in Appendix <ref> for the corresponding summary of log RR and RD. §.§.§ Predictive performance of hypotheses Third, we investigated the predictive performance of the competing hypotheses: the presence vs absence of the effect (ℋ_1 vs ℋ_0) and the random vs fixed effects models (ℋ^r vs ℋ^f). As previously, the hypotheses were composed of multiple models and prior distributions; therefore, we averaged across the possible combinations of prior distributions. Figure <ref> visualizes the distribution of inclusion Bayes factors for the effect and heterogeneity for log OR and log HR. We find the usual skewed distribution of Bayes factors, with the skew favoring evidence for the presence of the effect and heterogeneity (as it is more difficult to obtain evidence for absence).
Nonetheless, for all effect size measures of binary outcomes, we find that only a minority of comparisons yields evidence in favor of the effect (BF_10 > 1: 47.0% for log OR, 45.1% for log RR, and 40.1% for RD) or heterogeneity (BF_rf > 1: 39.2% for log OR, 34.6% for log RR, and 36.3% for RD). For log HR, the reverse is true, as 62.8% of comparisons yield evidence in favor of the effect and 55.9% of comparisons yield evidence for heterogeneity. See Figure <ref> in Appendix <ref> for the corresponding visualization for log RR and RD. §.§ Estimating prior distributions on the complete data Finally, we used the complete data set to estimate empirical prior distributions for the field in general and for the specific sub-disciplines. We applied the same data processing steps as on the training data set, resulting in 12,079 comparisons of binary outcomes (255,048 estimates) and 234 comparisons of time-to-event outcomes (3,831 estimates). After estimating the frequentist meta-analytic models, we ended up with 11,964 log OR comparisons (253,054 estimates), 11,862 log RR comparisons (250,331 estimates), 12,042 RD comparisons (254,326 estimates), and 225 log HR comparisons (3,707 estimates), which we used for constructing the data-driven general and subfield-specific prior distributions. For comparisons of binary outcomes, we estimate the general and subfield-specific empirical prior distributions jointly using Bayesian hierarchical estimation with weakly informative priors on the hyperparameters. Bayesian hierarchical modeling allows us to shrink the estimated subfield parameter values towards the grand mean, which is especially useful for subfields with relatively little information and extreme values.<cit.> We implement the hierarchical models using the package<cit.> that interfaces with the Stan probabilistic modeling language.<cit.> We specify the models such that all field-specific parameters of the Student's t-distributions (i.e., σ_i and ν_i) and the Inverse-Gamma distributions (i.e., α_i and β_i) are shrunk via positive-only normal distributions. For the population-level parameters, we use positive-only Cauchy distributions with scale one for the scale of the Student's t-distribution (σ∼Cauchy_+(0, 1)), scale ten for the degrees of freedom of the Student's t-distribution (ν∼Cauchy_+(0, 10)), and scale one for the shape and scale of the Inverse-Gamma distribution (α, β∼Cauchy_+(0, 1)). Due to the insufficient number of time-to-event comparisons to estimate prior distributions for the subfields (i.e., 225 comparisons), we provide only a general empirical prior distribution for log HR estimated via maximum likelihood. We estimate the more informed, and generally dominant, Student's t-distribution for the mean effect size parameter μ of all effect size measures. For the heterogeneity parameter τ of log OR, log RR, and log HR, we estimate an Inverse-Gamma distribution (whose performance is generally tied with that of the Gamma distribution), and for RD we estimate the more dominant Half-Normal prior distribution. The resulting empirical prior distributions are summarized in Table <ref> for log OR, and in Table <ref> and Table <ref> in Appendix <ref> for log RR and RD. The hierarchically pooled empirical prior distributions based on the complete data sets are μ∼Student-t(0, 0.58, 4) and τ∼Inv-Gamma(1.77, 0.55) for log OR, μ∼Student-t(0, 0.32, 3) and τ∼Inv-Gamma(1.51, 0.23) for log RR, and μ∼Student-t(0, 0.03, 1) and τ∼Normal_+(0, 0.10) for RD.
The maximum likelihood empirical prior distributions on the complete data sets are μ∼Student-t(0, 0.13, 2) and τ∼Inv-Gamma(2.42, 0.30) for log HR. We see a notable heterogeneity across topics. For instance, the prior distribution for the meta-analytic mean μ in the “Oral Health” topic is as much as six times wider than the prior distribution for the “Heart” topic. These between-topic differences highlight the possibility of incorporating domain-specific information into the statistical inference. § EXAMPLE: ADVERSE EFFECTS OF HONEY IN TREATING ACUTE COUGH IN CHILDREN We illustrate the methodology with an example from the field of acute respiratory infections. Oduwole et al.<cit.> examined the effect of honey on treating acute cough in children. While the main analyses focused on the reduction in symptomatic relief of cough measured on a Likert scale, finding possible benefits of giving honey to children, we focus on the adverse events analysis (because it features log OR with zero cells). Specifically, we re-examine the comparison of honey versus no treatment on the presence of nervousness, insomnia, and hyperactivity. Oduwole et al.<cit.> found two eligible studies—with 5/35 and 2/40 events in the honey conditions, and 0/39 and 0/40 events in the no treatment conditions—and reported a meta-analytic effect size estimate of OR = 9.40, 95% CI [1.16, 76.20], z = 2.10, p = 0.04 for the effect of honey on the presence of nervousness, insomnia, and hyperactivity (“Analysis 3.5. Comparison 3 Adverse events, Outcome 5 Honey versus no treatment”, p. 73). We conducted a re-examination of the Oduwole et al. meta-analysis <cit.> using the binomial-normal model, which we implemented in the open-source statistical software package JASP (jasp-stats.org) <cit.>. For the same analysis in <cit.>, we utilized the package and have included the details in Appendix <ref>. To perform a BMA meta-analysis using JASP, we loaded the data and activated the “Meta-Analysis” module by clicking on the blue "+" button in the top right corner. Then, we selected "Meta-Analysis" from the ribbon at the top, followed by choosing "Bayesian Meta-Analysis (Binomial)" from the drop-down menu. In the left input panel, we moved the observed number of events and the number of participants in each group into the corresponding boxes and adjusted the prior distributions under the “Prior” tab to match the “Acute Respiratory Infections” subfield-specific prior distributions given in Table <ref> (δ∼𝒯(0, 0.48, 3) and τ∼Inv-Gamma(1.67, 0.45)). Figure <ref> shows the JASP graphical user interface. The left panel shows the settings for specifying the analysis and the right panel displays the analysis output. The JASP output panel displays the corresponding BMA meta-analysis results. The “Model Summary” table summarizes the model-averaged evidence for the presence vs. absence of the effect and heterogeneity. The inclusion Bayes factors indicate weak evidence for the presence of the effect, BF_10 = 2.64, and virtually no evidence for either the presence or absence of heterogeneity, BF_rf = 1.30. The “Conditional Estimates” table summarizes the conditional meta-analytic estimates (the effect size estimate assuming the presence of the effect and the heterogeneity estimate assuming the presence of heterogeneity). We find an effect size estimate of OR = 4.25, 95% CI [0.80, 18.20] and a heterogeneity estimate of τ_logOR = 0.73, 95% CI [0.10, 3.13].
We can notice two differences from the original frequentist output: 1) the effect size estimate is smaller and the credible interval is much narrower than previously reported, and 2) we are able to obtain a (very wide) estimate of the between-study heterogeneity. Both are a result of incorporating the existing information about the usual magnitude of effect sizes in a given field, which is especially relevant in cases with very few observations. The JASP meta-analytic module also offers a multitude of visualization options (such as a forest plot, prior and posterior distributions, etc.) and additional options and analyses, such as one-sided hypothesis tests and various publication bias adjustment methods.<cit.> § CONCLUDING COMMENTS In this article, we developed informed empirical prior distributions for the meta-analytic mean and heterogeneity parameters for meta-analyses of binary (log OR, log RR, and RD) and time-to-event (log HR) outcomes. We provided both general and topic-specific prior distributions based on almost 12,000 binary-outcome meta-analyses and general prior distributions based on 200 time-to-event meta-analyses extracted from the Cochrane Database of Systematic Reviews. Our results extend our previous work,<cit.> where we developed empirical prior distributions for continuous outcomes, and the work of Pullenayegum<cit.> and Turner and colleagues,<cit.> who developed prior distributions for the heterogeneity parameter of log OR based on an earlier version of the database. The newly developed prior distributions can be combined with Bayesian model-averaged meta-analysis to quantify the evidence in favor of or against the presence of the mean meta-analytic effect and heterogeneity. Moreover, Bayesian model-averaged meta-analysis does not force the analyst to base the entire inference on a single model. Instead, Bayesian model-averaged meta-analysis acknowledges the model uncertainty and combines the evidence and parameter estimates from the competing models according to their predictive performance.<cit.> Both the direct quantification of evidence and the accounting for model uncertainty are especially important for the small sample sizes typical of medical meta-analyses, as they allow the analyst to disentangle the absence of evidence from the evidence of absence and avoid overconfident conclusions.<cit.> Our results illustrate substantial uncertainty about the most appropriate meta-analytic model. No combination of the usual model types (the null vs the alternative hypothesis model and the fixed vs the random effects model) clearly dominated the other models across the examined meta-analyses. Moreover, the meta-analyses of binary outcomes were more in line with the null hypothesis and fixed effect models, while the opposite held true for meta-analyses of time-to-event outcomes. Although these results go against the common belief that the random-effects alternative model is the best-suited model for analysing data,<cit.> the large uncertainty and considerable support for all model types echo our previous findings from meta-analyses of continuous outcomes.<cit.> We implemented the Bayesian binomial-normal meta-analytic model for log OR within the Bayesian model-averaged meta-analytic framework of the package<cit.> (alongside the already existing normal-normal meta-analytic model).
We incorporated the framework and empirical prior distributions into the user-friendly graphical user interface of JASP.<cit.> Finally, we illustrated the software and applied the empirical prior distributions to an example from the field of acute respiratory infections. § ACKNOWLEDGMENTS This work was supported by The Netherlands Organisation for Scientific Research (NWO) through a Research Talent grant (to QFG; 406.16.528), a Vici grant (to EJW; 016.Vici.170.083), and an NWA Idea Generator grant (to WMO; NWA.1228.191.045). § FINANCIAL DISCLOSURE None reported. § DATA AVAILABILITY STATEMENT Data and analysis scripts are publicly available at: <https://osf.io/v9bj6/>. § CONFLICT OF INTEREST František Bartoš, Alexander Ly, and Eric-Jan Wagenmakers declare their involvement in the open-source software package JASP (<https://jasp-stats.org>), a non-commercial, publicly-funded effort to make Bayesian statistics accessible to a broader group of researchers and students. Willem Otte is co-founder of RCTAlert (<https://rctalert.com>), a commercial AI platform providing weekly clinical trial notifications. § R CODE FOR THE ADVERSE EFFECTS OF HONEY IN TREATING ACUTE COUGH IN CHILDREN EXAMPLE This Appendix shows how to conduct the example analysis with the statistical programming language .<cit.> First, we need to install the package,<cit.> (this command needs to be executed only if the package is not already installed): After the package has been installed, we load it into the session and specify the number of events and observations in both the experimental and control groups from both studies. In order to use the binomial-normal BMA meta-analysis, we use the function. We specify the number of events and observations in each group as the corresponding input (, , , ) and set the subfield-specific prior distributions using the and arguments according to the “Acute Respiratory Infections” row in Table <ref>: To obtain the inclusion Bayes factors and conditional posterior distribution summaries, we use the function with the argument, which produces output that corresponds to that given by JASP (up to MCMC error): § ANALYSIS OF LOG RR AND RD This Appendix contains tables and figures for log RR and RD.
http://arxiv.org/abs/2306.08958v1
20230615085124
Temporally-Extended Prompts Optimization for SAM in Interactive Medical Image Segmentation
[ "Chuyun Shen", "Wenhao Li", "Ya Zhang", "Xiangfeng Wang" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
Temporally-Extended Prompts Optimization for SAM in Interactive Medical Image Segmentation Chuyun Shen [email protected] School of Computer Science and Technology East China Normal University Shanghai 200062, China Wenhao Li [email protected] School of Data Science The Chinese University of Hong Kong, Shenzhen Shenzhen Institute of Artificial Intelligence and Robotics for Society Shenzhen 518172, China Ya Zhang [email protected] Cooperative Medianet Innovation Center Shanghai Jiao Tong University Shanghai, 200240, China Xiangfeng Wang [email protected] School of Computer Science and Technology East China Normal University Shanghai 200062, China July 31, 2023 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== The Segmentation Anything Model (SAM) has recently emerged as a foundation model for addressing image segmentation. Owing to the intrinsic complexity of medical images and the high annotation cost, the medical image segmentation (MIS) community has been encouraged to investigate SAM's zero-shot capabilities to facilitate automatic annotation. Inspired by the extraordinary accomplishments of interactive medical image segmentation (IMIS) paradigm, this paper focuses on assessing the potential of SAM's zero-shot capabilities within the IMIS paradigm to amplify its benefits in the MIS domain. Regrettably, we observe that SAM's vulnerability to prompt forms (e.g., points, bounding boxes) becomes notably pronounced in IMIS. This leads us to develop a framework that adaptively offers suitable prompt forms for human experts. We refer to the framework above as temporally-extended prompts optimization (TEPO) and model it as a Markov decision process, solvable through reinforcement learning. Numerical experiments on the standardized benchmark demonstrate that the learned TEPO agent can further enhance SAM's zero-shot capability in the MIS context. § INTRODUCTION The Segmentation Anything Model (SAM) <cit.> has recently been proposed as a foundational model for addressing image segmentation problems. SAM's effectiveness is principally evaluated in natural image domains, demonstrating a remarkable prompt-based, zero-shot generalization capability. Segmentation within medical images (MIS), on the other hand, presents complex challenges owing to their substantial deviation from natural images, encompassing multifaceted modalities, intricate anatomical structures, indeterminate and sophisticated object boundaries, and extensive object scales <cit.>. Predominant MIS methods principally employ domain-specific architectures and necessitate reliance upon massive, high-quality expert annotations <cit.>. In light of the considerable expenditure incurred by dense labeling, the community has embarked on exploring SAM's zero-shot generalization capabilities in MIS tasks, thereby fostering automated annotation of medical images <cit.>. 
Motivated by the remarkable achievements of interactive medical image segmentation (IMIS), this paper goes a step further and centers on investigating the potential of the zero-shot capabilities of SAM in the IMIS domain to magnify the advantages of SAM in the MIS domain. An extensive body of research demonstrates the significant performance enhancement attributable to the IMIS paradigm <cit.>. Specifically, IMIS overcomes the performance limitation inherent in end-to-end MIS approaches by reconceptualizing MIS as a multi-stage, human-in-the-loop task. At each iterative stage, medical professionals impart valuable feedback (e.g., designating critical points, demarcating boundaries, or drawing bounding boxes) to identify inaccuracies in the model output. Consequently, the model refines the segmentation results following the expert knowledge embedded in the human feedback. The congruity between the human feedback forms in IMIS and the prompt forms in SAM facilitates the seamless integration of SAM within the IMIS framework. Nevertheless, recent investigations reveal that, in contrast to natural image segmentation, the susceptibility of the SAM model to prompt forms (e.g., points or bounding boxes) is significantly heightened within MIS tasks, resulting in substantial discrepancies in zero-shot performance when various prompt forms are employed <cit.>. Regrettably, we find this issue is markedly exacerbated within the IMIS context. This phenomenon can be attributed to two primary factors. Firstly, the segmentation stages are interdependent; the previous prompt form selection directly impacts the ensuing segmentation, which, in turn, influences the choice of subsequent prompt forms. Secondly, human experts display preferences and stochasticity in their feedback, seldom contemplating the ramifications of the prompt forms on the performance and the intricate interconnections between antecedent and successive prompt forms. Consequently, this revelation impels us to recommend the most efficacious prompt forms for human feedback at each successive IMIS stage, a challenge we designate as temporally-extended prompts optimization. As a formidable instrument for addressing sequential decision-making, reinforcement learning (RL) <cit.> demonstrates remarkable competencies not only in domains such as chess, video games, and robotics control but also in training foundational models <cit.> and IMIS <cit.>. Given that temporally-extended prompt optimization encompasses both the foundational model and IMIS, we formulate this problem as a Markov decision process (MDP) and employ RL for its resolution. The framework above is then instantiated as the algorithm denoted by TEPO. During each stage, the TEPO agent determines which prompt form is most suitable for recommendation to the human expert, considering the current segmentation outcomes and historical prompts. The ultimate objective is to augment the performance of SAM at each stage relative to its preceding iteration, thereby maximizing its efficacy.
The contributions presented in this paper encompass three distinct aspects: 1) In an unprecedented discovery, we ascertain that sequential prompt forms constitute the crucial elements influencing the zero-shot performance of SAM in IMIS, subsequently proposing a pertinent temporally-extended prompts optimization problem; 2) By conceptualizing the temporally-extended prompts optimization as an MDP, we employ RL to optimize the sequential selection of prompt forms, thereby enhancing the zero-shot performance of SAM in IMIS; 3) The performance juxtaposition and ablation studies conducted on the standardized benchmark substantiate the efficacy of the TEPO agent in ameliorating SAM's zero-shot capability. § RELATED WORK AND PRELIMINARIES §.§ Interactive Medical Image Segmentation Before the remarkable advancements in automatic segmentation achieved through convolutional neural networks (CNNs), traditional interactive techniques were employed within IMIS <cit.>. Among these techniques, the RandomWalk method <cit.> generates a weight map with pixels as vertices and segments images based on user interaction. Approaches like GrabCut <cit.> and GraphCut <cit.> establish a connection between image segmentation and graph theory's maximum flow and minimum cut algorithms. Geos <cit.> introduces a geodesic distance measurement to ascertain pixel similarity. There has been a surge of interest in deep learning-based IMIS methods in recent years. <cit.> suggests employing CNNs for interactive image segmentation, whereas DeepCut <cit.> and ScribbleSup <cit.> utilize weak supervision in developing interactive segmentation techniques. DeepIGeoS <cit.> incorporates a geodesic distance metric to generate a hint map. The interactive segmentation process can be viewed as a sequential procedure, which makes it a natural fit for reinforcement learning (RL). Polygon-RNN <cit.> tackles this problem by segmenting targets as polygons and iteratively selecting polygon vertices through a recurrent neural network (RNN). Polygon-RNN+ <cit.> adopts a similar approach to Polygon-RNN but employs RL to learn vertex selection. SeedNet <cit.> takes a different approach by constructing an expert interaction generation RL model that can obtain simulated interaction data at each interaction stage. IteR-MRL <cit.> and BS-IRIS <cit.> conceptualize the dynamic interaction process as a Markov Decision Process (MDP) and apply multi-agent RL models for image segmentation purposes. MECCA <cit.>, based on IteR-MRL, establishes a confidence network, seeking to mitigate the pervasive “interactive misunderstanding” issue that plagues RL-based IMIS techniques and ensure the effective utilization of human feedback. Additionally, <cit.> integrates the SAM within the 3D Slicer software, thereby facilitating the process of designing, evaluating, and employing SAM in the context of IMIS. §.§ Segment Anything Model The Segment Anything Model (SAM) <cit.>, recently introduced by Meta, serves as a fundamental framework for tackling image segmentation challenges. Motivated by the robust performance of foundational models in NLP and CV domains, researchers endeavored to establish a unified model for complete image segmentation tasks. Nonetheless, segmentation data at the scale required for such a model was not readily available, diverging from the design intentions mentioned above. Consequently, <cit.> stratifies the process into three distinct phases: task, model, and data.
Refer to the primary publication <cit.> and a contemporary survey <cit.> for comprehensive explanations. Task. Drawing inspiration from foundational NLP and CV models, <cit.> introduces the promptable segmentation task to generate a valid segmentation mask in response to any given segmentation prompt. These prompts define the target object(s) to be segmented within an image and may include a location point, a bounding box, or a textual description of the object(s). The resulting mask must be plausible for at least one target object, even in instances where the prompt may be ambiguous or reference multiple objects. Model. The promptable segmentation task, paired with the objective of real-world applicability, imposes restrictions on the model architecture. <cit.> devises a streamlined yet efficacious model, known as SAM (Figure <ref>), which encompasses a powerful image encoder that computes image embeddings, a prompt encoder that embeds prompts, and a lightweight mask decoder that amalgamates the two information sources to predict segmentation masks. Data. SAM necessitates training on an extensive and diverse collection of masks to attain exceptional generalization capabilities on novel data distributions. <cit.> constructs a "data engine", employing a model-in-the-loop dataset annotation approach, thereby co-developing SAM in tandem. The resulting dataset, SA-1B, incorporates over 1 billion masks derived from 11 million licensed and privacy-preserving images. §.§ Segment Anything in Medical Images Building upon the foundational pre-trained models of SAM, many papers have delved into investigating its efficacy in diverse zero-shot MIS scenarios. <cit.> conducts a comprehensive evaluation of SAM in the everything mode for segmenting lesion regions within an array of anatomical structures (e.g., brain, lung, and liver) and imaging modalities (computerized tomography, abbreviated as CT, and magnetic resonance imaging, abbreviated as MRI). <cit.> subsequently scrutinizes SAM's performance in specific healthcare domains (optical disc and cup, polyp, and skin lesion segmentation) utilizing both the automatic everything mode and the manual prompt mode, employing points and bounding boxes as prompts. For MRI brain extraction tasks, <cit.> compares SAM's performance with the renowned Brain Extraction Tool (BET), a component of the FMRIB Software Library. <cit.> appraises SAM's performance in digital pathology segmentation tasks, encompassing tumor, non-tumor tissue, and cell nuclei segmentation on high-resolution whole-slide imaging. <cit.> adeptly implements SAM in polyp segmentation tasks, utilizing 5 benchmark datasets under the everything setting. Recently, an assortment of studies has rigorously tested SAM on over 10 publicly available MIS datasets or tasks <cit.>. Quantitative experimental results gleaned from these works reveal that the zero-shot performance of SAM is, on the whole, moderate and exhibits variability across distinct datasets and cases. To elaborate: 1) Utilizing prompt instead of everything mode, SAM can surpass state-of-the-art (SOTA) performance in tasks characterized by voluminous objects, smaller quantities, and well-defined boundaries when reliant on dense human feedback; 2) However, a considerable performance discrepancy remains between SAM and SOTA methods in tasks involving dense and amorphous object segmentation. 
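As background for the prompt forms examined in the remainder of the paper, the following minimal sketch shows how point and box prompts can be issued to SAM through Meta's publicly released segment-anything package. The checkpoint path, the stand-in image, and the coordinates are placeholders rather than values used in any of the studies cited above.

```python
# Minimal sketch of prompting SAM with point and box prompts using Meta's
# publicly released "segment-anything" package. The checkpoint path and the
# example coordinates are illustrative placeholders only.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # hypothetical local checkpoint path
predictor = SamPredictor(sam)

image = np.zeros((200, 150, 3), dtype=np.uint8)  # stand-in for an MRI slice rendered as RGB
predictor.set_image(image)

# A foreground click (label 1) together with a background click (label 0).
masks, scores, logits = predictor.predict(
    point_coords=np.array([[80, 100], [10, 10]]),
    point_labels=np.array([1, 0]),
    multimask_output=False,
)

# Alternatively, a bounding-box prompt in (x0, y0, x1, y1) format.
masks_box, _, _ = predictor.predict(box=np.array([60, 70, 120, 140]), multimask_output=False)
```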
§ TEMPORALLY-EXTENDED PROMPTS OPTIMIZATION METHODOLOGY As elucidated in the preceding analysis, the susceptibility of SAM to prompt forms is markedly pronounced in IMIS. This serves as the impetus for devising a framework adept at adaptively proffering suitable prompt forms for human specialists, contingent upon the current progression of segmentation. The human expert subsequently imparts feedback to SAM, employing the recommended prompt form. The ensuing discourse delineates the modeling of this framework, the temporally-extended prompts optimization, as an MDP (Section <ref>) and elaborates on its solution through reinforcement learning (Section <ref>). §.§ Problem Formulation We consider a standard setup consisting of an agent interacting with an environment in discrete timesteps. In our setting, the purpose of the agent is to recommend appropriate prompt forms for human experts. At each timestep t the agent receives an observation o_t, takes an action a_t and receives a scalar reward r_t. In general, the environment may be partially observed so that the entire history of observation-action pairs s_t=(o_1, a_1, …, a_t-1, o_t) may be required to describe the state. The behavior of an agent is defined by a policy, π, which maps states to a probability distribution over actions π: 𝒮→𝒫(𝒜). We model the problem as a Markov decision process with state space 𝒮, action space 𝒜, initial state distribution p(s_1), transition dynamics p(s_t+1| s_t, a_t), reward function r(s_t, a_t), and instantiate it as follows: State space. The state at timestep t is represented as a three-tuple S_t = (I, P_t-1, T_t-1), where I ∈ℝ^H× W× C represents the medical image slice input, P_t-1∈ℝ^H× W× K represents the segmentation logits from the previous time step t-1 (where K represents the number of segmentation classes, which in this case is 2), and T_t-1 is a set of interaction prompts provided before time step t. We consider four types of interaction prompt forms at each timestep: the forehead (i.e., foreground) point, the background point, the center point, and the bounding box; we introduce them in Section <ref>. Action space. The action space 𝒜 is a set of interaction forms provided by human experts at each time step. It is represented as a set of integers 𝒜={0,1,2,3}, where 0 denotes selecting the forehead point, 1 denotes selecting the background point, 2 denotes selecting the center point, which is defined as the point farthest from the boundary of the error regions, and 3 denotes selecting the bounding box. At each time step, the agent chooses an action from the action space 𝒜 to assist human experts with their interactions with SAM. Reward function. At each step t, the difference between the current DICE score <cit.>, dice(P_t, Y), and the previous DICE score, dice(P_t-1, Y), is calculated as the reward value R_t: R_t = dice(P_t, Y) - dice(P_t-1, Y), where Y is the ground truth, dice(P_t, Y) represents the DICE score between the current predicted result P_t and the ground truth, and dice(P_t-1, Y) represents the DICE score between the previous predicted result and the ground truth. In summary, as shown in Figure <ref>, the whole process is as follows: the agent recommends a prompt form in accordance with the policy π based on the raw image, the current segmentation probability, and the hints given by the doctor. Then the doctor gives SAM the corresponding prompt, and SAM updates the segmentation probability. The change in the segmentation result is used by the agent as the reward to update the policy π.
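To make this formulation concrete, the following minimal sketch implements one step of the interaction loop, with the reward computed as the dice improvement defined above. The `make_prompt` and `sam_segment` callables are hypothetical stand-ins for the expert's click rule and the SAM call; they are not part of the paper's released code.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """DICE score between a binary prediction and the ground-truth mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum() + eps))

def interaction_step(sam_segment, make_prompt, image, gt, prev_pred, prompts, action):
    """One step of the interaction loop described above.

    `action` selects the prompt form (0: forehead/foreground point, 1: background
    point, 2: center point, 3: bounding box). `make_prompt` and `sam_segment` are
    assumed interfaces, not the authors' implementation.
    """
    prompts = prompts + [make_prompt(action, prev_pred, gt)]
    new_pred = sam_segment(image, prompts)             # SAM re-segments with all prompts so far
    reward = dice(new_pred, gt) - dice(prev_pred, gt)  # R_t = dice(P_t, Y) - dice(P_{t-1}, Y)
    state = (image, new_pred, prompts)                 # S_t = (I, P_t, T_t)
    return state, reward
```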
In addition, the return from a state is defined as the sum of discounted future rewards R_t=∑_i=t^T γ^(i-t) r(s_i, a_i) with a discounting factor γ∈[0,1]. Based on this problem formulation, the goal of temporally-extended prompts optimization is to learn a policy that maximizes the expected return from the start distribution J=𝔼_r_i, s_i ∼ E, a_i ∼ρ^π[R_1]. We denote the discounted state visitation distribution for a policy π as ρ^π. §.§ Learning the TEPO Agent with RL Before introducing RL methods to obtain the optimal prompt, we first introduce some notation. The action-value function is used in many RL algorithms. It describes the expected return after taking an action a_t in state s_t and thereafter following policy π : Q^π(s_t, a_t)=𝔼_r_i ≥ t, s_i>t∼ E, a_i>t∼π[ R_t | s_t, a_t ]. Additionally, many approaches in RL make use of the recursive relationship known as the Bellman equation: Q^π( s_t, a_t ) = 𝔼_r_t, s_t+1∼ E[ r ( s_t, a_t ) + γ𝔼_a_t+1∼π[ Q^π( s_t+1, a_t+1) ] ]. This paper adopts the deep Q-network (DQN) <cit.> to instantiate the RL framework and learn the TEPO agent. Q-learning <cit.>, the core of DQN, is a commonly used off-policy RL algorithm that employs the greedy policy μ(s)=max _a Q(s, a). DQN adapts Q-learning to make effective use of large neural networks as action-value function approximators. We consider function approximators parameterized by θ^Q, which we optimize by minimizing the loss: L(θ^Q)=𝔼_s_t ∼ρ^β, a_t ∼β, r_t ∼ E[(Q(s_t, a_t |θ^Q)-y_t)^2], where y_t=r(s_t, a_t)+γ Q(s_t+1, μ(s_t+1) |θ^Q). The full algorithm, which we call TEPO, is presented in Algorithm <ref>. § EXPERIMENTS ON MIS This section provides an evaluation of the proposed TEPO on the benchmark, which is a prevalent dataset used for MIS tasks. We aim to address the following key questions, and the evaluation below focuses on answering them comprehensively: a) Does SAM with multi-step interaction outperform SAM with single-step interaction? b) Can the policies learned by the TEPO algorithm outperform rule-based policies? c) What strategies can be learned by TEPO? d) How stable are the strategies learned by TEPO? §.§ Dataset and Training Details Since SAM requires 2D images as input and 3D images are conventionally annotated by viewing them in slices, we adopt the practice of slicing the 3D magnetic resonance scans into axial slices, a method commonly used in related research efforts <cit.>. To evaluate the effectiveness of TEPO in the context of multi-step interaction, we carefully selected slices with sufficiently large foregrounds in the image. Specifically, we segment the Whole Tumor (WT) from the FLAIR images and choose slices that contain a minimum of 256 foreground pixel points for analysis. This carefully curated dataset enables accurate evaluation of the performance and potential of TEPO in future applications in MIS. The dataset for evaluation comprised a total of 369 patients. We split the dataset into three subsets: the training set comprised 319 patients and included 17,396 slices; the validation set consisted of 20 patients, corresponding to 1,450 slices; and the test set included 20 patients with 1,389 slices. We crop the images to 200 × 150, apply random flip, rotation, noise, and affine-transform data augmentation to the training dataset, and then rescale the intensity values. We train for 100 epochs, and in each epoch, 10,000 steps are sampled, and the Q network of TEPO is updated 100 times.
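A minimal sketch of the Q-network update corresponding to the loss L(θ^Q) above is given below. The state encoding, network size, and the use of a separate target network are simplifying assumptions for illustration, not the exact TEPO implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Placeholder state encoder followed by a head over the four prompt forms."""
    def __init__(self, state_dim: int, n_actions: int = 4, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)  # Q(s, a) for each action

def dqn_loss(q_net, target_net, batch, gamma: float = 0.99) -> torch.Tensor:
    """Squared temporal-difference error on a batch from a replay buffer."""
    s, a, r, s_next = batch                                # tensors: states, actions, rewards, next states
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s_t, a_t | theta^Q)
    with torch.no_grad():
        y = r + gamma * target_net(s_next).max(dim=1).values  # y_t = r + gamma * max_a' Q(s_{t+1}, a')
    return F.mse_loss(q_sa, y)
```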
The model is trained with a learning rate of 1e^-3 for the optimizer and a batch size of 64. §.§ Main Results [!htp] Action selection preference statistics and quantitative segmentation performance results for TEPO policies and rule-based policies. Labels used in the paper include “Fore” for the forehead point form, “Back” for the background point form, “Center” for the center point form, and “Bbox” for the bounding box form. These labels will be consistently used throughout the paper. “<-0.1" indicates the number of cases that reward less than 0.1, which means the algorithm misunderstands the interaction. In addition, we use boldface to indicate the highest dice score in each step. ! Algorithm variable Step 1 Step 2 Step 3 Step 4 Step 5 Step 6 Step 7 Step 8 Step 9 3*TEPO-2 Action Bbox (100.00%) Fore (99.57%) Fore (99.78%) Fore (99.71%) Fore (99.93%) Fore (99.86%) Fore (99.86%) Fore (99.86%) Fore (99.93%) 2-11 Dice 0.6901± 0.2094 0.6930± 0.1758 0.6937± 0.1694 0.6932± 0.1692 0.6940± 0.1693 0.6940± 0.1693 0.6940± 0.1694 0.6940± 0.1694 0.6940± 0.1694 2-11 -0.1 0 95 14 2 0 0 0 0 0 3*TEPO-3 Action Center (100.00%) Bbox (94.96%) Center (98.85%) Center (99.14%) Center (99.78%) Center (100.00%) Center (99.86%) Center (100.00%) Center (100.00%) 2-11 Dice 0.4658± 0.2877 0.7035± 0.1882 0.7611± 0.1687 0.7845± 0.1670 0.8026± 0.1553 0.8198± 0.1441 0.8263± 0.1409 0.8332± 0.1367 0.8362± 0.1378 2-11 -0.1 0 54 44 72 73 46 47 45 44 3*TEPO-5 Action Center (100.00%) Center (62.35%) Center (86.54%) Center (95.10%) Center (97.48%) Center (99.57%) Center (99.64%) Center (99.71%) Center (99.64%) 2-11 Dice 0.4658± 0.2877 0.6472± 0.2316 0.7369± 0.1926 0.7782± 0.1665 0.8021± 0.1577 0.8190± 0.1439 0.8288± 0.1375 0.8372± 0.1346 0.8421± 0.1322 2-11 -0.1 0 117 102 103 85 62 48 44 42 3*TEPO-7 Action Center (100.00%) Bbox (85.67%) Center (79.34%) Center (90.78%) Center (93.38%) Center (94.89%) Center (95.39%) Center (95.82%) Center (95.61%) 2-11 Dice 0.4658± 0.2877 0.6981± 0.1965 0.7552± 0.1720 0.7822± 0.1690 0.7991± 0.1612 0.8137± 0.1520 0.8240± 0.1424 0.8316± 0.1370 0.8342± 0.1380 2-11 -0.1 0 56 56 65 62 53 40 45 41 3*TEPO-9 Action Center (100.00%) Center (100.00%) Center (100.00%) Center (100.00%) Center (100.00%) Center (100.00%) Center (100.00%) Center (100.00%) Center (100.00%) 2-11 Dice 0.4658± 0.2877 0.6211± 0.2535 0.7192± 0.2131 0.7707± 0.1711 0.7990± 0.1583 0.8175± 0.1452 0.8302± 0.1390 0.8394± 0.1324 0.8449± 0.1307 2-11 -0.1 0 177 153 167 123 86 58 51 51 3*Random Action Center (26.78%) Center (29.30%) Fore (30.67%) Fore (33.48%) Back (32.04%) Fore (33.48%) Center (33.33%) Back (33.33%) Fore (33.55%) 2-11 Dice 0.4129 ± 0.3417 0.5723 ± 0.2947 0.6561 ± 0.2562 0.7072 ± 0.2260 0.7354 ± 0.2094 0.7571 ± 0.1943 0.7818 ± 0.1727 0.7956 ± 0.1627 0.8052 ± 0.1568 2-11 -0.1 0 121 141 123 115 89 66 50 39 3*Alternately Action Fore (100.00%) Back (100.00%) Fore (100.00%) Back (100.00%) Fore (100.00%) Back (100.00%) Fore (100.00%) Back (100.00%) Fore (100.00%) 2-11 Dice 0.4658 ± 0.2877 0.6010 ± 0.2691 0.6460 ± 0.2470 0.7280 ± 0.2067 0.7332 ± 0.2098 0.7823 ± 0.1730 0.7840 ± 0.1777 0.8138 ± 0.1512 0.8052 ± 0.1652 2-11 -0.1 0 98 207 98 172 59 111 27 92 The performance of the proposed TEPO algorithm is evaluated on the dataset for medical image segmentation tasks and compared with three rule-based policy baselines: the one-step oracle agent, the random agent, and the alternately changing agent. 
The one-step oracle agent is an optimal decision-making agent that has access to comprehensive information and can observe the reward after adapting various interaction forms. This allows it to achieve the highest accuracy in a single step and to explore efficient interaction strategies for the given task. The random agent, on the other hand, uniformly samples actions from available action sets and can be used to simulate clinicians without any preference for any particular interaction form for the task at hand. The alternately changing agent applies a policy that alternately chooses the forehead point and the background point. We evaluate the agent's performance through the dice score, computed using a ground truth mask and measurements taken at multiple timesteps (N={2,3,5,7,9}). At each timestep, the agent first chooses an action to indicate what form of interaction is required. To simulate a clinician's behavior, we use rules consisting of choosing specific positions, such as the forehead, background, and center, and drawing bounding boxes around the forehead region. Specifically, we select the forehead, background, and center points that are farthest from the boundaries of the false negative, false positive, and error regions, respectively. For the bounding box, we extend the forehead region by 10 pixels and draw a rectangle. The comparison of the performance of various interaction strategies is evaluated with respect to the number of interactions. As shown in Figure <ref>, the different lines correspond to the different agents' performance. “TEPO-X” indicates that the agent is trained in the X-step interaction scenario. For example, “TEPO-2” means a two-step scenario. “Random” denotes the random agent, “Alternately” denotes the alternately changing agent, and “1-step Oracal” denotes the one-step Oracal agent. We will use the same labeling convention throughout the paper unless noted otherwise. It is worth noting that we train in different interaction step scenarios, but in testing, we use 9-step interactions to find out comprehensive performances. §.§.§ Quantitative experimental analysis Q#a: Does the SAM in multi-step interaction mode outperform the SAM in single-step interaction mode? As illustrated in Figure <ref>, the TEPO-2 agent stays the same after the third round, this is because in our experiments, if the shortest distance of all points from the edge in the corresponding region is less than two pixels, then the user does not interact anymore. Table <ref> indicates that the TEPO-2 policy predominantly selects the forehead point starting from step two. However, the false negative region is too small to click, so the TEPO-2 policy stops interacting at step five for all test cases. Conversely, the performance of other multi-step policies improves with an increase in the number of interactions, showcasing that SAM can be enhanced through multiple rounds of interactions. Moreover, expect TEPO-2, other policies perform better than the one-step Oracle agent, implying that multi-step interactions are more effective for medical image segmentation than the single-step interaction mode. Q#b: Can the policies learned by the TEPO algorithm outperform the rule-based policies? The experimental results in Figure <ref> indicate that the TEPO-2 policy performs better than random and alternating selection methods during the initial two interactions. Moreover, the performance of all other RL-based policies is superior to rule-based approaches. 
These findings provide evidence that the TEPO algorithm significantly boosts the efficacy of SAM in interactive medical scenarios, even in zero-shot mode. Q#c: What strategies can be learned by the TEPO algorithm? As the TEPO algorithm is trained under different interaction round scenarios, the learned strategies exhibit variations, as summarized in Table <ref>. TEPO-2 employs a straightforward strategy: selecting the bounding box in the first step and the forehead point in subsequent ones. This strategy performs well in the initial two steps, with the performance in the first step nearing that of the one-step oracle agent, which adopts an ideal strategy. TEPO-3 applies a nearly deterministic strategy that chooses the bounding box at the second step and chooses center points at other steps. Moreover, TEPO-5 and TEPO-7 use more uncertain strategies that primarily employ the center point but may resort to alternative forms in the second and third steps. TEPO-9 finds a trivial strategy of choosing the center point at each step, resulting in the best performance in multiple interactions. Q#d: How stable are the strategies learned by TEPO? One issue that may affect the performance of TEPO is interactive misunderstanding, where user interactions result in reduced segmentation dice scores. In this study, we consider an interactive misunderstanding to have occurred when the segmentation dice score decreases by over 0.1. We analyze the occurrence of interactive misunderstandings for different strategies on our test data, as presented in Table <ref>. For a more intuitive comparison, we plot the number of interactive misunderstandings for each strategy at different interaction steps in Figure <ref>. As TEPO-2 only applies to the initial two interactions, we exclude it from the plot. The results indicate that TEPO-3, TEPO-5, and TEPO-7 exhibit fewer misunderstandings than the random and alternately changing agents, thereby indicating superior stability and performance. §.§.§ Qualitative experimental analysis To evaluate the effectiveness of different strategies and investigate the causes of misunderstandings that occur with SAM, we conducted a qualitative analysis and present their performance on a single medical image in Figure <ref>. The first column displays the raw image, while the middle columns show the interaction processes and corresponding segmentation outcomes. The last column provides the ground truth. Among the different strategies, TEPO-2 demonstrates relatively weak performance, as it only involves two effective interactions. The TEPO-9 and alternately changing agents use purely point-based interaction. TEPO-9 produces a final outcome as good as the strategies that use the bounding box and ultimately obtains the best result after nine interactions, whereas the alternately changing policy performs poorly due to less effective interactions. TEPO-3 and TEPO-7 consistently use the bounding box in the second interaction and select center points in all other interactions. TEPO-5 uses the bounding box in the fourth interaction and center points in all other interactions. These three policies produce similar final results. In addition, we observe a misunderstanding issue in some interactions, such as the seventh interaction with TEPO-5 and the eighth interaction with TEPO-3 and TEPO-7. This is likely due to the corresponding region being too small for SAM to adequately understand the human feedback.
Overall, our results suggest that SAM cannot accurately achieve segmentation in a single interaction in medical tasks without being properly tuned. However, with multiple rounds of interaction, it can achieve considerable results. Moreover, the strategies learned by TEPO demonstrate better segmentation performance compared to rule-based strategies. § CONCLUSION This paper focuses on assessing the potential of SAM’s zero-shot capabilities within the interactive medical image segmentation (IMIS) paradigm to amplify its benefits in the medical image segmentation (MIS) domain. We introduce an innovative reinforcement learning-based framework, temporally-extended prompts optimization (TEPO), to optimize prompts that can enhance segmentation accuracy in multi-step interaction situations. Our empirical study, conducted on the benchmark, highlights the prompt sensitivity of SAM and demonstrates that TEPO can further enhance its zero-shot capability in the MIS domain. Specifically, TEPO successfully reduces the incidence of interactive misunderstandings, thus improving segmentation accuracy and stability in medical images. These findings make a valuable contribution to the development of advanced MIS techniques, showcasing the potential efficacy of prompts optimization which expands the zero-shot capability of foundation models like SAM.
http://arxiv.org/abs/2306.06900v1
20230612070733
FocalGatedNet: A Novel Deep Learning Model for Accurate Knee Joint Angle Prediction
[ "Humaid Ibrahim", "Lyes Saad Saoud", "Ahmad Aljarah", "Irfan Hussain" ]
cs.RO
[ "cs.RO" ]
Submitted to IEEE Robotics and Automation Letters (RA-L) FocalGatedNet: A Novel Deep Learning Model for Accurate Knee Joint Angle Prediction Humaid Ibrahim ^1,2, Lyes Saad Saoud ^3, Ahmad Aljarah^2 and Irfan Hussain^2,3,4 ^1National Service and Reserve Authority, Khalifa University, Abu Dhabi, United Arab Emirates, P O Box 127788, Abu Dhabi, UAE ^2Advanced Research and Innovation Center, Khalifa University, Abu Dhabi, United Arab Emirates, P O Box 127788, Abu Dhabi, UAE ^3Mechanical Engineering Department, Khalifa University, Abu Dhabi, United Arab Emirates, P O Box 127788, Abu Dhabi, UAE ^4Khalifa University Center for Autonomous and Robotic Systems, Khalifa University, Abu Dhabi, United Arab Emirates, P O Box 127788, Abu Dhabi, UAE Predicting knee joint angles accurately is critical for biomechanical analysis and rehabilitation. This paper introduces a new deep learning model called FocalGatedNet that incorporates Dynamic Contextual Focus (DCF) Attention and Gated Linear Units (GLU) to enhance feature dependencies and interactions. Our proposed model is evaluated on a large-scale dataset and compared to existing models such as Transformer, Autoformer, Informer, NLinear, DLinear, and LSTM in multi-step gait trajectory prediction. Our results demonstrate that FocalGatedNet outperforms other state-of-the-art models for long-term prediction lengths (60 ms, 80 ms, and 100 ms), achieving an average improvement of 13.66% in MAE and 8.13% in RMSE compared to the second-best performing model (Transformer). Furthermore, our model has a lower computational load than most equivalent deep learning models. These results highlight the effectiveness of our proposed model for knee joint angle prediction and the importance of our modifications for this specific application. § INTRODUCTION Wearable robotic exoskeletons have recently gained significant attention due to their potential impact in various fields, including healthcare, industry, space, and military applications <cit.>. In healthcare, exoskeletons are primarily used for the rehabilitation and assistance of patients with post-stroke and age-related disorders, to enhance patients' mobility, assist physical therapists, and reduce clinical rehabilitation costs <cit.>. The performance of an exoskeleton is highly influenced by its control strategy, which defines how it operates and interacts with the user <cit.>. Exoskeleton control strategies are typically categorized into low-, middle-, and high-level control, with the high-level control being responsible for user intention detection, terrain detection, and event estimation <cit.>. The accuracy of the high-level controller significantly contributes to the exoskeleton's overall functionality.
Gait data analysis is typically used to detect the user's intentions or estimate essential gait data, such as angular positions, velocities, and accelerations, to develop an accurate controller that enhances the exoskeleton's performance <cit.>. Various sensors, such as inertial measurement units (IMUs), motion capture systems, foot pressure insoles, gyroscopes, accelerometers, Electromyography (EMG), and Electroencephalography (EEG), are utilized to estimate kinetic and kinematic parameters, muscle activity, and brain activity. Researchers have developed multiple algorithms to analyze and process sensor data, estimate human intention, and predict essential gait data. Recent developments in machine and deep learning techniques have replaced traditional gait analysis techniques that relied on input from the user and surrounding environment. These new methods are automatic and faster, allowing proper time for low-level controllers to react <cit.>. Machine learning algorithms establish accurate relationships between inputs and outputs in non-linear systems and show enhanced data variability handling, making them particularly effective for lower-limb locomotion analysis and pathological gait analysis <cit.>. Deep learning models are one of the more prominent techniques used in gait analysis, with recent developments in deep learning models outperforming traditional machine learning techniques such as Support Vector Machines (SVM) and Multilayer Perceptron (MLP) <cit.>. A study by R. Kolaghassi et al. <cit.> on the stability of deep learning models in one-step-ahead gait trajectory prediction found that the Transformer model was more robust to noise compared to other traditional deep learning models like LSTM <cit.> and CNN <cit.> while also maintaining a similar accuracy. Despite the efficacy of these deep learning models, Transformers in particular have not been evaluated for their usefulness in long-term gait trajectory prediction for exoskeleton control systems. The exoskeleton's inherent mechanical delay <cit.> poses a challenge to exoskeleton control, which can impact the response time of the control system. As exoskeletons move with their user, having an accurate and timely response is paramount and necessary to achieve rehabilitation goals (such as assisting in movement and lowering metabolic costs). Having a feed-forward input containing future gait trajectories can help alleviate this issue. While there have been several papers on deep learning models for gait prediction <cit.><cit.><cit.><cit.>, they only utilize a small output window size of a few time-steps. To the best of our knowledge, the long-term performance of deep learning models in gait prediction has yet to be explored. To address these limitations, in this paper we propose a novel deep learning model, FocalGatedNet, and evaluate its performance (alongside other deep learning models) across different output lengths. The FocalGatedNet model, based on the Transformer model, incorporates Dynamic Contextual Focus (DCF) Attention and Gated Linear Units (GLU) to enhance feature dependencies and interactions for accurate knee joint angle prediction. Knee joint angle prediction is a critical task in the high-level control of lower-limb exoskeletons, particularly in activities such as walking, running, and jumping. Accurate prediction significantly enhances the performance of lower-limb exoskeletons, making it crucial for the success of exoskeletons in multiple applications.
The proposed FocalGatedNet model is evaluated using a publicly available dataset consisting of gait data collected from healthy and pathological individuals. The experimental results demonstrate that the proposed model outperforms state-of-the-art deep learning models in knee joint angle prediction accuracy. The focal loss function in the FocalGatedNet model effectively handles the class imbalance problem in the dataset and improves the model's ability to focus more on the hard samples. The GLU layer in the proposed model captures the long-term dependencies in the gait data and enhances the model's ability to predict knee joint angles accurately. The main contributions of this paper can be summarized as follows: * We propose FocalGatedNet, a novel deep-learning model for predicting knee joint angles in lower-limb exoskeleton control. Our model integrates two key components, namely Dynamic Contextual Focus (DCF) Attention and Gated Linear Units (GLU), to effectively capture feature dependencies and interactions. Incorporating DCF Attention and GLU improves knee joint angle prediction accuracy, contributing to improved performance. * We evaluate the proposed model using a publicly available dataset that consists of gait data collected from healthy individuals, demonstrating its effectiveness in knee joint angle prediction accuracy for various output window sizes. The rest of the paper is organized as follows. In Section II, we provide a brief overview of related work in gait analysis and knee joint angle prediction. Section III introduces the proposed FocalGatedNet model architecture and explains the dynamic contextual focus attention and gated linear units in detail. In Section IV, we describe the experimental setup and dataset used to evaluate the proposed model's performance. Section V presents the results and discussion that discusses the experimental results and the potential applications of the proposed model in the field of wearable robotics and exoskeletons. Finally, in Section VI, we provide concluding remarks and future directions of this research. § RELATED WORKS §.§ Long-term series forecasting Deep learning models have been used in exoskeleton control for predicting trajectories as feed-forward inputs to the low-level controller. Models like LSTM and CNN have been used for predicting various kinematics like the lower-limb joint angles <cit.><cit.>, linear accelerations and angular velocities <cit.><cit.>. However these studies mainly employ a short prediction length of a single or a few time-steps. Many models in the realm of long-term time series forecasting (LTSF) have been used and more are being developed. Models based on the CNN were created like the Temporal Convolutional Network (TCN) <cit.> and the Gated Convolutional Neural Network (Gated CNN) <cit.>. They are both types of neural network architectures that use convolutional layers to process sequential data. More importantly, they address the issue of long-term dependencies in CNNs, which are designed to learn local patterns in spatial data <cit.>. TCN and Gated CNN differ in how they handle temporal dependencies in the data. TCNs use dilated causal convolutions, which allow them to capture long-term dependencies in the data. Gated CNNs, on the other hand, use a Gated Linear Unit (GLU) to selectively filter information from previous time steps, serving as a type of activation function. This allows for a better selection of features that are important for prediction. 
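To make the contrast concrete, the sketch below shows both building blocks in PyTorch: a dilated causal 1-D convolution of the kind used in TCNs, and a gated convolution whose GLU output selectively filters features. This is only an illustrative sketch; the channel counts, kernel size, and dilation are arbitrary choices and are not taken from the cited works.

import torch.nn as nn
import torch.nn.functional as F

class DilatedCausalConv1d(nn.Module):
    """TCN-style layer: left-only padding so the output at time t sees inputs up to t."""
    def __init__(self, channels=64, kernel_size=3, dilation=2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                     # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))

class GatedConv1d(nn.Module):
    """Gated-CNN-style layer: a GLU splits the channels into a value part and a sigmoid gate."""
    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                     # x: (batch, channels, time)
        return F.glu(self.conv(x), dim=1)     # sigmoid(gate) multiplied element-wise with value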
Compared to other activation functions such as the rectified linear unit (ReLU) or Sigmoid, the GLU offers several benefits. First, it can capture long-range dependencies in the input data by allowing information to flow through the gating signal. Second, it can produce sparser representations of the input data, which can be useful for reducing overfitting. Overall, both TCNs and gated CNNs have been shown to perform well on time series forecasting tasks. TCNs have the advantage of being able to capture longer-term dependencies in the data, while gated CNNs are generally more computationally efficient and may be easier to interpret. The choice of architecture ultimately depends on the specific task and data being used. We decided to adopt the GLU in our model to reduce computational time and complexity in consideration of the exoskeleton control system's delay and performance. More recently, transformers <cit.> have been utilized for the purpose of long-term series forecasting. They have been employed in various fields such as natural language processing, finance, economics, energy, and weather forecasting due to their ability to process sequential data. Traditional models like RNNs and LSTM are limited by their complexity. RNNs process information sequentially, making it significantly slower for long input sequences when compared to the transformer's parallel-processing technique. Compared to CNNs, transformers retain more global information and are better at capturing long-term dependencies. They are also more effective at generalization than CNNs, as discussed in <cit.>. Their experiments showed that for generalization on out-of-distribution samples, the Transformer outperformed the CNNs without the need of pretraining on large datasets. This is a major advantage for exoskeleton control. Each individual person's muscle activation patterns are different, this robustness will prove to be useful for model generalization. Transformers have shown a significant improvement in time series forecasting <cit.>. The self-attention mechanism of the Transformer allows it to identify essential patterns in the time series and learn dependencies between different time steps. However, a limitation of the transformer model is in its self-attention mechanism. The mechanism requires quadratic time and memory (𝒪(L^2) complexity, where L is the sequence length) with respect to the sequence length. For longer sequences, it can become unmanageably large and unusable. Other models like Informer <cit.> tackle this issue through their proposed ProbSparse attention which achieves 𝒪(L logL) complexity. Autoformer <cit.> introduces the Auto-Correlation mechanism which captures the correlations between different time steps in the input sequence by time delay aggregation. This mechanism achieves a similar 𝒪(L logL) complexity. Other non-transformer-based models like DLinear and NLinear <cit.> utilize a different preprocessing technique to achieve competitive results with only linear layers in the model. The DLinear model utilizes the decomposition scheme in Autoformer to process the raw data and through linear layers to calculate the prediction. The NLinear model works best when there is a distribution shift in the data. It subtracts the input sequence by the last value in the sequence, processes it in a linear layer, then adds back the value to the prediction. These simple changes manage to achieve better results compared to the other transformer-based models. 
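Because this normalization step is central to NLinear's robustness to distribution shift, we restate it as a short sketch for a single input channel; the lookback and horizon lengths below are placeholders, and this is our paraphrase of the idea rather than the reference implementation.

import torch.nn as nn

class NLinearSketch(nn.Module):
    """Subtract the last observed value, apply one linear map over time, add the value back."""
    def __init__(self, lookback=128, horizon=100):
        super().__init__()
        self.proj = nn.Linear(lookback, horizon)   # maps the past window to the future window

    def forward(self, x):                          # x: (batch, lookback), a single channel
        last = x[:, -1:]                           # level at the end of the window
        return self.proj(x - last) + last          # forecast the de-shifted series, restore the level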
Naturally, these linear models have a complexity of 𝒪(L). Building on the work done by R. Kolaghassi et al. <cit.>, we will further explore the efficacy of the base Transformer and other Transformer-based models across different output sizes in our study.

§.§ Exoskeleton Delay

The concept of compensating exoskeleton transmission delay through predicting future values was explored in <cit.>. They predicted the trajectory of 10 time-steps (around 200 ms) using two models, MLP and CNN. They mention that predicting the trajectory will help compensate for the response time of the exoskeleton control system. Fang et al. <cit.> state that delays caused by various aspects of the exoskeleton's mechanical structure can cause significant reductions in the actual assistance provided by the exoskeleton, as quantified through simulated energy savings. Identifying the current phase of the gait cycle can enhance the control of assistive-powered prostheses. The gait phase holds crucial information necessary for determining the appropriate angle, angular velocity, and torque, thereby enhancing the controller's performance. Consequently, improved control can positively impact the patient, assisting in reducing energy expenditure while walking with a powered limb.

While being able to predict to a certain point of the gait cycle can be beneficial to exoskeleton control, balancing the model prediction error and the output length can be difficult. Yi et al. <cit.> determined in their experimental study that any prediction time in the interval of 27 ms to 108 ms can be considered optimal for knee joint angle prediction. This interval was inspired by the concept of electromechanical delay (EMD) <cit.>, which holds that there is a delay of 30-100 ms between the activation of muscles, as detected by the EMG sensor, and the force and movement generated by the muscle. The outcomes of their tests supported this claim, with the performance of their networks declining significantly when making predictions beyond the interval of 30-100 ms. However, one limitation of their study is that they did not evaluate any prediction times below the lower limit of the interval. In our study, we will investigate the same prediction interval but also include a one-step prediction time as well. 
Our primary objective in this study is to utilize the unique features of EMD along with our proposed model to investigate potential relationships between EMD and prediction time. In addition, we aim to maximize exoskeleton transmission delay compensation through longer prediction lengths. To achieve this, we leverage EMG and IMU data as our primary features. By doing so, we hope to contribute to the development of more effective assistive-powered exoskeleton/prostheses that can provide patients with better mobility and reduce their energy expenditure while walking with a powered limb. § FOCALGATEDNET The Transformer model architecture consists of an encoder and a decoder. These blocks consist of self-attention mechanisms and feed-forward networks. Building on this architecture we introduce FocalGatedNet, which combines the base transformer encoder with a modified decoder that utilizes stacked DCF attention layers and a GLU, as seen in Fig. <ref>. The proposed attention is similar to that of the original Attention mechanism <cit.>, which is to capture the importance or relevance of different parts of the input sequence when computing the output representation. However, there are some key differences between DCF attention and the traditional Attention mechanism: The main difference is that the DCF Attention operates on the input sequence in a hierarchical manner, while the traditional Attention mechanism treats all positions in the input sequence equally. The DCF architecture is given in Fig. <ref>. The mathematical description of the DCF is given as follows: Given an input sequence x∈ℝ^B× L× d_model, where B is the batch size, L is the sequence length, and d_model is the hidden size of the input embedding, the self-attention mechanism computes the query, key, and value representations as follows: Q = x W_Q ∈ℝ^B x L x d_model K = x W_K ∈ℝ^B x L x d_model V = x W_V ∈ℝ^B x L x d_model where W_Q, W_K, W_V are linear projections, each of size d_model× d_model. Then, the dot-product attention scores between queries and keys are computed as: A = Softmax (QK^T/√(d_modelh)) where h is the number of attention heads, A is the attention weights tensor, and 1/√(d_modelh) is the scaling factor SF. If a mask is provided, it is applied to the attention scores as: A = A ⊙ M where M∈0,1^B× h× L× L is the mask tensor. Next, the contextual focus vector is computed as: C = ∑_i=1^L A_i · V_i ∈ℝ^B x h x L x L The contextual focus vector is passed through a Softmax function with dropout to compute the attention weights: W = dropout ( Softmax ( flatten (C) ) ) ∈ℝ^B x h x L where flatten(C) denotes reshaping C from ℝ^B× h× d_model to ℝ^Bh× d_model. Finally, the weighted sum of the attention weights and values are computed as: O = concat(WV) W^O ∈ℝ^B x L x d_model where concat(W V) denotes concatenating W and V along the attention head dimension, and out is another linear projection layer of size d_model× d_model. The DCF uses a different parameterization of the attention weights, which is more effective than the traditional dot product attention used in the original Attention mechanism. Our proposed model achieves the same 𝒪(L^2d) time complexity as the base transformer. We then incorporate a gating mechanism, the GLU, as seen in Fig. <ref>. This mechanism can help the model selectively attend to different parts of the input sequence and further improve the quality of the learned representations. 
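For readers who prefer code to equations, the following PyTorch sketch mirrors the Q/K/V projections, the scaled dot-product scores, the contextual-focus vector, and the final re-weighting described above, followed by a convolutional GLU gate in its standard form. It is our own schematic reading of those expressions rather than the released implementation; the head-splitting details, the dropout rate, and the reduction used to form the focus weights are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DCFAttentionSketch(nn.Module):
    """Schematic multi-head dot-product attention with a contextual-focus re-weighting."""
    def __init__(self, d_model=512, n_heads=8, dropout=0.1):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d_k = n_heads, d_model // n_heads
        self.W_q, self.W_k, self.W_v, self.W_o = (nn.Linear(d_model, d_model) for _ in range(4))
        self.drop = nn.Dropout(dropout)

    def forward(self, x, mask=None):                        # x: (B, L, d_model)
        B, L, _ = x.shape
        split = lambda t: t.view(B, L, self.h, self.d_k).transpose(1, 2)   # (B, h, L, d_k)
        Q, K, V = split(self.W_q(x)), split(self.W_k(x)), split(self.W_v(x))
        scores = Q @ K.transpose(-2, -1) / (self.d_k ** 0.5)               # (B, h, L, L)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        A = F.softmax(scores, dim=-1)
        C = A @ V                                           # contextual focus, (B, h, L, d_k)
        # re-weight positions with a softmax over the focus vector, then gate the values
        W = self.drop(F.softmax(C.mean(dim=-1), dim=-1)).unsqueeze(-1)     # (B, h, L, 1)
        out = (W * V).transpose(1, 2).reshape(B, L, -1)     # concatenate the heads
        return self.W_o(out)

class GLUGateSketch(nn.Module):
    """Convolutional feature gate: sigmoid(W_g * x) multiplied element-wise with (W_h * x)."""
    def __init__(self, d_model=512, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.gate = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)
        self.value = nn.Conv1d(d_model, d_model, kernel_size, padding=pad)

    def forward(self, x):                                   # x: (B, L, d_model)
        x = x.transpose(1, 2)                               # Conv1d expects (B, C, L)
        return (torch.sigmoid(self.gate(x)) * self.value(x)).transpose(1, 2)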
The GLU activation function can be represented mathematically as follows: GLU(x) = σ(W_g * x) ⊙ (W_h*x) where σ is the sigmoid activation function, * represents convolution, ⊙ represents element-wise multiplication, x is the input to the layer, W_g and W_h are the weights of the gating and output convolutional layers, respectively. The GLU has a complexity of 𝒪(L/k), where k is the kernel width. The addition of the GLU to our proposed attention will reduce the dimensionality of the input by performing feature gating, which allows the model to selectively focus on more relevant features. Similar to the Gated CNN, the GLU will reduce the number of parameters required to represent the input data, reducing the risk of overfitting and improving generalization. Particularly pertinent to exoskeleton control is the faster inference times. Allowing us to further compensate for the exoskeleton transmission delay while maintaining prediction accuracy. We explore the effects of our model components individually in our ablation study in Section V. § METHODOLOGY §.§ Dataset This study uses an open-source dataset on human gait kinematics and kinetics in four different locomotion behaviors <cit.>. In the dataset, 22 able-body subjects wearing various sensors performed the following locomotion actions: walking on level ground at a slow, normal, and fast pace relative to the subject's preferred speed on clockwise and counterclockwise circuits; on a treadmill at 28 speeds from 0.5 m/s to 1.85 m/s in 0.05 m/s increments; up and down a ramp with inclines of 5.2^∘, 7.8^∘, 9.2^∘, 11^∘, 12.4^∘, and 18^∘; and up and down stairs with step heights of 4 in, 5 in, 6 in, and 7 in. Three wearable sensor signals are provided in the data set: EMG data of various muscle groups; Acceleration and gyroscope information from 6-axis IMUs; and joint angle data from electrogoniometers. The EMG data were collected at a sampling frequency of 1000 Hz and bandpass filtered at a cutoff frequency of 20 Hz - 400 Hz. It includes 11 muscle groups: gluteus medius, external oblique, semitendinosus, gracilis, biceps femoris, rectus femoris, vastus lateralis, vastus medialis, soleus, tibialis anterior, and gastrocnemius medialis. 4 IMUs on the torso, thigh, shank, and foot segments were collected at a frequency of 200 Hz and lowpass filtered at 100 Hz. 3 GONs located on the hip, knee, and ankle were recorded at 1000 Hz and filtered with a 20 Hz cutoff frequency lowpass filter. In addition to the raw data from the sensors, the data set provides additional processed biomechanics data, including inverse kinematics/dynamics, joint power, gait cycle, force plate, and motion capture data. §.§ Baseline Models and Experimental Setup We implemented the models using PyTorch 1.9.0 in Python 3.6 trained on a computer equipped with an Intel i7-13700K CPU @ 3.40 GHz and an NVIDIA GeForce RTX 3090 Ti. We used the Adam optimizer with an adaptable learning rate that starts at 10^-4 and decays by a factor of 2 every epoch. We trained the models for 10 epochs with an early stopping algorithm that will terminate training when our validation error metric does not improve for 3 epochs to prevent overfitting. The batch size was set to 32. Due to the stochastic nature of the training process, we trained the models for 10 iterations and selected the best results from the iterations. We used a single subject's data and split it into 80% training and 20% testing. We will not be tackling the generalization issue in this paper as it is outside the scope of our study. 
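The optimization schedule just described (Adam starting at 10^-4 and halved each epoch, at most 10 epochs, early stopping with a patience of 3) amounts to only a few lines of PyTorch. The sketch below assumes generic model, train_loader, and val_loader objects and an MSE objective; none of these are specified in the text, and the loss function in particular is an assumption.

import torch

def train(model, train_loader, val_loader, epochs=10, patience=3, device="cuda"):
    """Adam at 1e-4, learning rate halved every epoch, early stopping on validation error."""
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.5)   # decay by a factor of 2
    loss_fn = torch.nn.MSELoss()                                     # assumed training objective
    best, stall = float("inf"), 0
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
        sched.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x.to(device)), y.to(device)).item()
                      for x, y in val_loader) / max(len(val_loader), 1)
        best, stall = (val, 0) if val < best else (best, stall + 1)
        if stall >= patience:                                        # early stopping
            break
    return model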
To measure inference time, we ran the models 10^3 times (with 10^2 warm-up steps for the GPU) and recorded the average time for a single step. Our input features included all EMG, GON, and IMU sensor data for a total of 40 features. The knee sagittal angle will be our output. All data will be normalized to achieve zero mean and unit standard deviation. The IMU and toe/heel events were up-sampled using interpolation from 5 ms to 1 ms intervals to match EMG and goniometer data sampling rates. The lookback window was set to 128 ms (128 data points), and the forecasting horizon was set to 1 ms, 20 ms, 40 ms, 60 ms, 80 ms, and 100 ms (1, 20, 40, 60, 80, and 100 data points) into the future. These forecasting horizons are to investigate the 30-100 ms delay between the activation of muscles read by the EMG sensor and the force and movement produced by the muscle, as discussed in <cit.>. In addition, the larger prediction times can help compensate more for the response delay caused by the exoskeleton's mechanical parts <cit.>. We will evaluate FocalGatedNet's performance against other recent state-of-the-art Transformer-based models, Autoformer and Informer. Other recent models like the DLinear and NLinear models will be included in our study as well. An encoder-decoder LSTM model will be included as a baseline. The Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) metrics are calculated to compute the performance of the models. The FocalGatedNet model will contain 3 encoder layers and 2 decoder layers as we found these settings to yield the best results for our case. We also found that running the model without temporal embeddings (only value + positional embedding) achieved better results, while the other models performed best without positional embeddings (only value + temporal embedding). The other transformer-based models will have their default number of layers (2 encoders, 1 decoder), and the stacked LSTM model will have 1 encoder/decoder layer and a kernel size of 3. The number of attention heads (8 heads) and model dimensions (512 for the model, and 2048 for the feed-forward network) will be kept the same. Despite having more encoder and decoder layers, FocalGatedNet still displayed better inference time compared to most of the other transformer-based models, as confirmed in the next section. § RESULTS AND DISCUSSION In this section, we present the comparative analysis of various models for sagittal knee angle prediction across different prediction lengths Table <ref>. Notably, our findings reveal that the LSTM model demonstrates superior performance for shorter prediction lengths, while the FocalGatedNet model outperforms other models and even surpasses the base transformer for longer-term predictions in the range of 30-100 ms. This observation highlights the significant improvement achieved by the FocalGatedNet model and its ability to capture complex patterns and dependencies for extended forecasting horizons. For 60 ms, FocalGatedNet achieves a 5.16% (1.114 → 1.058) MAE reduction and 2.67% (1.442 → 1.404) RMSE reduction. Similarly, for 80 ms FocalGatedNet reduces the MAE by 18.85% (1.347 → 1.115) and RMSE by 10.30% (1.756 → 1.584). For 100 ms, FocalGatedNet gives a 16.98% (1.463 → 1.234) improvement in MAE and an 11.41% (2.010 → 1.793) improvement in RMSE. For 1 ms and 40 ms settings, FocalGatedNet is outperformed by LSTM and base transformer model, respectively. For 20 ms, it provides a minor performance boost over the Transformer model. 
The error is decreased by 3.40% in MAE and 5.85 % in RMSE. For longer prediction lengths (60 ms, 80 ms, 100 ms) FocalGatedNet surpasses the other models with an average improvement of 13.66% in MAE and 8.13% in RMSE, compared to Transformer, the second-best performing model. These results show that FocalGatedNet has better long-term performance compared to the other models in predicting the knee angle. In addition, our model has a lower computational load when compared to the other transformer-based models, as seen in Table <ref>. The overall inference time doesn't change significantly across the lengths we chose for the Transformer-based models and linear models. However, LSTM's inference time increased significantly as the output prediction length increased, from 8 s for a single time-step to 20 s for 100-time steps. This is due to LSTM's sequential information processing that proves to be detrimental to longer sequences. Since the output lengths are relatively small, we don't see the effects of the transformer's quadratic time and memory complexity 𝒪(L^2). Wu et al. <cit.> explores larger output lengths and experimentally confirm that the base transformer has better performance than the Informer and Autoformer at lower prediction lengths. As the prediction length increases exponentially, the Transformer's memory impact explodes while the Autoformer and Informer models maintain a modest footprint and efficiency 𝒪(L log L). Despite having more encoder/decoder layers than the other transformer-based models, FocalGatedNet had better time efficiency and equivalent training speed to the Informer model. While the linear models had fast speeds, their performance degrades significantly for larger output lengths. Making them ineffective for our task of forecasting knee angles. We found that the overall best interval for the prediction time is between 40 ms and 80 ms, corroborating the recommended prediction time of 54 ms and 81 ms for controlling exoskeletons, as discussed previously in Section II B. This is apparent in the Autoformer, LSTM, Transformer, and FocalGatedNet plots as seen in Fig. <ref> & Fig. <ref>. Their performance improves in the interval of 40 - 80 ms. The error starts to plateau in this range before increasing again once outside the interval. An ablation study on the individual effects of the DCF and the GLU blocks is included. We tested four cases: (a) both DCF and GLU; (b) only DCF; (c) only GLU; (d) and the original Transformer model. As seen in Table <ref>, for smaller prediction lengths the GLU-only model outperforms the base transformer and the GLU+DCF model. But for lengths 60 and above, our model starts performing better compared to the individual blocks. Without the GLU, the DCF performance worsens significantly. The inclusion of the GLU block is necessary to the model's performance. Despite the performance results presented by the FocalGatingNet model for knee joint angle prediction, it has a few limitations that could be addressed in future research. First, it requires a large amount of training data to achieve optimal performance. As with any deep learning model, the accuracy and generalizability of the model heavily rely on the quality and quantity of the training data. One solution to this limitation could be to explore data augmentation techniques to increase the diversity of the training data. 
Secondly, even though our model has a lower computational load than some state-of-the-art models, it is still computationally complex and needs further optimization techniques. For instance, model compression techniques or hardware acceleration could be used to reduce the model's size and speed up inference. § CONCLUSIONS In this paper, we proposed a novel deep learning model, FocalGatedNet, that incorporates two key improvements to the standard Transformer architecture: Dynamic Contextual Focus Attention and Gated Linear Units. Our experiments on a large-scale dataset demonstrate that FocalGatedNet outperforms several state-of-the-art models, including Transformer, Autoformer, Informer, NLinear, DLinear, and LSTM in predicting sagittal knee angles. For longer prediction lengths, FocalGatedNet significantly enhances the accuracy of knee joint angle prediction. For instance, our model achieves an average improvement of 13.66% in MAE and 8.13% in RMSE for long-term settings (60 ms, 80 ms, 100 ms) compared to the second-best performing model, Transformer. Moreover, our model has a lower computational load than several other transformer-based models. Our findings highlight the importance of our proposed modifications for knee joint angle prediction, especially in long-term settings. The results of FocalGatedNet's research hold promise for future research on wearable exoskeletons and rehabilitation for people with neurological impairments. § ACKNOWLEDGMENT This work was supported in part by the Advanced Research and Innovation Center (ARIC), which is jointly funded by Mubadala UAE Clusters and Khalifa University of Science and Technology, and in part by Khalifa University Center for Autonomous and Robotic Systems under Award RC1-2018-KUCARS. IEEEtran
http://arxiv.org/abs/2306.03439v1
20230606063557
Multi-mode lasing in supercell plasmonic nanoparticle arrays
[ "Rebecca Heilmann", "Kristian Arjas", "Tommi K. Hakala", "Päivi Törmä" ]
physics.optics
[ "physics.optics" ]
Tidal evolution for any rheological model using a vectorial approach expressed in Hansen coefficients. Alexandre C. M. Correia Ema F. S. Valente July 31, 2023 ====================================================================================================== Multicolour light sources can be used in applications such as lighting and multiplexing signals. In photonic and plasmonic systems, one way to achieve multicolour light is via multi-mode lasing. To achieve this, plasmonic nanoparticle arrays are typically arranged in superlattices that lead to multiple dispersions of the single arrays coupled via the superlattice Bragg modes. Here, we show an alternative way to enable multi-mode lasing in plasmonic nanoparticle arrays. We design a supercell in a square lattice by leaving part of the lattice sites empty. This results in multiple dispersive branches caused by the supercell period and hence creates additional band edges that can support lasing. We experimentally demonstrate multi-mode lasing in such a supercell array. Furthermore, we identify the lasing modes by calculating the dispersion by combining the structure factor of the array design with an empty lattice approximation. We conclude that the lasing modes are the 74th Γ- and 106th X-point of the supercell. By tuning the square lattice period with respect to the gain emission we can control the modes that lase. Finally, we show that the lasing modes exhibit a combination of transverse electric and transverse magnetic mode characteristics in polarization resolved measurements. keywords: plasmonics, nanophotonics, surface plasmon resonance, multi-mode lasing § INTRODUCTION Plasmonic nanoparticle arrays support surface lattice resonances (SLRs) that are dispersive plasmonic-photonic modes arising from a hybridization between the plasmonic resonances of individual nanoparticles and the diffracted orders governed by the lattice geometry. The spectral position of the SLRs can be easily tailored by varying the lattice geometry and period while simultaneously yielding high quality (Q-) factors <cit.>. Combined with emitters such as organic dye molecules, plasmonic nanoparticle arrays are an effective system to study light-matter interaction such as strong coupling or Bose-Einstein condensation <cit.>. Lasing in plasmonic nanoparticle arrays has been studied in various lattice geometries such as square, rectangular, honeycomb or hexagonal lattices. Typically the systems produce lasing at a band edge originating at high symmetry points of the lattice, for instance at the Γ-, K- or M-points <cit.>. Also bound states in continuum which have extraordinary high Q-factors have been recently exploited for lasing in plasmonic arrays <cit.>. For lighting applications and optical communication, multicolour light sources are necessary. Ideally, such sources span red, blue and green wavelengths in order to create white light or NIR regions for optical communication <cit.>. In photonic systems, one way to achieve multicolour light sources is via multi-mode lasing, i.e. simultaneous lasing at a set of different modes. As lasing occurs in plasmonic systems at band edges, multiple band edges need to be created to realize multi-mode lasing. The most straightforward approach is by organizing individual arrays in a larger superlattice network, where the SLRs of the individual arrays couple to the Bragg modes of the superlattice, leading to multiple band edges at different energies and wavevectors <cit.>. 
Multi-mode lasing has been observed in such superlattice geometries <cit.>. Another possibility to realize multimode lasing is by dividing a square array into smaller patches which have slightly different periods. This creates additional band edges at different energies which simultaneously lase under optical pumping <cit.>. Another way of creating additional band edges for a zero wavevector, i.e. into the direction normal to the array plane, is by introducing an effective second lattice period which yields a second SLR. This can be done in bipartite arrays <cit.> or by introducing periodic vacancies to the arrays <cit.>. By removing particles at designated positions, deterministic aperiodic lattices that yield more complicated band structures and hence additional band edges have been realised and lasing in such lattices has been demonstrated <cit.>. However, multi-mode lasing has not been explicitly studied. Other systems in which multi-mode lasing has been observed include low-symmetry arrays, where two polarization dependent modes lase simultaneously <cit.>, light-cone SLRs overlapping with higher Brillouin Zone (BZ) edges enabling lasing from several high symmetry points at once <cit.>, and lasing in quasi-propagating modes that span a continuum of energies <cit.>. In addition to the aforementioned plasmonic structures, multi-mode lasing has been achieved in various photonic systems such as hyperuniform structures  <cit.>, topological insulators <cit.>, bound states in the continuum <cit.> and as simultaneous lasing in the magnetic and electric resonances of a dielectric nanoparticle array <cit.>. Here we study lasing in a plasmonic square lattice where we create a periodic supercell by removing particles at designated positions. We perform lasing experiments by combining the array with a solution of dye molecules. Under optical pumping we observe multiple lasing peaks that emerge at different energies and non-zero wavevectors. To understand the interplay between the two periods in the system (underlying square and the supercell periods) we calculate the Empty Lattice Approximation (ELA) from the geometric structure factor and free photon dispersion <cit.>. We can see additional modes enabled by the much longer supercell period which would otherwise be suppressed in the original square lattice. These new modes can be seen to exist at high-symmetry points of the Brillouin Zone (BZ) as defined by the supercell: 74th Γ- and 106th X-point if all possible modes are considered. While these modes are expected to exist in any square lattice matching the period of the supercell, by changing the positions of the particles in the unit-cell we can exhibit some control over the modes. This type of supercell and theoretical framework provides a new platform for designing multimode lasing systems. § EXPERIMENTS We study a system based on a nanoparticle lattice with a square symmetry, however, part of the sites of the square array are left empty. This creates a unit cell of 13 × 13 sites which is repeated over the whole array. An SEM image of the array is shown in Fig. <ref> a). Details of the sample fabrication and the measurement setup are given in the Supporting Information, including Fig. S1. The period of the square array is p = 596 nm and hence, the period of the supercell is q = 7748 nm. The gold nanoparticles have a diameter of 120 nm and a height of 50 nm. A transmission measurement of the array is shown in Fig. 
<ref> b), where the dispersive surface lattice resonances (SLRs) are clearly visible. The SLRs correspond to the underlying square array with the Γ-point located at k_y=0, E=1.37 eV. In transmission measurements, the finer features caused by the supercell are scarcely visible below the TM-branch. We combine the nanoparticle array with a solution of organic dye molecules (IR 140, 10 mM) by sandwiching a droplet between the sample slide and a superstrate, and pump the system optically with a femtosecond laser (1 kHz repition rate, 800 nm central wavelength). With increasing pump fluence, a set of narrow peaks emerge as shown in Figure <ref> c). The corresponding threshold curves, i.e. emission intensity versus pump fluence are shown in the plots surrounding the spectrometer data recorded at a pump fluence of 0.1329 mJ/cm^2. At the highest energy of 1.428 eV there are five peaks at k_y= -1.662 μm^-1, k_y= -0.852 μm^-1, k_y= 0 μm^-1, k_y= 0.852 μm^-1, and k_y= 1.677 μm^-1. There are four modes along the transverse electric (TE) SLRs at energies of E = 1.419 eV (k_y= -0.387 μm^-1 and k_y= 0.392 μm^-1) and E = 1.417 eV (k_y= -0.370 μm^-1 and k_y= 0.366 μm^-1). For clarity, only the threshold curves for the modes at the slightly higher energy are shown in Fig. <ref> c), the other modes are shown in the Supporting Information, Figure <ref>. At energies of E = 1.416 eV are two peaks visible at k_y= -1.281 μm^-1 and k_y= 1.279 μm^-1. Lastly, there are two lasing peaks at k_y= -2.046 μm^-1 and k_y= 2.050 μm^-1 at an energy of E = 1.400 eV. Note that only the four lasing peaks that lie on the TE modes of the SLRs coindice with modes that can be seen in Fig. <ref> b), whereas none of the modes at other lasing peaks are visible in the transmission measurement. Figure S3 in the Supporting Information shows the dye emission with the lasing mode energies indicated. All of these modes coincide with the emission maximum. Interestingly, the Γ-point of the underlying square lattice (E = 1.37 eV) is located at an energy where no lasing takes place. This suggests that the modes originating from the supercell experience more gain and/or have lower losses and are therefore more likely to lase. By changing the square lattice period, the dispersions of the arrays can be conveniently shifted with respect to the emission maximum of the dye. As a consequence, the modes that exhibit lasing can be tuned as shown in Figure S4 in the Supporting Information. § RESULTS AND DISCUSSION In Figure <ref> the real space pattern of the nanoparticle array is shown (a) along with its geometric structure factor S(𝐤)(b). The structure factor describes the scattering properties of the lattice for any given wave-vector 𝐤 and can be interpreted as a measure of constructive interference along that scattering direction. In a typical square lattice S(𝐤) has peaks of equal magnitude at reciprocal lattice sites, i.e. at the centres of the Brillouin Zone (BZ). Removing particles from the array removes some of the destructive interference in the system allowing for new scattering directions to occur. The amplitude of these new scatterings depends on the number and positions of the particles removed. If particles are removed periodically, the system becomes periodic with a supercell period and has a new, smaller BZ. However, these new BZ:s are not made equal as their properties are dependent on the way particles are removed. 
Let us denote the initial square lattice period and the supercell period as p and q respectively, with associated reciprocal lattice vectors of magnitude a and b. For a periodic structure with a multi-particle unit cell, the normalized structure factor is S(𝐤) = 1/N^2∑_ij e^i𝐤·(𝐫_i - 𝐫_j) = 1/N_u^2∑_i'j'e^i𝐤·(𝐫_i' - 𝐫_j')·1/N_α^2∑_αβe^i𝐤·(𝐪_α - 𝐪_β)_δ(𝐤 - m_1 𝐛_1 - m_2𝐛_2), where N_u, N_α are the number of particles in a unit cell and number of unit cells, i,j and i',j' the indices of particles in the lattice and in a unit cell, 𝐪_α, 𝐪_β the positions of unit-cells, 𝐛_1,𝐛_2 reciprocal lattice vectors and m_1,m_2 integers. As can be seen in Figure <ref> b), the system retains information from both periods. In addition to the original scattering peaks (yellow dots at the Γ-points of the square lattice) a new set of secondary peaks appears at the Γ-points of the supercell lattice, the strongest of which exist near the corners of the initial BZ. The new system retains information from the original period p as the S(𝐤) can be seen to repeat with a period of a = 2π/p. The first approximation for the band-structure is given by the empty-lattice approximation (ELA) which is obtained by taking the convolution of geometric structure factor S(𝐤) with the in-plane free-photon dispersion |𝐤| = √(k_x^2 + k_y^2) = nE/(ħ c) <cit.>: ∫ d𝐪 S(𝐪)δ(|𝐪 - 𝐤| - nE/(ħ c) ). This corresponds to placing light cones at the peaks of the structure factor (in case of square (super) lattices the Γ-points of each Brillouin zone) and weighing them by the value of the structure-factor. The results are shown in Figure <ref> c). As S(𝐤) indicates the amount of constructive interference, the weight of the light-cone correlates with the strength of the mode. In addition to the typical ELA-dispersions of a square lattice, additional weaker dispersions emerge from the Γ-points of the supercell. The experimentally measured dispersion overlayed with the calculated ELA is shown in Figure S5 in the Supporting Information showing a good agreement with the theoretical model and measurements. By comparing the measurements to the ELA-dispersions, we find that the lasing peaks exist slightly below crossings of two or more bands at the high-symmetry points of the new BZ as is shown in figure <ref> d). Due to the large size of the supercell, the new BZ is small and can be repeated multiple times in the measured range. The modes at equal energy are separated by a multiple of k_y = b, so when folded back to the first BZ they can be seen to correspond to the same high-symmetry points. It is unclear whether in the experiments there exist even more lasing peaks at the same energy as we are limited in k_y by the optics, i.e. the numerical aperture of the objective, (see Supporting Information Figure S1). Interestingly, some of the bands in the band-structure in Figures <ref> c) and d) have different slopes than the TE and TM modes originating from the underlying square lattice (the lines with the highest weight). These modes with different slopes cannot be categorized as purely transverse electric/magnetic (TE/TM) and the modes observed in the measurements coincide with such bands.This implies that the lasing modes are not purely TE or TM polarized. To verify the hybrid TE/TM nature of the lasing states, we experimentally studied an array with the same lattice parameters (p = 596 nm and q = 13p), however, the edge length is now 240 μm. 
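The S(𝐤) and ELA construction above can be reproduced with a few lines of NumPy, which we include for completeness before turning to the polarization-resolved measurements on the larger array. The sketch builds a 13 × 13 supercell with a hypothetical vacancy pattern (the fabricated particle positions are not reproduced here), evaluates S(𝐤) at the supercell reciprocal-lattice points, and sums the correspondingly weighted free-photon light cones; the refractive index, the vacancy pattern, and the matching tolerance are all illustrative assumptions.

import numpy as np

p, n_cell, n_ref = 596e-9, 13, 1.52            # square period (m), supercell size, assumed index
hbar_c = 1.9732705e-7                           # hbar*c in eV*m

# occupied sites inside one supercell; this vacancy pattern is purely illustrative
occupied = [(i, j) for i in range(n_cell) for j in range(n_cell)
            if not (i % 4 == 0 and j % 4 == 0)]
r = np.array(occupied, dtype=float) * p         # particle positions within the supercell

def structure_factor(k):
    """Normalized |sum_j exp(i k . r_j)|^2 / N^2 for an in-plane wave vector k = (kx, ky)."""
    phase = np.exp(1j * (r @ np.asarray(k, dtype=float)))
    return np.abs(phase.sum()) ** 2 / len(r) ** 2

def ela_weight(ky, E, m_max=40, rtol=1e-2):
    """ELA weight at (k_x = 0, k_y, E): sum of S(G) over supercell Gamma-points whose
    free-photon cone |k - G| = n E / (hbar c) passes through the probed point."""
    b = 2.0 * np.pi / (n_cell * p)              # supercell reciprocal-lattice constant
    k = np.array([0.0, ky])
    total = 0.0
    for m1 in range(-m_max, m_max + 1):
        for m2 in range(-m_max, m_max + 1):
            G = b * np.array([m1, m2])
            if np.isclose(np.linalg.norm(k - G), n_ref * E / hbar_c, rtol=rtol):
                total += structure_factor(G)
    return total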
We combined the nanoparticle array with a reservoir of dye molecules with the same concentration of 10 mM as in the previous measurements, leading to increased total amount of molecules. The larger array as well as the increased gain caused by the higher amount of molecules lead to a stronger signal. This is needed as we added a polarizer into the detection path which decreases the amount of measurable light. We collected angle-resolved spectra as well as full momentum space images as is shown in Figure <ref>. Here, a vertical orientation refers to the axis of the polarizer oriented parallel k_x = 0^∘ and a horizontal to the axis of the polarizer oriented parallel to k_y=0^∘. Changing the size of the array changes the lasing spectrum. In total, five lasing peaks are clearly observable in Figure <ref> a) with additional four peaks with less intensity. The peaks at the higher energy (E = 1.403 eV) are at wavevectors k_y = -1.680, -0.859, 0, 0.845, and 1.661 μm^-6 which correspond directly to the wavevectors of the highest energy mode (E = 1.428 eV) of the smaller array shown in Figure <ref> c), and based on the weights of the ELA we conclude that the lasing mode is now the 71st Γ-point. The peaks at the lower energy in Figure <ref> a) (E = 1.393 eV) occur at wavevectors -2.047, -0.443, 0.431, and 2.024 μm^-6, where the larger k_y directly correspond to the X-point lasing peaks in Figure <ref> c). The peaks at the smaller k_y are the X-points closest to k_y in Figure <ref> d). The shift in energy is most likely caused by the increased amount of dye molecules that leads to a shift in the refractive index. The majority of these peaks are visible in the angle-resolved spectra with polarization filters applied, albeit with varying intensity. Further, the peaks at k_y = ± 0.8 μm^-1 and E = 1.403 eV do not appear in the case where a horizontally oriented polarizer is applied (Figure <ref>) b), implying that these modes are TM polarized. The full momentum space images in Figure <ref>, bottom row, show strong features along θ_y/x=0^∘ if a horizontally/vertically oriented polarizer is applied. These images are in logarithmic scale to make weaker features more visible. And indeed, although a polarizer is applied, the peaks along θ_x/y=0^∘ can still be distinguished with a horizontal/vertical polarization filter applied. Nevertheless, these peaks are much weaker in intensity as the others and although this implies the hybrid TE/TM nature of the modes, the modes are mainly TE or TM polarized. For comparison, we simulated a superlattice-type version of our structure and found the results of such calculation to be in agreement with <cit.>, as is shown in Supporting Information S7. Since both structures have the same supercell periodicity, band crossings happen at same values of 𝐤 and E. However, due to the differences in structure factors, these crossings have different weights, and thus different bands are expected to be responsible for lasing. The strongest secondary bands in the supercell lattice exist in the vicinity of the original square lattice modes while in our case the strongest secondary bands are found closer to the M-points. In this case, the observed crossings are not the results of these stronger bands as is evident in Figure <ref> c) and d). Instead, we find the crossings to come from multiple colliding weak modes. 
While these modes exist for the superlattice structure as well, the relative weights of the modes can be estimated to be different, as is shown by the different weight of the lines in Figure <ref> and  S7. In fact, the relative strengths of these crossings can be seen to correlate with the mode brightness. § CONCLUSION We demonstrated multi-mode lasing in a plasmonic supercell lattice. We showed that by introducing a periodic supercell in a square lattice geometry, additional band edges are formed near high symmetry points of the supercell. These band edges enable lasing and we observed lasing in multiple modes. By calculating the empty lattice approximation based on the structure factor of the lattice design, we identified the lasing modes to be the 74th Γ- and 106th X-point of the supercell. Due to their higher order nature these lasing modes are not purely TE or TM polarized. By tuning the square lattice period with respect to the emission maximum of the gain medium we were able to select the lasing modes. One advantage of the supercell approach compared to the superlattice approach is the relative size of the structure. In previous superlattices in which multimode lasing has been achieved, the individual arrays of the size of 10s of μm were arranged on a centimeter square (10^-4 m) scale <cit.>. The supercell arrays presented in this work on the other hand provide multimode in arrays with a size of 115 μm x 115 μm (10^-8 m) and are therefore significantly smaller. The supercell design approach provides new possibilities to engineer band edges for multimode lasing for instance by distributing the particles within the supercell differently or by changing the supercell period. § ACKNOWLEDGEMENTS We thank Grazia Salerno for valuable discussions on the calculation of the band structure based on the structure factor. Funding: This work was supported by the Academy of Finland under Project No. 318937 (PROFI), 322002, the Academy of Finland Flagship Programme, Photonics Research and Innovation (PREIN), Project No. 320166 and 320167, and the Vilho, Yrjö and Kalle Väisälä Foundation. Part of the research was performed at the OtaNano Nanofab cleanroom (Micronova Nanofabrication Centre), supported by Aalto University. We acknowledge the computational resources provided by the Aalto Science-IT project. R.H. acknowledges financial support by the Finnish Foundation for Technology Promotion. Author contributions: R.H. initiated the project and P.T. supervised it. R.H. fabricated the samples and performed the measurements. K.A. calculated the band structures. All authors discussed the results. R.H. and T.K.H. wrote the manuscript with input from all coauthors. Competing interests: There are no competing financial interests. § SUPPORTING INFORMATION The Supporting Information is available free of charge. Experimental Methods; Threshold curves of lower TE SLR lasing modes; Dye emission; Lasing experiments with different lattice periods; Measured dispersion and calculated ELA; Close-up of the structure factor; ELA of a superlattice. unsrt
http://arxiv.org/abs/2306.02266v1
20230604053832
Decoupling Numerical Method Based on Deep Neural Network for Nonlinear Degenerate Interface Problems
[ "Chen Fan", "Zhiyue Zhang" ]
math.NA
[ "math.NA", "cs.NA" ]
Decoupling Numerical Method Based on Deep Neural Network for Nonlinear Degenerate Interface Problems Chen Fan^a, Zhiyue Zhang^a,[Corresponding author. E-mail address: [email protected].] a School of Mathematical Sciences, Jiangsu Key Laboratory for NSLSCS, Nanjing Normal University, Nanjing 210023, China ========================================================================================================================================================================================================================== Abstract    Interface problems depict many fundamental physical phenomena and widely apply in the engineering. However, it is challenging to develop efficient fully decoupled numerical methods for solving degenerate interface problems in which the coefficient of a PDE is discontinuous and greater than or equal to zero on the interface. The main motivation in this paper is to construct fully decoupled numerical methods for solving nonlinear degenerate interface problems with “double singularities". An efficient fully decoupled numerical method is proposed for nonlinear degenerate interface problems. The scheme combines deep neural network on the singular subdomain with finite difference method on the regular subdomain. The key of the new approach is to split nonlinear degenerate partial differential equation with interface into two independent boundary value problems based on deep learning. The outstanding advantages of the proposed schemes are that not only the convergence order of the degenerate interface problems on whole domain is determined by the finite difference scheme on the regular subdomain, but also can calculate 𝐕𝐄𝐑𝐘 𝐁𝐈𝐆 jump ratio(such as 10^12:1 or 1:10^12) for the interface problems including degenerate and non-degenerate cases. The expansion of the solutions does not contains any undetermined parameters in the numerical method. In this way, two independent nonlinear systems are constructed in other subdomains and can be computed in parallel. The flexibility, accuracy and efficiency of the methods are validated from various experiments in both 1D and 2D. Specially, not only our method is suitable for solving degenerate interface case, but also for non-degenerate interface case. Some application examples with complicated multi-connected and sharp edge interface examples including degenerate and nondegenerate cases are also presented. Key words   nonlinear degenerate interface problems; deep neural network; fully decoupled method; very big jump ratio; convergence order; sharp edge interface Mathematics Subject Classification   34B16, 35R05, 65M85, 65N06, 68T99 § INTRODUCTION Nonlinear degenerate interface problems can depict many fundamental physical phenomena in chemical and mechanical engineering, physics and many other applications<cit.>. For the standard interface problems, it has attracted great interests in numerical computations, such as finite element method<cit.>, finite difference method<cit.>, finite volume element method <cit.>, spectral method<cit.>, least-squares method <cit.> and references therein. There has been a great deal of rigorous mathematical theory and numerical analysis to deal with degenerate PDE<cit.>. To the best of our knowledge, degenerate interface problems have received less attention so far, a few notable approaches can be found in the literature to handle the degenerate PDE with interface<cit.>. 
As is well known, the difficulty lies in the “double singularities" for nonlinear degenerate interface problems, namely, degeneracy and interface. Generally speaking, the most expensive part work of numerical schemes on standard sharp interface problems<cit.> is how to approximate the jump conditions very well. For example, there are many methods are interesting but the technique to treat the jump conditions is quite complicated. Nevertheless, our proposed approach based on deep neural network uses different, simple and natural techniques to treat the singularities compared with the above references, and hence obtains numerical method to solve nonlinear degenerate interface problems. In fact, the challenge work of numerical simulation on nonlinear degenerate interface problems are how to design the numerical methods not only to reduce the singularities affect at degenerate points, but also are less dependent or independent of the jump conditions. Due to nonlinear degenerate interface problems possess “double singularities", it is usually required extremely fine grids such as adaptive mesh or graded mesh to reduce singularity affect. Obviously, it is impossible to use uniform grids to numerically solve nonlinear degenerate interface problems for the traditional numerical methods. The main goal of this paper is to present an efficient and fully decoupled finite difference methods with uniform grids based on deep neural network for solving nonlinear degenerate interface problems. On the other hand, the deep neural network (DNN) models have achieved great successes in artificial intelligence fields including high-dimensional problems in computer vision, natural language processing, time series analysis, pattern and speech recognition. It is worth noting that even if there is a universal approximation theoretical results about the single layer neural network, the approximation theory of DNN still remain an open question. However, this should not prevent us from trying to apply deep learning to other problems such as numerical weather forecast, petroleum engineering, turbulence flow and interface problems. There are two main techniques to solve PDEs with deep learning, the first is to parameterize the solution of PDEs by the deep neural network (DNN). One of methods is that a universal approximation based on a neural network and point collocation are used to transform the PDE into an unconstrained minimization problem. The other one is that the original problem is transformed an optimization problem with variational form based on representing the trail functions by deep neural networks. Recently, we have noticed there are some gratifying works by using mesh free methods with DNN model to solve PEDs and interface problems<cit.>. However, we will use structured mesh method with deep learning to deal with degenerate interface problems which is a challenge and is always of great interests. Although boundary conditions are absent on the singular sub-domains, which is known to be the extreme ill-posedness, it is shown that the DNN approach still has some merits in structured grids method. In addition, we use a hybrid asymptotic and augmented compact finite volume method to realize using semi-decoupling numerical method based on a uniform Cartesian mesh for solving 1D degenerate interface problem<cit.>. This inspires us to develop fully decoupled numerical method for solving the degenerate PDE with interface. 
Although there have been a great deal of nice works for interface problems<cit.>, there are quite a few fully decoupled numerical methods on the uniform grids for solving such interface problems, even to mentioned interesting degenerate interface problems. In this paper, we focus on constructing fully decoupled numerical algorithms based on deep learning for solving the degenerate interface problems. This method not only effectively reduces the influence of the degeneracy and interface but also provides an accurate solutions on a uniform Cartesian mesh. We construct two DNN structures near the interface instead of the whole domain, and find the optimal solution by minimizing the mean squared error loss that consists of the equation and the interface conditions. These two parts are linked by its normal derivative jump conditions. We use DNN to treat considered problems on singular sub-domains near the interface to get a solution, then obtain two independent decoupled boundary value sub-problems without interface on regular sub-domains. We can compute those two nonlinear systems in parallel. We find that the proposed our approach is simple, easy to implement reducing lots efforts in handling jump conditions and also its ability to use existing method for solving nonlinear sub-problmes without interface. The choice of the singular sub-domain is more natural since we use a uniform grids, and programming of the new scheme is a straightforward task due to fully decoupled algorithms. Although deep learning has shown remarkable success in various hard problems of artificial intelligence areas, limited approximatability of deep learning with uniform grids results in two general boundary value sub-problems to get satisfactory approximations of the solutions for solving such nonlinear degenerate interface problems. A loss, no bad thing or a blessing in disguise. In fact, if deep learning has the ability to strictly decoupled the degenerate interface problems at the interface into two degenerate PDEs, we probably obtain nonlinear ill-conditioned systems for the corresponding discrete sub-problems. At this moment, we have to look for other special methods to treat degenerate PDE or interface problems likewise the litratures<cit.>, and references therein. The purpose of the paper is to develop a new fully decoupled numerical method based on DNN technique that not only effectively reduces the influence of the singularities and interface, but also provides a new way to realize completely decoupled method with different ideas compared to the existing methods to treat degenerate interface problems. It does not need any extra efforts to treat the cases between degenerate interface and general interface. The proposed approach has advantages of fully decoupled two problems without interface with uniform grids. Since our fully decoupled method is independent of the interface and the jump conditions, it not only results in two independent sub-problems, but also can easily treat the cases of 𝐕𝐄𝐑𝐘 𝐁𝐈𝐆 jump ratio(such as 10^12:1 or 1:10^12). In addition, the computational costs is almost the same for homogenous jump case and non-homogeneous jump case, this numerically demonstrates fully decoupled property of our method. The methods of this paper are sufficiently robust and also can easily handle 1D case and 2D case. In particular, it is easily to handle hard problems such as sharp-edge interface problems. 
Our method applies robustly and efficiently to both general interface problems and degenerate interface problems, whereas an effective method for general interface problems is usually not suitable for such nonlinear degenerate interface problems. It is demonstrated that our method is a simple and direct way to deal with quite hard problems. It should be mentioned that the convergence order of the scheme on the entire domain for such degenerate PDEs with interface is determined by the convergence order of the sub-problems on the regular sub-domains. Numerical experiments show that the proposed approach is able to effectively approximate the solutions of such hard degenerate interface problems, and the numerical results show great improvement compared with existing methods on hard cases<cit.>. From the method of <cit.> we know that it is impossible to split degenerate or general interface problems into two independent boundary value problems; nevertheless, our algorithms are completely decoupled for degenerate interface problems thanks to the use of deep learning. Although there are a few analytical results, the reason why deep neural networks coupled with traditional numerical methods perform so well for degenerate interface problems still largely remains a mystery. This encourages us to consider the theoretical approximation analysis in the future. The rest of the paper is organized as follows. In section <ref>, we give some preliminaries about deep neural networks, followed by the treatment of the interface and the full decoupling into two sub-problems. In section <ref>, we construct the deep neural network structure and the finite difference scheme. We present numerical experiments, including some interesting models from mathematical physics, in section <ref>. Some concluding remarks are given in the final section. § DEEP NEURAL NETWORK The definition and attributes of the deep neural network (DNN), particularly its approximation property, are briefly discussed in this section <cit.>. Two ingredients are needed to define a DNN. The first is a (vector) linear map T: R^n → R^m, defined as T(x)=Ax+b, where A=(a_i,j) ∈ R^m × n, and x and b are in R^n and R^m respectively. The second is a nonlinear activation function σ: R → R. The rectified linear unit (ReLU), a commonly used activation function, is defined as ReLU(x)=max(0,x)<cit.>. The exponential linear unit (ELU) will be used as the activation function in this paper, defined as ELU(x)=max(0,x)+min(0,e^x-1); it is mainly used to avoid the problem of vanishing gradients (Fig.<ref>). The (vector) activation function σ: R^m → R^m is defined by applying the scalar activation function element-wise. With these definitions we can define a continuous function F(x) by a composition of linear transforms and activation functions, i.e., F(x)=T^k ∘σ∘ T^k-1∘σ∘ T^k-2∘…∘ T^0(x), where T^i(x)=A_ix+b_i with A_i and b_i undetermined matrices and vectors respectively, σ(x) is the element-wise activation function, and the dimensions of A_i and b_i are chosen so that (<ref>) is meaningful. All indeterminate coefficients (e.g., A_i and b_i) in (<ref>) are denoted by θ∈Θ, where θ is a high-dimensional vector and Θ is the space of θ. The DNN representation of a continuous function can then be written as F=F(x;θ). Let 𝔽={F(x;θ) |θ∈Θ} denote the set of all functions expressible by the DNN parametrized by θ∈Θ.
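As a minimal illustration of the representation F(x;θ) above, the following sketch builds such a composition of affine maps T^i and element-wise ELU activations in PyTorch. The depth, widths, and input dimension here are placeholders for illustration only, not the exact settings used in the experiments reported later.

```python
import torch.nn as nn

def build_dnn(dim_in=2, width=15, depth=6, dim_out=1):
    # F(x; theta) = T^k o sigma o T^{k-1} o ... o sigma o T^0 (x), with ELU as sigma
    layers = [nn.Linear(dim_in, width), nn.ELU()]        # T^0 followed by sigma
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ELU()]    # intermediate maps T^i and sigma
    layers.append(nn.Linear(width, dim_out))             # final affine map T^k, no activation
    return nn.Sequential(*layers)

# e.g. a network of this form can play the role of u_t^+(x; theta^+) on the band near the interface
```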
The approximation property of the DNN, which is relevant to the study of a DNN model's expressive power, has been discussed in other papers<cit.>. To accelerate the training of the neural network, we use the Adam optimizer <cit.>, a variant of the stochastic gradient descent (SGD) method, in the two-dimensional case<cit.>. § 2D DEGENERATE ELLIPTIC INTERFACE PROBLEM §.§ Problem description Consider the following nonlinear degenerate elliptic equation with an interface, -∇·(β(x) ∇ u)=f(x,u), in Ω^- ∪Ω^+, [u]=w, on Γ, [β(x) ∇ u ·n]=v, on Γ, u=g, on ∂Ω. where Ω is a bounded domain in R^2 with Lipschitz boundary ∂Ω, and the interface Γ is closed and divides Ω into two disjoint sub-domains Ω^- and Ω^+; w and v are two functions defined only along the interface Γ. The function f(x,u) depends on u and describes the nonlinearity, and may take different nonlinear forms with respect to u. β is a weakly degenerate coefficient function (the degenerate points belong to the interface); it may also have other poor properties such as ∞≥β≥ 0 (β tends to 0 on the interface). [u]=u^+(x)-u^-(x)=w and [β(x) ∇ u ·n]=β^+(x) ∇ u^+ ·n-β^-(x) ∇ u^- ·n=v denote the differences of the limiting values of u(x) from Ω^+ and Ω^- respectively. Finally, g is a given function on the boundary ∂Ω. §.§ DNN-FD method In this research, we focus on using a DNN to develop fully decoupled numerical methods for solving degenerate interface problems. First, we divide the domain Ω into a uniform Cartesian mesh; we use the DNN to solve the examined problems on the singular sub-domains near the interface, and then extract two decoupled boundary value sub-problems on regular sub-domains with no interface. These two nonlinear systems can be computed in parallel by the finite difference method, (I) {[ -∇·(β^-(x) ∇ u^-)=f^-(x,u^-), x∈Ω_1,; u^-=u^-_t(x;θ^-), x∈Γ^-. ]. (II) {[ -∇·(β^+(x) ∇ u^+)=f^+(x,u^+), x∈Ω_2,; u^+=u^+_t(x;θ^+), x∈Γ^+,; u^+=g, x∈∂Ω. ]. where f^±, β^± and u^± are the respective functions in Ω^±; Ω_1 and Ω_2 are the regular domains shown in Fig.<ref>, and u^±_t(x;θ^±) are the outputs of the deep neural network described in the next section. The proposed method has the advantage of totally decoupling the original problem while using uniform grids. Because our fully decoupled technique is independent of the interface and jump conditions, it not only yields two nondegenerate sub-problems, but can also easily handle interface problems with large jump ratios. The method can easily handle both the 1D and 2D cases, and it is very simple to deal with difficulties such as sharp-edged interfaces. While an effective approach for general interface problems is not necessarily suitable for such nonlinear degenerate interface problems, our method can be applied robustly and efficiently to both general and degenerate interface problems. §.§.§ Deep Neural Network Structure In recent years, deep neural networks have shown strong abilities in various fields<cit.>, mainly reflected in their nonlinear fitting ability, high-dimensional data processing, fault tolerance, and feature extraction. Here, we apply a DNN to the element mesh near the interface to treat the nonlinearity, the degeneracy, and the interface singularity of the original problem. We apply the DNN on the banded degenerate domain composed of the element grid near the interface in Fig.<ref>, and construct the DNN structure on this domain, instead of the whole domain, to approximate the solution u.
The reason is that we want to resolve the singularity on the interface through the characteristics of the DNN, and to avoid the influence of the regular domains on the accuracy of the DNN; on the regular domains the accuracy can then be improved by better numerical methods. The problem is naturally separated into two nonsingular sub-problems<cit.>, u(x) ≈ u_t(x;θ)= u^-_t(x;θ^-), if x∈Ω^-∖Ω_1, u^+_t(x;θ^+), if x∈Ω^+∖Ω_2. u_t^+(x ; θ^+)=(|x-x_0|+1)ĝ(x_0)+|x-x_0|û_t^+(x ; θ^+). where θ=(θ^-;θ^+) ∈Θ, and x_0 is a point on the exact interface, i.e., the zero level set of the level set function, ϕ(x_0)=0. ĝ is an extension of g near the interface and |·| is the Euclidean distance. û_t^+ is obtained from the deep neural network. The construction of u_t^+ above aims to ensure the uniqueness of the solution. Similarly, depending on the shape of the interface, u^-_t(x;θ^-) is constructed correspondingly. If the first jump condition across the interface is homogeneous, a single function u_t(x;θ) can be used to approximate the solution u. The structure of the DNN with four hidden layers is given in Fig.<ref>. Next we select the sampling points, which are of two types: interior points {x_k}_k=1^M_1, {x_k}_k=1^M_2, chosen randomly in the degenerate domains, and the nodes {x_k}_k=1^M_3 on the element grids. To define the discrete loss function, all sampling points {x_k}_k=1^M_1, {x_k}_k=1^M_2, {x_k}_k=1^M_3 need to satisfy the first condition in (<ref>), L_1(θ):=1/M_1+M_3/2∑_k=1^M_1+M_3/2|- ∇·β^- ∇ u^-_t(x_k;θ)- f^-(x_k)|^2,x∈Ω^-∖Ω_1, L_2(θ):=1/M_2+M_3/2∑_k=1^M_2+M_3/2|- ∇·β^+ ∇ u^+_t(x_k;θ)- f^+(x_k)|^2,x∈Ω^+∖Ω_2. The nodes {x_k}_k=1^M_3 also need to satisfy the jump conditions across the interface, L_3(θ):=2/M_3∑_k=1^M_3| u^+_t(x_i^+_k,j^+_k; θ)- u^-_t(x_i^-_k,j^-_k; θ)-w|^2, L_4(θ):=2/M_3∑_k=1^M_3| β^+ ∇ u^+_t(x_i^+_k,j^+_k; θ)·n- β^- ∇ u^-_t(x_i^-_k,j^-_k; θ)·n-v|^2. This structure is designed to resolve the singularity and geometric irregularity on the interface. If we sampled points directly on the interface, the separated sub-problems would also be degenerate. In particular, there are two possible cases for the nodes. The first case is that the intersection of the interface and the grid is not a grid node, as shown in Fig.<ref>, such as α_1; we then use nodes close to the intersection in the horizontal or vertical direction, | u^+_t(x_i_1,j_1; θ)- u^-_t(x_i_1+1,j_1; θ)-w|^2, | β^+ ∇ u^+_t(x_i_1,j_1; θ)·n- β^- ∇ u^-_t(x_i_1+1,j_1; θ)·n-v|^2. The second case is that the interface intersects the grid exactly at a node, such as α_2. We then deal with it through the four nodes around it, | u^+_t(x_i_2,j_2; θ)- u^-_t(x_i_2+2,j_2; θ)-w|^2+ | u^+_t(x_i_2+1,j_2-1; θ)- u^-_t(x_i_2+1,j_2+1; θ)-w|^2, | β^+ ∇ u^+_t(x_i_2,j_2; θ)·n- β^- ∇ u^-_t(x_i_2+2,j_2; θ)·n-v|^2+ | β^+ ∇ u^+_t(x_i_2+1,j_2-1; θ)·n- β^- ∇ u^-_t(x_i_2+1,j_2+1; θ)·n-v|^2. Now we are ready to define the total discrete loss function as follows: L(θ):= w_1L_1(θ)+ w_2L_2(θ)+ w_3L_3(θ)+ w_4L_4(θ), where w_i, i=1,2,3,4, are weights, which are used to handle problems with large jump ratios so that each discrete loss term is of the same order of magnitude. After we obtain the approximation of the gradient with respect to θ_k, we can update each component of θ as θ_k^n+1=θ_k^n-η∂ L(θ)/∂θ_k|_θ=θ^n, where θ_k is any component of θ and η is the learning rate. For the sake of simplicity, η is taken as 10^-4 unless otherwise specified.
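To make the training procedure concrete, the following sketch assembles the weighted loss L(θ)=w_1L_1(θ)+w_2L_2(θ)+w_3L_3(θ)+w_4L_4(θ) just defined, using automatic differentiation for the PDE residuals and the conormal-flux jump. All names here (the two networks, the coefficient and source handles beta_m, beta_p, f_m, f_p, the jump data w_jump, v_jump, the sampling tensors, and the weights) are placeholders for illustration under the stated assumptions, not the exact configuration of the experiments; the collocation tensors are assumed to have requires_grad=True.

```python
import torch

def grads(u, x):
    # per-point gradient of the scalar field u(x); x must have requires_grad=True
    return torch.autograd.grad(u.sum(), x, create_graph=True)[0]

def pde_residual(net, beta, f, x):
    # residual of -div(beta grad u) - f(x, u) at interior collocation points x of shape (N, 2)
    u = net(x)
    flux = beta(x) * grads(u, x)
    div = sum(grads(flux[:, d:d + 1], x)[:, d:d + 1] for d in range(x.shape[1]))
    return -div - f(x, u)

def total_loss(net_m, net_p, beta_m, beta_p, f_m, f_p,
               x_in_m, x_in_p, x_itf_m, x_itf_p, normal, w_jump, v_jump,
               weights=(1.0, 1.0, 1.0, 1.0)):
    w1, w2, w3, w4 = weights
    L1 = (pde_residual(net_m, beta_m, f_m, x_in_m) ** 2).mean()      # equation on the Omega^- side
    L2 = (pde_residual(net_p, beta_p, f_p, x_in_p) ** 2).mean()      # equation on the Omega^+ side
    u_m, u_p = net_m(x_itf_m), net_p(x_itf_p)
    L3 = ((u_p - u_m - w_jump) ** 2).mean()                          # jump of u across Gamma
    flux_m = (beta_m(x_itf_m) * grads(u_m, x_itf_m) * normal).sum(dim=1, keepdim=True)
    flux_p = (beta_p(x_itf_p) * grads(u_p, x_itf_p) * normal).sum(dim=1, keepdim=True)
    L4 = ((flux_p - flux_m - v_jump) ** 2).mean()                    # jump of beta grad u . n
    return w1 * L1 + w2 * L2 + w3 * L3 + w4 * L4

# One training step, using the learning rate mentioned above (data and networks are placeholders):
# opt = torch.optim.Adam(list(net_m.parameters()) + list(net_p.parameters()), lr=1e-4)
# loss = total_loss(...); opt.zero_grad(); loss.backward(); opt.step()
```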
§.§.§ Finite Difference Scheme On the regular domain, we can use better numerical methods to improve the accuracy of the whole regions. Here we use the finite difference method<cit.>. Take one of these areas as an example, (II) {[ -∇·(β^+(x) ∇ u^+)=f^+(x,u^+), x∈Ω_2,; u^+=u^+_t(x;θ^+), x∈Γ^+,; u^+=g, x∈∂Ω. ]. Suppose that the function u^+ has the following nodes (x_1_i,x_2_j) on the domain Ω=[a, b] ×[c, d], where a=x_1_0<x_1_1<x_1_2<⋯<x_1_i<⋯<x_1_N-1<x_1_N=b, c=x_2_0<x_2_1<x_2_2<⋯<x_2_j<⋯<x_2_M-1<x_2_M=d. The steps are h_1 and h_2 respectively, and x_1_i=x_1_0+i h_1 (i=0,1, ⋯, N), x_2_j=x_2_0+j h_2 (j=0,1, ⋯, M). By Taylor formula, numerical calculation usually uses the following first-order central difference quotient and second-order central difference quotient to approximate the first-order partial derivative and second-order partial derivative of the function u^+ at the node (x_1_i,x_2_j) respectively, δ_x_1 u^+_i j=u^+_i+1/2, j-u^+_i-1/2, j/h_1,  δ_x_2 u^+_i j=u^+_i, j+1/2-u^+_i, j-1/2/h_2. δ_x_1^2 u^+_i j=u^+_i+1, j-2 u^+_i j+u^+_i-1, j/h_1^2, δ_x_2^2 u^+_i j=u^+_i, j+1-2 u^+_i j+u^+_i, j-1/h_2^2. where x_1_i±1/2=x_1_i±h_1/2, x_2_j±1/2=x_2_j±h_2/2, u^+_i j is the approximate value of the function u^+ at the node. For the equation (II), the difference quotient is used to approximate the partial derivative at the nodes, and the following difference equations can be obtained on the domain Ω_2: δ_x_1(β^+_i jδ_x_1 u^+_i j)+δ_ x_2(β^+_i jδ_x_2 u^+_i j)=f^+_i j, where f^+_i j=f^+(x_1_i,x_2_j,u^+_i j). By substituting (<ref>) and (<ref>) into (<ref>), we can get 1/h_1^2(β^+_i+1/2, j u^+_i+1, j-(β^+_i+1/2, j+β^+_i-1/2, j) u^+_i j+β^+_i-1/2, j u^+_i-1, j)+ 1/h_2^2(β^+_i, j+1/2 u^+_i, j+1-(β^+_i, j+1/2+β^+_i, j-1/2) u^+_i j+β^+_i, j-1/2 u^+_i, j-1)=f^+_i j. where β^+_i j=β^+ (x_1_i, x_2_j), β^+_i ± 1/2, j=β^+ (x_1_i ± 1/2, x_2_j), β^+_i,j ± 1/2=β^+ (x_1_i, x_2_j ± 1/2), i=1, ⋯, N-1, j=1, ⋯, M-1. After discretizing the boundary value conditions, we can get u^+_i j=u^+_t(x_1_i, x_2_j;θ^+),  (x_1_i, x_2_j) ∈{x_k}_k=1^M_3. u^+_0 j=g_0 j, u^+_N j=g_N j, u^+_i 0=g_i 0, u^+_i M=g_i M, i=0, ⋯, N, j=0, ⋯, M. where g_i j=g(x_1_i,x_2_j). Finally, the following iterative method is used to solve (<ref>), set an initial value u^+_i j^(0)(i=1, ⋯, N-1, j=1, ⋯, M-1) and construct the sequence u^+_i j^(m)(i=1, ⋯, N-1, j=1, ⋯, M-1, m=0,1, ⋯) according to the following formula: 1/h_1^2(β^+_i+1/2, j u^+(m)_i+1, j-(β^+_i+1/2, j+β^+_i-1/2, j) u^+_i j^(m)+β^+_i-1/2, j u^+(m)_i-1, j)+ 1/h_2^2(β^+_i, j+1/2 u^+(m)_i, j+1-(β^+_i, j+1/2+β^+_i, j-1/2) u^+_i j^(m)+β^+_i, j-1/2 u^+(m)_i, j-1)=f^+_i j^(m). § NUMERICAL EXAMPLES In this section, we present some numerical results to illustrate the expected convergence rates for different configurations. The convergence order of the approximate solutions, as measured by the errors, is denoted by order =log _2(u_2 h-u_L^2 /u_h-u_L^2), where u_h is the numerical solution with space step size h and u is the analytical solution. §.§ 1D degenerate interface with homogeneous jump conditions Example 4.1. The degenerate differential equation with the homogeneous interface condition will be solved in Ω^-=(0,1), Ω^+=(1,2), and the interface point α=1. The boundary condition and the source function are chosen so that the exact solution is<cit.> u(x)={[ 1/τ^-(-exp (1-x)^1 / 2+1),x ∈Ω^-,; 1/τ^+(exp (x-1)^1 / 2-1), x ∈Ω^+. ]. The coefficient β is β={[ τ^-(1-x)^1 / 2, x ∈Ω^-,; τ^+(x-1)^1 / 2, x ∈Ω^+. ]. Hence, the interface jump conditions, [u]=w=0, [β u_x]=v=0. 
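Before reporting the results, we illustrate how a regular-domain sub-problem of such a 1D example can be discretized once the trained DNN trace at the edge of the singular band is available. The sketch below performs Gauss-Seidel-type sweeps of the conservative three-point scheme for -(β u')'=f(x,u), the 1D analogue of the five-point scheme described above; the endpoints, iteration count, and tolerance are placeholders, and u_left stands for the value supplied by the network at the boundary of the regular sub-domain.

```python
import numpy as np

def solve_regular_subproblem(beta, f, u_left, u_right, a, b, N=160, sweeps=20000, tol=1e-12):
    """Sketch: solve -(beta u')' = f(x, u) on a regular sub-domain [a, b], where u_left is the
    DNN trace at the edge of the singular band and u_right comes from the outer boundary data."""
    x = np.linspace(a, b, N + 1)
    h = x[1] - x[0]
    u = np.linspace(u_left, u_right, N + 1)        # simple initial guess
    for _ in range(sweeps):
        u_old = u.copy()
        for i in range(1, N):
            bp = beta(x[i] + 0.5 * h)              # beta_{i+1/2}
            bm = beta(x[i] - 0.5 * h)              # beta_{i-1/2}
            u[i] = (bp * u[i + 1] + bm * u[i - 1] + h * h * f(x[i], u[i])) / (bp + bm)
        if np.max(np.abs(u - u_old)) < tol:
            break
    return x, u
```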
We test the current method on this classical interface problem with homogeneous jump conditions. The network uses 4 intermediate layers. The width of each layer is 6 and the number of sampling points is 202, including 200 interior points and two grid nodes. The numerical results of the current method for the very big jump ratios (τ^- / τ^+=10^12 / 1 and τ^- / τ^+=1 / 10^12) are shown in Table <ref> and Table <ref> respectively. It can be seen clearly that the convergence order of the numerical solution reaches second order in the L^2 norm. Fig.<ref> shows the comparison between the exact solution and the numerical solution for the very big jump ratios when N=160. In Fig.<ref>, we present the decay of the loss function during the training process; eventually the error between the DNN solution and the exact solution reduces to about O(10^-4) near the interface. Many other well-known methods usually report numerical results with jump ratios (τ^- / τ^+=10^3 / 1 and τ^- / τ^+=1 / 10^3) for one-dimensional or two-dimensional interface problems<cit.>, whereas the method used in this paper can compute the jump ratios (τ^- / τ^+=10^12 / 1 and τ^- / τ^+=1 / 10^12). The time required by the deep neural network to approximate the function is approximately 1263 seconds when N=160. §.§ 1D degenerate interface with nonhomogeneous jump conditions Example 4.2. In this example, the computational domain and the interface (a point) are the same as in the previous example. The source function f(x, u) is chosen such that the exact solution is as follows<cit.>: u(x)={[ u^-(x)=exp((1-x)^2 / 3), x ∈Ω^-,; u^+(x)=exp((x-1)^1 / 2)+5,x ∈Ω^+. ]. The coefficient β is β={[ β^-=τ^-(1-x)^1 / 3, x ∈Ω^- ,; β^+=τ^+(x-1)^1 / 2, x ∈Ω^+. ]. The experiment satisfies the following jump conditions, [u]=w=5, [β u_x]=v=1/2τ^++2/3τ^-. This is an experiment with nonhomogeneous jump conditions, which places higher and stricter requirements on the numerical algorithm. First, we present the convergence orders for the large jump ratios (τ^- / τ^+=10^12 / 1 and τ^- / τ^+=1 / 10^12) in Table <ref> and Table <ref> respectively. It can be seen that the convergence orders for the case of nonhomogeneous jump conditions are second order. Fig.<ref>a shows the comparison between the exact solution and the numerical solution for the large jump ratio when N=80. In Fig.<ref>b, we plot the decay of the L^2 norm error between the DNN solution and the exact solution during the training process with the large jump ratio (τ^- / τ^+=10^12 / 1) when N=80 (case 2). Second, to compare with the methods in the literature<cit.>, we also calculate the results of this experiment with the jump ratio (τ^- / τ^+=10^7 / 1). In Fig.<ref>b, we plot the decay of the loss functions during the training process with jump ratios (τ^- / τ^+=10^7 / 1 and τ^- / τ^+=10^12 / 1) when N=80. It can be seen that dealing with a smaller jump ratio is simpler and more efficient. Finally, this example shows that the two methods can handle both homogeneous and nonhomogeneous degenerate problems in one dimension, and the coefficients may be constant, variable, or have singular properties. The advantage of the DNN-FD method is that the jump ratio of the coefficients it can handle is larger than that of the method in <cit.>. The method can also be extended to two-dimensional degenerate interfaces with large jump ratios in the next section.
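The convergence orders reported in these tables follow the definition given at the beginning of this section. A minimal sketch of the computation, assuming the numerical solutions on grids of step sizes h and 2h and the exact solution are available as arrays, is:

```python
import numpy as np

def l2_error(u_num, u_exact, h):
    # discrete L^2 error on a uniform grid with spacing h
    return np.sqrt(h * np.sum((u_num - u_exact) ** 2))

def observed_order(err_2h, err_h):
    # order = log2( ||u_{2h} - u||_{L^2} / ||u_h - u||_{L^2} )
    return np.log2(err_2h / err_h)
```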
This example takes approximately 1298 seconds when N=160, showing that for the current method there is no essential difference whether the jump conditions are homogeneous or not. §.§ 2D degenerate interface with nonhomogeneous jump conditions Example 4.3. In this example, we consider the interface problem with nonhomogeneous jump conditions. The exact solution is<cit.> u(x)={[ u^-(x)=x_1^2+x_2^2+2,x∈Ω^-,; u^+(x)=1-x_1^2-x_2^2,x∈Ω^+. ]. The coefficient β is β={[ β^-=τ^-(-cos(x_1^2+x_2^2-(0.5)^2)+1),x∈Ω^-;; β^+=τ^+(3-x_1x_2),x∈Ω^+. ]. where Ω^-={x : |x|<0.5}, Ω^+=Ω\Ω^-, Ω=[-1,1] ×[-1,1], and r=√(x_1^2+x_2^2). The exact interface is the zero level set of the following level set function, ϕ(x)=x_1^2+x_2^2-(0.5)^2. We reconstruct the example from the literature<cit.> so that it degenerates near the interface. It is a two-dimensional degenerate elliptic equation with nonhomogeneous jump conditions. The network uses 6 intermediate layers. The width of each layer is 15 and the number of interior sampling points is 2000. When running the SGD method, we generate a new batch every 10 update steps. The numerical results of the present method for the large jump ratios (τ^- / τ^+=10^10 / 1 and τ^- / τ^+=1 / 10^10) are shown in Table <ref> and Table <ref> respectively. It can be seen that the convergence orders for the case of nonhomogeneous jump conditions are second order. Fig.<ref> shows the comparison between the exact solution and the numerical solution for the large jump ratio (τ^- / τ^+=1 / 10^10) when N=160. In Fig.<ref>, we plot the decay of the loss functions during the training process with the large jump ratios (τ^- / τ^+=10^10 / 1 and τ^- / τ^+=1 / 10^10) when N=160. The two-dimensional case is more difficult than the one-dimensional case and requires more sampling points, but there is no essential difference in the method. The error between the DNN solution and the exact solution is also reduced to approximately O(10^-4) near the interface. This example shows that the method can be effectively extended to two-dimensional or even higher-dimensional degenerate interface problems, and can also effectively handle coefficients with large jump ratios. §.§ 2D nondegenerate interface with homogeneous jump conditions Example 4.4. In this example, we consider a nondegenerate interface problem with high-contrast diffusion coefficients and homogeneous jump conditions. The exact solution is<cit.> u(x)={[ u^-(x)=r^3/β^-, x∈Ω^-,; u^+(x)=r^3/β^++(1/β^--1/β^+) (0.5)^3, x∈Ω^+. ]. where Ω^-={x : |x|<0.5}, Ω^+=Ω\Ω^-, Ω=[-1,1] ×[-1,1], and r=√(x_1^2+x_2^2). The exact interface is the zero level set of the following level set function, ϕ(x)=x_1^2+x_2^2-(0.5)^2. The method used in this paper can compute not only degenerate problems, but also nondegenerate problems. The numerical results of the present method for the large jump ratios (β^- / β^+=10^10 / 1 and β^- / β^+=1 / 10^10) are shown in Table <ref> and Table <ref> respectively. It can be seen easily that the numerical solution has second-order convergence in the L^2 norm. Fig.<ref> and Fig.<ref> show the comparison between the exact solution and the numerical solution for the large jump ratios (τ^- / τ^+=10^10 / 1) and (τ^- / τ^+=1 / 10^10) when N=160 respectively. Due to the application of numerical methods on the regular domains, the accuracy of this method is higher than that in <cit.>, and because of the fully decoupled format, it can handle problems with higher-contrast coefficients and larger jump ratios. §.§ 2D nondegenerate flower shape interface Example 4.5.
In this example, we consider the flower shape interface problem. The exact solution is<cit.> u(x)={[ u^-(x)=7 x_1^2+7 x_2^2+6,x∈Ω^-,; u^+(x)=5-5 x_1^2-5 x_2^2,x∈Ω^+. ]. The coefficient β is β={[ β^-=(x_1^2-x_2^2+3) / 7,x∈Ω^-,; β^+=(x_1 x_2+2) / 5,x∈Ω^+. ]. The exact interface is the zero level set of the following level set function, ϕ=(x_1-0.02 √(5))^2+(x_2-0.02 √(5))^2-(0.5+0.2 sin (5 θ))^2, with {[ x(θ)=0.02 √(5)+(0.5+0.2 sin (5 θ)) cos (θ),; y(θ)=0.02 √(5)+(0.5+0.2 sin (5 θ)) sin (θ), ] θ∈[0,2 π).. The peculiarity of this example is that the problem has a complex smooth interface. It is designed to examine the performance of the DNN-FD method in dealing with geometric irregularities. Our method also has advantages in dealing with complex interface problems; it becomes simple and efficient by applying a deep neural network near the interface. We present a grid refinement analysis in Table <ref> that successfully reaches second order. Fig.<ref> shows the sampling points used by our method. It can be seen from the figure that more sampling points are placed near the parts of the curve with large curvature. Similarly, when dealing with the singularity and non-smoothness of the interface, we place more sampling points. We select the points piecewise, according to the different degeneracies, large jump ratios, and other conditions, so as to capture the properties of the interface well. Fig.<ref> shows the comparison between the exact solution and the numerical solution when N=160. §.§ 2D nondegenerate happy-face interface Example 4.6. In this example, we consider the following more general self-adjoint elliptic interface problem, -∇·(β(x) ∇ u(x))+σ(x) u(x)=f(x), in Ω. The example features a happy-face interface, and the coefficients β^± are symmetric positive definite matrices. The exact solution is<cit.> u(x)={[ u^-(x)=7 x_1^2+7 x_2^2+1,x∈Ω^-,; u^+(x)=5-5 x_1^2-5 x_2^2,x∈Ω^+. ]. The coefficient β is β^+(x)=([ x_1 x_2+2 x_1 x_2+1; x_1 x_2+1 x_1 x_2+3 ]), β^-(x)=([ x_1^2-x_2^2+3 x_1^2-x_2^2+1; x_1^2-x_2^2+1 x_1^2-x_2^2+4 ]). The exact interface can be found in the literature<cit.>. The other coefficient σ is σ(x)={[ σ^-(x)=x_1 x^2+1, x∈Ω^-,; σ^+(x)=x_1^2+x_2^2-2,x∈Ω^+. ]. The difficulty of the example is that the interface has kinks around the ears and mouth. We present the convergence results in Table <ref>. Numerical results indicate that the DNN-FD solution always converges to the exact solution with second-order accuracy. The exact solution and the numerical solution are compared in Fig.<ref> when N=160. §.§ 2D nondegenerate sharp-edged interface Example 4.7. In this example, we consider a nonsmooth interface problem. The exact solution is<cit.> u(x)={[ u^-(x)=7 x_1^2+7 x_2^2+6,; u^+(x)=x_1+x_2+1, if x_1+x_2>0 , sin(x_1+x_2)+cos(x_1+x_2), if x_1+x_2≤ 0. ]. The coefficient β is β={[ β^-=(x_1^2-x_2^2+3) / 7,x∈Ω^-,; β^+=8,x∈Ω^+. ]. The exact interface is the zero level set of the following level set function, φ(x)= x_2-2 x_1, if x_1+x_2>0, x_2+x_1 / 2, if x_1+x_2≤ 0. The method used in this paper can also be applied to nonsmooth interface problems; the numerical results of the current method are given in Table <ref>. In Table <ref>, we present a grid refinement analysis that successfully achieves second order. In other words, the proposed method is not sensitive to the grid for either the solution or the interface. In Table <ref>, we also calculate the logarithmic ratios of the L^∞ errors.
Although the scheme is a second-order one and spends a great deal of expensive effort on the interface, it is hard to obtain satisfactory results in <cit.> because of the nonsmooth property of the interface. Moreover, the solution u has a singularity at (0, 0) with blow-up derivatives. Our method achieves approximately second-order convergence, and the numerical results are much better than those of the IFVE method. Fig.<ref> shows the comparison between the exact solution and the numerical solution when N=320. §.§ 2D nondegenerate five-pointed star interface Example 4.8. In this example, we consider the five-pointed star interface problem. The exact solution is<cit.> u(x)={[ u^-(x)=8,x∈Ω^-,; u^+(x)=x_1^2+x_2^2+sin (x_1+x_2),x∈Ω^+. ]. The coefficient β is β={[ β^-=1,x∈Ω^-,; β^+=2+sin (x_1+x_2),x∈Ω^+. ]. The exact interface is the zero level set of the following level set function, ϕ(r, θ)={[ R sin(θ_t / 2)/sin(θ_t / 2+θ-θ_r-2 π(i-1) / 5)-r, θ_r+π(2 i-2)/5⩽θ<θ_r+π(2 i-1)/5,; R sin(θ_t / 2)/sin(θ_t / 2-θ+θ_r-2 π(i-1) / 5)-r, θ_r+π(2 i-3)/5⩽θ<θ_r+π(2 i-2)/5 . ]. with θ_t=π / 5, θ_r=π / 7, R=6 / 7 and i=1,2,3,4,5 . This example presents a more difficult challenge in that the interface consists of several sharp-edged, nonsmooth pieces. Our method can also be applied, after special processing, to more complex nonsmooth interfaces such as the five-pointed star interface. The numerical results of the current method are given in Table <ref>. It can be seen that even if the non-smoothness of the interface changes, our method always maintains second-order accuracy. The exact solution and the numerical solution are compared in Fig.<ref> when N=320. §.§ 2D degenerate five-pointed star interface Example 4.9. In this example, we consider the degenerate five-pointed star interface problem. The exact solution is<cit.> u(x)={[ u^-(x)=6+sin (2 π x_1) sin (2 π x_2),x∈Ω^-,; u^+(x)=x_1^2+x_2^2+sin (x_1+x_2),x∈Ω^+. ]. The coefficient β is β={[ β^-=(x_1-6/7)^2+(x_2-6/7)^2,x∈Ω^-,; β^+=(x_1-6sin(π/10)/7sin(π/3))^2+(x_2-6sin(π/10)/7sin(π/3))^2,x∈Ω^+. ]. The exact interface is the same as in the previous example. As in the previous examples, we reconstruct the example from the original literature<cit.>. We now challenge one that combines degenerate and nonsmooth interface problems, where the degenerate points are the two vertices of the five-pointed star in the positive and negative domains, respectively. Furthermore, because the solution of the problem is nonlinear, the difficulty of this example increases once again. The choice of the activation function is also changed, and the selected nonlinear activation function offers a good approximation to the solution of the problem. The numerical results of the current method are shown in Table <ref>. The experimental results have second-order accuracy in the L^2 norm. Fig.<ref> shows the comparison between the exact solution and the numerical solution when N=320. §.§ 2D degenerate interface with large jump conditions Example 4.10. This example is based on the addition of a large jump ratio to Example <ref>. The boundary condition and the source function are chosen so that the exact solution is<cit.> u(x)={[ 7 x_1^2+7 x_2^2+6,x ∈Ω^-,; x_1^2+x_2^2+sin (x_1+x_2), x ∈Ω^+. ]. The coefficient β is β={[ β^-=τ^-((x_1-6/7)^2+(x_2-6/7)^2),x∈Ω^-,; β^+=τ^+((x_1-6sin(π/10)/7sin(π/3))^2+(x_2-6sin(π/10)/7sin(π/3))^2),x∈Ω^+. ].
The exact interface is the zero level set of the following level set function, ϕ(r, θ)={[ R sin(θ_t / 2)/sin(θ_t / 2+θ-θ_r-2 π(i-1) / 5)-r, θ_r+π(2 i-2)/5⩽θ<θ_r+π(2 i-1)/5,; R sin(θ_t / 2)/sin(θ_t / 2-θ+θ_r-2 π(i-1) / 5)-r, θ_r+π(2 i-3)/5⩽θ<θ_r+π(2 i-2)/5 . ]. with θ_t=π / 5, θ_r=π / 7, R=6 / 7 and i=1,2,3,4,5 . Our method can also be applied to the five-pointed star interface with large jump ratios. The numerical results of the current method are given in Table <ref> and Table <ref>. It can be seen that even if the non-smoothness of the interface changes, our method always maintains second-order accuracy. The numerical solution is shown in Fig.<ref> when N=320. §.§ 2D interface problem with non-analytical solution Example 4.11. In this example, we consider the five-pointed star interface problem with a non-analytical solution, which is constructed from Example <ref>. The coefficient β is β={[ β^-=1,x∈Ω^-,; β^+=2+sin (x_1+x_2),x∈Ω^+. ]. The exact interface is the zero level set of the following level set function, ϕ(r, θ)={[ R sin(θ_t / 2)/sin(θ_t / 2+θ-θ_r-2 π(i-1) / 5)-r, θ_r+π(2 i-2)/5⩽θ<θ_r+π(2 i-1)/5,; R sin(θ_t / 2)/sin(θ_t / 2-θ+θ_r-2 π(i-1) / 5)-r, θ_r+π(2 i-3)/5⩽θ<θ_r+π(2 i-2)/5 . ]. with θ_t=π / 5, θ_r=π / 7, R=6 / 7 and i=1,2,3,4,5 . We change the right-hand side to f^-(x)=|x-x_0|(1+2 log |x-x_0|), where ϕ(x_0)=0. This example presents a more difficult challenge in that the interface consists of several sharp-edged, nonsmooth pieces and the problem has no analytical solution. Our method can also be applied to this example. The numerical results of the current method are given in Table <ref>, where f_h is the right-hand side computed from the numerical solution u_h. Due to the lack of an analytical solution to the equation, we use the L^2 errors and convergence orders of the equation residual as a reference for stability during the computation. This value is stable around a constant, confirming the feasibility of the method. The numerical solution is shown in Fig.<ref> when N=320. §.§ 2D Linear elasticity interface problem Example 4.12. Finally, we consider an example with physical significance, a linear elasticity PDE with a discontinuous stress tensor, as follows, -∇·𝕋=f(x,u), in Ω^- ∪Ω^+, [u]=w, on Γ, [𝕋·n]=v, on Γ, u=g, on ∂Ω. One application of the linear elasticity problem is to model the shape and location of fibroblast cells under stress. Let 𝐮=(u_1, u_2)^T denote the displacement field. Then the strain tensor is σ=1/2(∇𝐮+(∇𝐮)^T), and the elasticity tensor 𝕋 is a linear transformation on tensors. In the isotropic case, we have 𝕋σ=λTr(σ) 1+2μ(σ+σ^T). where λ and μ are the Lamé constants, Tr(.) is the trace operator, and 1 is the identity matrix. In this case, the above parameters satisfy the following relationships μ=E/2(1+ν), λ=E ν/((1+ν)(1-2 ν)), where E is the Young's modulus, ν is the Poisson's ratio, and μ is the shear modulus. The interface is defined in the polar coordinate r=0.5+sin 5 θ/7. We set the computational domain Ω =[-1,1]×[-1,1]. The Dirichlet boundary condition and homogeneous jump conditions are prescribed in this example. Then we choose two groups of the Poisson's ratio and the shear modulus as follows<cit.> ν= ν^-=0.24, in Ω^-; ν^+=0.20, in Ω^+., μ= μ^-=2000000, in Ω^-; μ^+=1500000, in Ω^+. and ν= ν^-=0.24, in Ω^-; ν^+=0.00024, in Ω^+ ., μ= μ^-=2000000, in Ω^-; μ^+=1500000, in Ω^+. The network uses 6 intermediate layers.
The width of each layer is 20 and the learning rate η is 5 × 10^-4. In Fig.<ref>, we plot the profiles of the DNN-FD solution, which are the displacements in the x_1 and x_2 coordinates, respectively. The corresponding numerical results are shown in Table <ref> and Table <ref>. We find that the DNN-FD solutions have second-order accuracy in the L^2 norm. § CONCLUSIONS Numerical methods for solving nonlinear degenerate interface problems are a fundamental issue in scientific computing, and it is challenging to design effective and robust fully decoupled numerical methods for such degenerate interface problems. In this paper, a fully decoupled finite difference method based on a deep neural network is proposed for solving degenerate interface problems in both 1D and 2D. It is shown that uniform grids can be adopted to solve degenerate PDEs with interfaces. There are no unknown augmented parameters in the discrete schemes, and no extra conditions or work are required to design the numerical approximation algorithms. In fact, the augmented variables are obtained by the DNN technique, and the degenerate interface problem is completely decoupled into two independent sub-problems, in contrast to the case of other degenerate or singular problems. The accuracy of the proposed fully decoupled algorithms has been demonstrated by solving various examples, including degenerate and nondegenerate cases. In particular, the fully decoupled property of the algorithm makes the method capable of easily handling the jump ratio, from the BIG jump case of the semi-decoupled method (such as 10^7:1 or 1:10^7) to the VERY BIG jump case of the fully decoupled method (such as 10^12:1 or 1:10^12). An interesting and typical sharp-edged example with a degenerate five-pointed star interface shows that our approach works very well for such very hard problems. Numerical examples confirm the effectiveness of the fully decoupled algorithms for solving degenerate interface problems. § ACKNOWLEDGMENTS This work is partially supported by the National Natural Science Foundation of China (grant No. 11971241).   adams2002immersed L. Adams and Z. Li. The immersed interface/multigrid methods for interface problems. SIAM Journal on Scientific Computing, 24(2):463–479, 2002. albright2017high J. Albright, Y. Epshteyn, M. Medvinsky, and Q. Xia. High-order numerical schemes based on difference potentials for 2d elliptic problems with material interfaces. Applied Numerical Mathematics, 111:64–91, 2017. arbogast1996nonlinear T. Arbogast and M. F. Wheeler. A nonlinear mixed finite element method for a degenerate parabolic equation arising in flow in porous media. SIAM Journal on Numerical Analysis, 33(4):1669–1687, 1996. baharlouei2023dnn S. Baharlouei, R. Mokhtari, and F. Mostajeran. Dnn-hdg: A deep learning hybridized discontinuous galerkin method for solving some elliptic problems. Engineering Analysis with Boundary Elements, 151:656–669, 2023. bao2017numerical W. Bao, Y. Cai, X. Jia, and Q. Tang. Numerical methods and comparison for the dirac equation in the nonrelativistic limit regime. Journal of Scientific Computing, 71(3):1094–1134, 2017. beale2007accuracy J. Beale and A. Layton. On the accuracy of finite difference methods for elliptic problems with interfaces. Communications in Applied Mathematics and Computational Science, 1(1):91–119, 2007. beale2019solution J. Beale and W. Ying. Solution of the dirichlet problem by a finite difference analog of the boundary integral equation. Numerische Mathematik, 141(3):605–626, 2019. bedrossian2010second J.
Bedrossian, J. H. von Brecht, S. Zhu, E. Sifakis, and J. M. Teran. A second order virtual node method for elliptic problems with interfaces and irregular domains. Journal of Computational Physics, 229(18):6405–6426, 2010. bernis1990higher F. Bernis and A. Friedman. Higher order nonlinear degenerate parabolic equations. Journal of Differential Equations, 83(1):179–206, 1990. wang2015matched B.Wang, K.-L.Xia, and G.-W.Wei. Matched interface and boundary method for elasticity interface problems. Journal of computational and applied mathematics, 285:203–225, 2015. cai2017discontinuous Z. Cai, C. He, and S. Zhang. Discontinuous finite element methods for interface problems: robust a priori and a posteriori error estimates. SIAM Journal on Numerical Analysis, 55(1):400–418, 2017. cao2017superconvergence W. Cao, X. Zhang, Z. Zhang, and Q. Zou. Superconvergence of immersed finite volume methods for one-dimensional interface problems. Journal of Scientific Computing, 73(2):543–565, 2017. Chen2018Enriched S. Chen and J. Shen. Enriched spectral methods and applications to problems with weakly singular solutions. Journal of Scientific Computing, 77(3):1468–1489, 2018. Chen1998Finite Z. Chen and J. Zou. Finite element methods and their convergence for elliptic and parabolic interface problems. Numerische Mathematik, 79(2):175–202, 1998. collobert2008unified R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine learning, pages 160–167, 2008. del2017numerical M. J. Del Razo and R. J. LeVeque. Numerical methods for interface coupling of compressible and almost incompressible media. SIAM Journal on Scientific Computing, 39(3):B486–B507, 2017. du2012analysis Q. Du, M. Gunzburger, R. B. Lehoucq, and K. Zhou. Analysis and approximation of nonlocal diffusion problems with volume constraints. SIAM Review, 54(4):667–696, 2012. ewing1999immersed R. E. Ewing, Z. Li, T. Lin, and Y. Lin. The immersed finite volume element methods for the elliptic interface problems. Mathematics and Computers in Simulation, 50(1-4):63–76, 1999. gunzburger2018stokes M. Gunzburger, X. He, and B. Li. On stokes–ritz projection and multistep backward differentiation schemes in decoupling the stokes–darcy model. SIAM Journal on Numerical Analysis, 56(1):397–427, 2018. ben2001jacobi B.-Y. Guo and L.-L. Wang. Jacobi interpolation approximations and their applications to singular differential equations. Advances in Computational Mathematics, 14(3):227–276, 2001. guo2020recovering R. Guo, T. Lin, and Y. Lin. Recovering elastic inclusions by shape optimization methods with immersed finite elements. Journal of Computational Physics, 404:109123, 2020. han2017deep J. Han, A. Jentzen, et al. Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Communications in Mathematics and Statistics, 5(4):349–380, 2017. handa2016gvnn A. Handa, M. Bloesch, V. Pătrăucean, S. Stent, J. McCormac, and A. Davison. gvnn: Neural network library for geometric computer vision. In European Conference on Computer Vision, pages 67–82. Springer, 2016. he2022mesh C.-Y. He, X.-Z. Hu, and L. Mu. A mesh-free method using piecewise deep neural network for elliptic interface problems. Journal of Computational and Applied Mathematics, 412:114358, 2022. he2018relu J. He, L. Li, J. Xu, and C. Zheng. Relu deep neural networks and linear finite elements. 
arXiv preprint arXiv:1807.03973, 2018. he2010interior X. He, T. Lin, and Y. Lin. Interior penalty bilinear ife discontinuous galerkin methods for elliptic equations with discontinuous coefficient. Journal of Systems Science and Complexity, 23(3):467–483, 2010. hou2005numerical S. Hou and X.-D. Liu. A numerical method for solving variable coefficient elliptic equation with interfaces. Journal of Computational Physics, 202(2):411–445, 2005. hou2010numerical S. Hou, W. Wang, and L. Wang. Numerical method for solving matrix coefficient elliptic equation with sharp-edged interfaces. Journal of Computational Physics, 229(19):7162–7179, 2010. hu2022discontinuity W.-F. Hu, T.-S. Lin, and M.-C. Lai. A discontinuity capturing shallow neural network for elliptic interface problems. Journal of Computational Physics, 469:111576, 2022. huang2017unfitted P. Huang, H. Wu, and Y. Xiao. An unfitted interface penalty finite element method for elliptic interface problems. Computer Methods in Applied Mechanics and Engineering, 323:439–460, 2017. ji2022immersed H.-F. Ji, F.Wang, J.-R. Chen, and Z.-L. Li. An immersed cr-p0 element for stokes interface problems and the optimal convergence analysis. Computer Methods in Applied Mechanics and Engineering, 399:115306, 2022. jiang2012phase W. Jiang, W. Bao, C. V. Thompson, and D. J. Srolovitz. Phase field approach for simulating solid-state dewetting problems. Acta Materialia, 60(15):5578–5592, 2012. kingma2014adam D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. lagaris1998artificial I. E. Lagaris, A. Likas, and D. I. Fotiadis. Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks, 9(5):987–1000, 1998. lecun2015deep Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015. leveque1994immersed R. J. LeVeque and Z. Li. The immersed interface method for elliptic equations with discontinuous coefficients and singular sources. SIAM Journal on Numerical Analysis, 31(4):1019–1044, 1994. li2003new Z. Li, T. Lin, and X. Wu. New cartesian grid methods for interface problems using the finite element formulation. Numerische Mathematik, 96(1):61–98, 2003. lusch2018deep B. Lusch, J. N. Kutz, and S. L. Brunton. Deep learning for universal linear embeddings of nonlinear dynamics. Nature Communications, 9(1):1–10, 2018. pao1989adaptive Y. Pao. Adaptive pattern recognition and neural networks. Reading, MA (US); Addison-Wesley Publishing Co., Inc., 1989. ren2000iterative W. Ren and X.-P. Wang. An iterative grid redistribution method for singular problems in multiple dimensions. Journal of Computational Physics, 159(2):246–273, 2000. robbins1951stochastic H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951. shen2011spectral J. Shen, T. Tang, and L.-L. Wang. Spectral methods: algorithms, analysis and applications, volume 41. Springer Science & Business Media, 2011. shen2016 J. Shen and Y. Wang. Muntz Galerkin methods and applications to mixed dirichlet–neumann boundary value problems. SIAM Journal on Scientific Computing, 38(4):A2357–A2381, 2016. sun2014adaptive H. Sun and D. L. Darmofal. An adaptive simplex cut-cell method for high-order discontinuous galerkin discretizations of elliptic interface problems and conjugate heat transfer problems. Journal of Computational Physics, 278:445–468, 2014. wang2013approximate C. Wang and R. Du. 
Approximate controllability of a class of semilinear degenerate systems with convection term. Journal of Differential Equations, 254(9):3665–3689, 2013. wang2014carleman C. Wang and R. Du. Carleman estimates and null controllability for a class of degenerate parabolic equations with convection terms. SIAM Journal on Control and Optimization, 52(3):1457–1480, 2014. wang2021bilinear Q. Wang, J. Xie, Z. Zhang, and L. Wang. Bilinear immersed finite volume element method for solving matrix coefficient elliptic interface problems with non-homogeneous jump conditions. Computers & Mathematics with Applications, 86:1–15, 2021. wang2021new Q. Wang, Z. Zhang, and L. Wang. New immersed finite volume element method for elliptic interface problems with non-homogeneous jump conditions. Journal of Computational Physics, 427:110075, 2021. wang2020mesh Z. Wang and Z. Zhang. A mesh-free method for interface problems using the deep learning approach. Journal of Computational Physics, 400:108963, 2020. wu2019finite D. Wu, J. Yue, G. Yuan, and J. Lv. Finite volume element approximation for nonlinear diffusion problems with degenerate diffusion coefficients. Applied Numerical Mathematics, 140:23–47, 2019. xia2014mib K. Xia, M. Zhan, and G.-W. Wei. Mib galerkin method for elliptic interface problems. Journal of Computational and Applied Mathematics, 272:195–220, 2014. xu2021fourth M. Xu, L. Zhang, and E. Tohidi. A fourth-order least-squares based reproducing kernel method for one-dimensional elliptic interface problems. Applied Numerical Mathematics, 162:124–136, 2021. yarotsky2017error D. Yarotsky. Error bounds for approximations with deep relu networks. Neural Networks, 94:103–114, 2017. yu2018deep B. Yu et al. The deep ritz method: a deep learning-based numerical algorithm for solving variational problems. Communications in Mathematics and Statistics, 6(1):1–12, 2018. zhang2020minimal Z. Zhang, P. Rosakis, T. Y. Hou, and G. Ravichandran. A minimal mechanosensing model predicts keratocyte evolution on flexible substrates. Journal of the Royal Society Interface, 17(166):20200175, 2020. zhao2017efficient M. Zhao, W. Ying, J. Lowengrub, and S. Li. An efficient adaptive rescaling scheme for computing moving interface problems. Communications in Computational Physics, 21(3):679–691, 2017. zhao2010high S. Zhao. High order matched interface and boundary methods for the helmholtz equation in media with arbitrarily curved interfaces. Journal of Computational Physics, 229(9):3155–3170, 2010. zhao2021semi T. Zhao, K. Ito, and Z. Zhang. Semi-decoupling hybrid asymptotic and augmented finite volume method for nonlinear singular interface problems. Journal of Computational and Applied Mathematics, 396:113606, 2021. zhou2006high S. Zhou, Yongchengand Zhao, M. Feig, and G.-W. Wei. High order matched interface and boundary method for elliptic equations with discontinuous coefficients and singular sources. Journal of Computational Physics, 213(1):1–30, 2006. zhu2019fast H. Zhu and C. Xu. A fast high order method for the time-fractional diffusion equation. SIAM Journal on Numerical Analysis, 57(6):2829–2849, 2019. zhu2015immersed L. Zhu, Z. Zhang, and Z. Li. An immersed finite volume element method for 2d pdes with discontinuous coefficients and non-homogeneous jump conditions. Computers & Mathematics with Applications, 70(2):89–103, 2015.
http://arxiv.org/abs/2306.09126v1
20230615133714
STARSS23: An Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events
[ "Kazuki Shimada", "Archontis Politis", "Parthasaarathy Sudarsanam", "Daniel Krause", "Kengo Uchida", "Sharath Adavanne", "Aapo Hakala", "Yuichiro Koyama", "Naoya Takahashi", "Shusuke Takahashi", "Tuomas Virtanen", "Yuki Mitsufuji" ]
cs.SD
[ "cs.SD", "cs.CV", "cs.MM", "eess.AS", "eess.IV" ]
New Dimensions of Galactic Chemical Evolution David H. Weinberg^1 July 31, 2023 ============================================= While direction of arrival (DOA) of sound events is generally estimated from multichannel audio data recorded in a microphone array, sound events usually derive from visually perceptible source objects, e.g., sounds of footsteps come from the feet of a walker. This paper proposes an audio-visual sound event localization and detection (SELD) task, which uses multichannel audio and video information to estimate the temporal activation and DOA of target sound events. Audio-visual SELD systems can detect and localize sound events using signals from a microphone array and audio-visual correspondence. We also introduce an audio-visual dataset, Sony-TAu Realistic Spatial Soundscapes 2023 (STARSS23), which consists of multichannel audio data recorded with a microphone array, video data, and spatiotemporal annotation of sound events. Sound scenes in STARSS23 are recorded with instructions, which guide recording participants to ensure adequate activity and occurrences of sound events. STARSS23 also serves human-annotated temporal activation labels and human-confirmed DOA labels, which are based on tracking results of a motion capture system. Our benchmark results show that the audio-visual SELD system achieves lower localization error than the audio-only system. The data is available at <https://zenodo.org/record/7880637>. § INTRODUCTION Given multichannel audio input from a microphone array, a sound event localization and detection (SELD) system <cit.> outputs a temporal activation track for each of the target sound classes along with one or more corresponding spatial trajectories, e.g., the direction of arrival (DOA) around the microphone array, when the track indicates activity. Such a spatiotemporal characterization of sound scenes can be used in a wide range of machine cognition tasks, such as inference on the type of environment, tracking of specific types of sound sources, acoustic monitoring, scene visualization systems, and smart-home applications. Recently neural network (NN)-based SELD systems <cit.> show high localization and detection performance. These systems need data with activity and DOA labels of target sound events for training and evaluation. Because annotation of DOA labels is challenging in real sound scene recordings, most SELD datasets <cit.> consist of synthetic audio data, which are made by convolution of monaural sound event signals and multichannel impulse response signals with DOA labels. The Sony-TAu Realistic Spatial Soundscapes 2022 dataset (STARSS22) <cit.> tackles real sound scene recordings with DOA labels, which is based on tracking results of a motion capture (mocap) system. Currently, STARSS22 is the only SELD dataset with real sound scenes, including overlapping sound events, moving source events, and natural distribution of temporal activation and DOA, e.g., sounds of footsteps are relatively short and heard from a lower elevation. While this dataset is suitable for evaluating audio-only SELD systems in natural sound scenes, the dataset does not include other modality input, e.g., video data. Sound events in real sound scenes originate from their sound source objects, e.g., speech comes from a person's mouth, sounds of footsteps are produced from the feet of a walker, and a knocking sound originates from a door. Such sound source object information is usually apparent in the visual modality. 
Video data aligned with audio recordings in SELD tasks have the potential to mitigate difficulties and ambiguities of the spatiotemporal characterization of the sound scene as audio-visual data improves source separation <cit.> and speech recognition <cit.>. Visible people in the video data can provide candidate positions of human body-related sounds. When a person walks in the video, tapping sounds are easily recognized as footsteps. In this context, we propose audio-visual SELD task, which uses audio and video data to estimate spatiotemporal characterization of sound events. The left side of Figure <ref> shows an audio-visual SELD system, which takes multichannel audio recordings and video aligned with the audio recordings and outputs activity and DOA of target sound events in each frame. To tackle audio-visual SELD tasks, we need an audio-visual dataset consisting of multichannel audio, video, and activity and DOA labels of sound events per frame. There is another interest in audio-visual sound source localization task <cit.>, which takes monaural audio and images and estimates where the audio is coming from in the images. They focus on learning audio-visual semantic correspondence, not estimating physical DOA around a microphone array. While the audio-visual sound source localization datasets <cit.> are adequate to train NNs with audio-visual correspondence and evaluate localization performance in images, they are typically monophonic without spatial labels for the target sound events. Several datasets serve multichannel audio and video data with real sound scenes <cit.>, whereas only a few audio-visual datasets have multichannel audio data with DOA labels of speakers around a microphone array <cit.>. While these datasets help to evaluate audio-visual speaker DOA estimation (DOAE) tasks, the evaluation focuses only on speech, not sound events such as musical instruments or footsteps. To tackle audio-visual SELD tasks, we introduce an audio-visual dataset, the Sony-TAu Realistic Spatial Soundscapes 2023 (STARSS23), consisting of multichannel audio, video, and spatiotemporal annotations of sound events, i.e., activity and DOA labels per each frame. The right part of Figure <ref> shows a still in a frame of a video in STARSS23, made from 360^∘ video, spatial acoustic power map generated from a microphone array, and sound event labels. The dataset contains over 7-hour recordings with 57 participants in 16 rooms with the spatiotemporal annotation as a development set. The participants are guided by generic instructions and suggested activities in the recording to induce adequate occurrences of the sound events and diversity of content. There are 13 classes of target sound events, such as speech, musical instruments, and footsteps. To reveal whether the dataset has a natural distribution of sound events, we analyze the dataset in terms of frame coverage, overlap, and DOA distributions per each sound event class. We develop and test an audio-visual SELD system with STARSS23. To investigate the effects of audio-visual input, we present overall localization and detection performance and per-class results. § RELATED WORK Sound event localization and detection SELDnet <cit.> is the first SELD method, which uses convolutional recurrent neural network (CRNN) to output activity and DOA separately. An activity-coupled Cartesian DOA (ACCDOA) <cit.> vector, which embeds sound event activity information to the length of a Cartesian DOA vector, enables us to solve SELD tasks with a single output. 
While the two methods tackle overlaps from different classes, they cannot solve overlaps from the same class. Multi-ACCDOA <cit.> is an extension of ACCDOA, which allows models to output overlaps from the same class. To handle that case effectively, Multi-ACCDOA incorporates auxiliary duplicating permutation invariant training (ADPIT) <cit.>. There are other SELD works about framework <cit.>, network architecture <cit.>, audio feature <cit.>, data augmentation <cit.>. To train or evaluate SELD methods, we need multichannel audio data with temporal activation and DOA labels. In synthetic multichannel audio datasets <cit.>, DOA labels can be easily annotated because the data are made from multichannel impulse response signals with DOA labels. While the SECL-UMons and AVECL-UMons datasets <cit.> tackled spatial recording with DOA labels, it is limited to isolated single event recordings or combinations of two simultaneous events, ignoring spatiotemporal information linking events in a natural scene. STARSS22 <cit.> tackled real spatial recording with temporal activation and DOA labels of each target class in natural scenes. Participants improvise natural scenes with a mocap system, whose tracking results are used for DOA labels. However, the dataset does not release video data. Therefore, it cannot be used to evaluate audio-visual SELD systems. Audio-visual sound source localization There is broad interest in audio-visual sound source localization tasks <cit.>. Chen et al. have tackled unsupervised learning to localize sound sources in video and evaluated the method on the VGG-SS dataset <cit.>, which annotates bounding boxes of sound sources for sound and video pairs. The AVSBench dataset <cit.> serves pixel-level audio-visual segmentation maps for videos over 23 class categories. Because the datasets do not have multichannel audio recordings, they cannot be applied to evaluating SELD tasks. Audio-visual dataset with multichannel audio Several audio-visual datasets include multichannel audio data <cit.>. As many datasets are used for self-supervised learning <cit.> or non-localization tasks <cit.>, there are no DOA labels. The YouTube-360 dataset <cit.> serves first-order ambisonics (FOA) signal and 360^∘ video data without any labels for self-supervised learning. A few audio-visual datasets are collected for audio-visual DOAE tasks <cit.>. Qian et al. proposed an audio-visual DOAE system, which takes spectrograms and phase features from the audio input and face-bounding boxes from video input to estimate the DOA of each speaker <cit.>. The system was evaluated with the Audio-Visual Robotic Interface (AVRI) dataset, recorded using Kinect and a four-channel Respeaker array, along with activity and DOA labels. The audio-visual features are helpful for DOAE, and the dataset supports the evaluation of audio-visual speaker DOAE. However, the dataset is only for speech, not various sound events such as clapping and knocks. We summarize the comparison of STARSS23 with other real sound scene datasets in table <ref>. § STARSS23 DATASET §.§ Overview STARSS23 contains multichannel audio and video recordings of sound scenes in various rooms and environments, together with temporal and spatial annotations of sound events belonging to a set of target classes. The dataset enables us to train and evaluate audio-visual SELD systems, which localize and detect sound events from multichannel audio and visual information. 
STARSS23 is available in a public research data repository[<https://zenodo.org/record/7880637>] under the MIT license. There is also a demo video[<https://www.youtube.com/watch?v=ZtL-8wBYPow>]. The contents are recorded with short instructions, guiding participants in improvising sound scenes. The recordings contain a total of 13 target sound event classes. The multichannel audio data are delivered as two 4-channel spatial formats: FOA and tetrahedral microphone array (MIC). The video data are blurred 1920×960 equirectangular video data recorded by a 360^∘ camera. The annotations of STARSS23 consist of temporal activation, DOA, and source distance of the target classes. STARSS23 is split into a development set and an evaluation set. The development set totals about 7 hours and 22 minutes, of which 168 clips were recorded with 57 participants in 16 rooms. The development set is further split into a training part (dev-set-train, 90 clips) and a testing part (dev-set-test, 78 clips) to support the development process. In the evaluation set, no publicly available annotations exist because the evaluation set is prepared for a competition, which is described in Appendix <ref>. STARSS23 improves a multichannel audio dataset, i.e., STARSS22 <cit.>. One of the critical differences is releasing video data aligned with multichannel audio data. While we maintain all the sessions of STARSS22, we add about 2.5 hours of material to the development set. STARSS23 also serves source distance labels of sound events as additional annotations. We follow the recording procedure in STARSS22, where video data are used only to check labels internally. Adding descriptions about video data and distance annotation, we show the data construction part as an audio-visual dataset. §.§ Data construction As shown in Figure <ref>, STARSS23 is constructed in three steps: sound scene recording, data conversion, and annotation. We explain each step as follows. Sound scene recording STARSS23 was created in Tampere, Finland, and Tokyo, Japan. Recordings at both sites shared the same process, organized in sessions corresponding to different rooms, sound-making props, and participants. In each session, various clips were recorded with combinations of that session's participants acting simple scenes and interacting among themselves and with the sound props. The scenes were based on generic instructions on the desired sound events. The instructions were a rough guide to ensure adequate event activity and occurrences of the target sound classes in a clip. The left photo of Figure <ref> shows that participants improvise following the instructions. A set of 13 target sound event classes are selected to be annotated, based on the sound events captured adequately in the recorded scenes. The class labels are chosen to conform to the AudioSet ontology <cit.>. They are: female speech, male speech, clapping, telephone, laughter, domestic sounds, footsteps, door, music, musical instrument, water tap, bell, knock. Music, e.g., background music or pop music, is played by a loudspeaker in the room. On the other hand, musical instruments are played by participants, including acoustic guitar, piano, and others. Domestic sounds consist of vacuum cleaners, mechanical fans, and boiling, which have strongly-directional and loud sounds. They can be distinguishable from natural background noise in sound scenes. The scenes also contain directional interference sounds such as computer keyboard or shuffling cards that are not labeled. 
As shown in the left photos of Figure <ref>, each scene was captured with audio-visual sensors, i.e., a high-resolution 32-channel spherical microphone array (Eigenmike em32[<https://mhacoustics.com/products#eigenmike1>]) set at a height of 1.5 m, and a 360^∘ camera (Ricoh Theta V[<https://theta360.com/en/about/theta/v.html>]) mounted 10 cm above the microphone array. For each recording session, a suitable position of the Eigenmike and Ricoh Theta V was determined to cover the scene from a central place. We also captured the scenes with two additional sensors for annotation: a mocap system of infrared cameras surrounding the scene, tracking reflective markers mounted on the participants and sound sources of interest (Optitrack Flex 13[<https://optitrack.com/cameras/flex-13/>]), and wireless microphones mounted on the participants and sound sources, providing close-mic recordings of the main sound events (Røde Wireless Go II[<https://rode.com/en/microphones/wireless/wirelessgoii>]). The origin of the mocap system was set at ground level, at the exact position of the Eigenmike, while the mocap cameras were positioned at the corners of the room. Recording started on all devices before the beginning of a scene and stopped right after it. A clapper sound initiated the acting and served as a reference signal for synchronization between the different types of recordings, including the mocap system, which can record a monophonic audio side signal for synchronization. All types of recordings were manually synchronized based on the clapper sound and subsequently cropped and stored at the end of each recording session. The details of sound scene recording, e.g., generic instructions, sound events, and sensors, are summarized in Appendix <ref>.

Data conversion The original 32-channel recordings are converted to two 4-channel spatial formats: FOA and MIC. Conversion of the Eigenmike recordings to FOA following the SN3D normalization scheme (or ambiX) was performed with measurement-based filters <cit.>. Regarding the MIC format, channels 6, 10, 26, and 22 of the Eigenmike were selected, corresponding to a nearly tetrahedral arrangement. Analytical expressions of the directional responses of each format can be found in <cit.>. Finally, the converted recordings were downsampled to 24 kHz. The raw 360^∘ video data were converted to an equirectangular format with 3840×1920 resolution at 29.97 frames per second, which is convenient to handle as planar video data. In accordance with the participants' consent, the visible faces in all recordings were blurred. Finally, the face-blurred video was converted to a 1920×960 resolution.

Annotation Spatiotemporal annotations of the sound events were conducted manually by the authors and research assistants. As shown in the lower right part of Figure <ref>, there are four steps: a) annotate the subset of the target classes that were active in each scene, b) annotate the temporal activity of such class instances, c) annotate the position of each such instance when active, and d) confirm the annotations. Class annotations (a) were observed and logged during each scene recording. Activity labels (b) were manually annotated by listening to the wireless microphone recordings. Because each wireless microphone would capture prominent sounds produced by the participant or source it was assigned to, onsets, offsets, source, and class information of each event could be conveniently extracted.
In scenes or instances where associating an event with a source was ambiguous purely by listening, annotators would consult the video recordings to establish the correct association. The temporal annotation resolution was set to 100 msec. After onset, offset, and class information of events was established for each source and participant in the scene, the positional annotations (c) were extracted for each such event by attaching tracking results to the temporal activity window of the event. Positional information was logged in Cartesian coordinates with respect to the mocap system's origin. The event positions were converted to spherical coordinates, i.e., azimuth, elevation, and distance, which are more convenient for SELD tasks. Then, the class, temporal, and spatial annotations were combined and converted to the text format used in the previous dataset <cit.>. The details of the annotation are summarized in Appendix <ref>. Confirmation of the annotations (d) was performed by listening to the Eigenmike recording while watching a synthetic video, namely the equirectangular video overlaid with the event activities, visualized as labeled markers positioned at their respective azimuth and elevation on the video plane. If a clip does not pass the confirmation, it is annotated again.

§.§ Data analysis

Having a natural distribution of sound events is beneficial for evaluating audio-visual SELD systems. We analyze the frame coverage, polyphony, and DOA per sound event class on dev-set-train of STARSS23. Table <ref> shows the frame coverage and the maximum, mean, and distribution of polyphony, both globally and for each class separately. The classes covering the most frames are female and male speech, music, and domestic sounds, which are also frequent in our daily lives. The musical instrument and laughter classes show high mean same-class polyphony, reflecting natural situations such as jam sessions and conversations. Figure <ref> shows the distribution of DOAs over azimuth and elevation. For female speech in Figure <ref>, the elevation distribution has a strong peak around -10 degrees, while the azimuth appears uniformly distributed. Compared to the speech classes, footsteps appear at elevations lower than -10 degrees. See Appendix <ref> for further data analysis, e.g., duration.

§ BENCHMARK

In this section, we examine an audio-visual SELD task with STARSS23. For evaluation, we use dev-set-train for training and hold out dev-set-test for validation.

§.§ Audio-visual SELD system

To build an audio-visual SELD system, we start with an audio-only system based on SELDnet <cit.> and multi-ACCDOA <cit.>, which is widely used in audio-only SELD tasks <cit.>. To extend the audio-only system to audio-visual, we merge visual and audio information in the middle of the network, following the audio-visual speaker DOAE work <cit.>. First, we summarize the audio-only SELD system. Audio features, e.g., amplitude spectrograms, are extracted from the multichannel audio data. Convolution layers embed the audio features; a gated recurrent unit (GRU) layer and a fully connected layer (FC) then decode the audio embedding into the multi-ACCDOA output. As shown at the bottom of Figure <ref>, each class in the multi-ACCDOA output is represented by a three-dimensional vector with Cartesian coordinates x, y, and z. The length of the vector indicates activity, and the direction of the vector indicates the DOA around the microphone array.
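To make this output representation concrete, the following minimal sketch decodes one ACCDOA-style track into per-class activities and DOAs; the multi-ACCDOA case simply repeats this over its N tracks. The sketch assumes the common convention of x pointing to the front, y to the left, and z upwards, and the function name and the 0.3 activity threshold are illustrative rather than taken verbatim from the released implementation.

```python
import numpy as np

def decode_accdoa(vectors, threshold=0.3):
    """Decode one ACCDOA-style track into active classes and their DOAs.

    vectors: array of shape (num_classes, 3) holding the Cartesian
             (x, y, z) vector predicted for each class.
    Returns (class_index, azimuth_deg, elevation_deg) tuples for the
    classes whose vector length exceeds the activity threshold.
    """
    detections = []
    for cls, (x, y, z) in enumerate(vectors):
        length = np.sqrt(x * x + y * y + z * z)        # vector length = activity
        if length > threshold:                         # class considered active
            azimuth = np.degrees(np.arctan2(y, x))     # vector direction = DOA
            elevation = np.degrees(np.arctan2(z, np.hypot(x, y)))
            detections.append((cls, azimuth, elevation))
    return detections

# Toy example with 13 classes: class 1 active towards azimuth 90 deg (left).
output = np.zeros((13, 3))
output[1] = [0.0, 0.9, 0.0]
print(decode_accdoa(output))   # class 1 at azimuth 90 deg, elevation 0 deg
```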
To train the SELD system, we use the mean squared error (MSE) between the estimated and target multi-ACCDOA outputs under the ADPIT scheme <cit.>. At inference time, a class is considered active when the length of its vector exceeds a threshold.

Next, we extend the audio-only system to handle audio and visual input. We concatenate the audio and visual embeddings in the middle of the network. As visual input, we use the video frame corresponding to the start of the audio feature sequence. From this image, an object detection module, e.g., YOLOX <cit.>, outputs bounding boxes of potential objects of target-related classes, e.g., the person class. As shown in the right part of Figure <ref>, each bounding box is encoded into two vectors along the image's horizontal and vertical axes, based on Gaussian distributions <cit.>. The Gaussian center coincides with the bounding box center, and the standard deviation is proportional to the box width and height. These per-box vectors are combined into two vectors along azimuth and elevation. The encoded visual vectors are embedded by FCs, and the resulting visual embedding is concatenated with the audio embedding from the convolution layers. The concatenated embedding is fed into the decoder to output the multi-ACCDOA.

§.§ Evaluation metric

We used four joint localization and detection metrics <cit.> with extensions from a previous study <cit.>, which support multi-instance scoring of the same class. The first two metrics are referred to as location-aware detection: the error rate (ER_20^∘) and the F-score (F_20^∘) in one-second non-overlapping segments. A prediction is considered a true positive if its class matches the reference class and the angular difference is below 20^∘. F_20^∘ is calculated from location-aware precision and recall, whereas ER_20^∘ is the sum of insertion, deletion, and substitution errors, divided by the total number of references. The other two metrics are referred to as class-aware localization: the localization error (LE_CD) in degrees and the localization recall (LR_CD) in one-second non-overlapping segments, where the subscript stands for classification-dependent. Unlike location-aware detection, these metrics apply no angular threshold; instead, they measure the difference between predictions and references of the correct class. LE_CD expresses the average angular difference between predictions and references of the same class. LR_CD expresses how many of the class instances were detected with such localization estimates out of the total number of class instances, i.e., a true positive rate. We use macro-averaging for F_20^∘, LE_CD, and LR_CD: these metrics are first computed for each class and then averaged to obtain the final system performance. Macro-averaging is not applied to ER_20^∘ because it includes substitution errors between classes.

§.§ Experimental setting

As audio features, multichannel amplitude spectrograms and inter-channel phase differences (IPDs) are used <cit.>. Input features are segmented to a fixed length of 1.27 sec. To reduce the computational cost of video, we use 360×180 videos converted from the released 1920×960 videos. As visual input, we extract the video frame corresponding to the start of the audio features. We use a pretrained YOLOX object detection model[<https://github.com/open-mmlab/mmdetection/blob/master/configs/yolox/yolox_tiny_8x8_300e_coco.py>] to get bounding boxes of the person class. Other classes, e.g., cell phone and sink, are not stably detected in our preliminary experiments with STARSS23 videos.
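For illustration, obtaining these person boxes could look as follows. The sketch assumes the MMDetection 2.x inference API (init_detector / inference_detector); the checkpoint path and the score threshold are placeholders and not necessarily the settings of the released code.

```python
from mmdet.apis import init_detector, inference_detector

CONFIG = "configs/yolox/yolox_tiny_8x8_300e_coco.py"   # MMDetection config referenced above
CHECKPOINT = "yolox_tiny_coco.pth"                     # hypothetical local checkpoint path
PERSON = 0                                             # COCO class index of "person"

model = init_detector(CONFIG, CHECKPOINT, device="cpu")

def person_boxes(frame, score_thr=0.3):
    """Return person bounding boxes [x1, y1, x2, y2] with scores above a
    threshold for one video frame (an HxWx3 BGR image)."""
    result = inference_detector(model, frame)          # list of per-class arrays
    boxes = result[PERSON]                             # shape (n, 5): x1, y1, x2, y2, score
    return [box[:4] for box in boxes if box[4] >= score_thr]
```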
The bounding box results are encoded into two vectors along azimuth and elevation as in Sec. <ref>. The vector size is 37 (= 36 + 1) to cover 360 degrees of azimuth at a resolution of 10 degrees. To obtain the audio embedding, we stack three convolutional layers with kernel size 3×3. We embed the encoded visual vectors with two FCs. The concatenated embeddings are processed with a bidirectional GRU layer with a hidden state size of 256. The number of tracks in the multi-ACCDOA format was fixed at N = 3 maximum simultaneous sources. The threshold for activity was 0.3 to binarize predictions during inference. Details on the experimental setting are in Appendix <ref>.

We compare the audio-visual SELD system with an audio-only system based on the same data split and implementation; the only difference is the presence or absence of video input. The experiments are conducted for the two formats, FOA and MIC. The code is available in a GitHub repository[<https://github.com/sony/audio-visual-seld-dcase2023>] under the MIT license.

§.§ Experimental results

Table <ref> summarizes the performance of the audio-visual and audio-only SELD systems in both audio formats. Comparing the two formats, SELD systems with the FOA format show better SELD performance than those with the MIC format. In the FOA format, while the audio-visual SELD system shows a slightly worse location-aware F-score, it exhibits lower localization error with comparable localization recall. There is a similar trend of lower localization error in the MIC format. We further investigate the location-aware F-score over classes, considering both localization and detection aspects. Figure <ref> shows the F-score per class in the FOA format. We focus on five classes related to the human body, i.e., female and male speech, clapping, laughing, and footsteps, because the audio-visual SELD system uses person bounding boxes as visual input. We show the average score of these five classes as body-related on the left of the figure. The audio-visual system demonstrates a higher location-aware F-score for the body-related classes. On the other hand, the audio-visual system performs worse on the average of the other, non-body-related classes. The results suggest that the visual input, i.e., person bounding boxes, contributes to the localization and detection of body-related classes, whereas it may limit the performance on non-body-related classes. Further experimental results are in Appendix <ref>.

§ CONCLUSION

This paper attempts to broaden sound event localization and detection (SELD) to the audio-visual domain by introducing an audio-visual SELD task. We present an audio-visual dataset, Sony-TAu Realistic Spatial Soundscapes 2023 (STARSS23), which consists of multichannel audio data, video data, and spatiotemporal annotations of sound events in natural sound scenes. Furthermore, we present quantitative evaluations of an audio-visual SELD system compared with an audio-only system and demonstrate the benefits of visual object positions. SELD performance on various sound events still needs to be improved using audio-visual data. We also hope that STARSS23 opens a wide range of future research on spatial audio-visual tasks, taking advantage of the well-organized audio-visual recordings and detailed labels about spatial sound events.

We thank Akira Takahashi for his helpful code review and thank Atsuo Hiroe, Kazuya Tateishi, Masato Hirano, Takashi Shibuya, Yuji Maeda, and Zhi Zhong for valuable discussions about the data construction process.
The data collection and annotation at Tampere University have been funded by Google. This work was carried out with the support of the Centre for Immersive Visual Technologies (CIVIT) research infrastructure at Tampere University, Finland. ieee § CHECKLIST * For all authors... * Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? See Section <ref>. * Did you describe the limitations of your work? See Section <ref>. * Did you discuss any potential negative societal impacts of your work? See Appendix <ref>. * Have you read the ethics review guidelines and ensured that your paper conforms to them? We have read them and confirmed them. * If you are including theoretical results... * Did you state the full set of assumptions of all theoretical results? * Did you include complete proofs of all theoretical results? * If you ran experiments (e.g. for benchmarks)... * Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? <https://github.com/sony/audio-visual-seld-dcase2023> * Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? See Section <ref> and Appendix <ref>. * Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? See Table <ref> and Figure <ref>. * Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? See Appendix <ref> * If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... * If your work uses existing assets, did you cite the creators? See Section <ref>. * Did you mention the license of the assets? YOLOX <cit.> code from MMDetection is licensed under the Apache-2.0 license, free for research and commercial use. * Did you include any new assets either in the supplemental material or as a URL? <https://zenodo.org/record/7880637> * Did you discuss whether and how consent was obtained from people whose data you're using/curating? See Appendix <ref>. * Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? See Appendix <ref>. * If you used crowdsourcing or conducted research with human subjects... * Did you include the full text of instructions given to participants and screenshots, if applicable? See Appendix <ref>. * Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? See Appendix <ref>. * Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? See Appendix <ref>. § APPENDIX We show appendixes for data construction, data analysis, experiment, competition, social impact, and personal data handling. Finally, we answer the questions from Datasheets for Datasets <cit.>. § DATA CONSTRUCTION §.§ Sound scene recording The generic instructions for recording participants include information about duration, participants, props, active classes, and description of sound scenes. The recording duration can be shorter or longer as they are rough instructions. We set the basic structure of sound scenes, e.g., people gathering on a sofa and discussing the weekend. 
Following the rough instructions, we leave the other details of the sound scenes to the participants, e.g., they can walk wherever they want and talk about whatever they want without a fixed dialogue text. We show an example of the instructions:

Duration 3 min

Participants 3 people

Props Playing cards, mobile phone

Active classes Speech, laugh, mobile phone

Description

* 3 people gather on the sofa and talk about the weekend.

* Propose to play a game of cards, shuffle and distribute the cards.

* Mobile phone rings while playing, attend the call, walk around and talk for a few seconds and laugh during the call.

* Come back, sit and continue playing.

Sound scenes consist of target sound events, directional interference sounds, and background noise. Each target class contains diverse sounds, e.g., the speech classes include a few different languages, and the phone class has sounds from different mobile phones. In addition, several target classes correspond to super classes with some subclasses in the AudioSet ontology <cit.>, e.g., the domestic sounds class contains the vacuum cleaner, mechanical fan, and boiling subclasses. We provide the subset of sounds encountered in the recordings for the target classes in the form of more specific AudioSet-related labels. The subset information is summarized in Table <ref>, which is a re-post of a table in the STARSS22 paper <cit.>. Directional interference sounds are derived from computer keyboards, shuffling cards, dishes, pots, and pans; however, they are not annotated. Natural background noise is mainly related to HVAC (heating, ventilation, and air-conditioning), ranging from low to high levels.

Spatial and temporal annotations relied on careful mounting of tracking markers and wireless microphones <cit.>. The tracking markers were mounted on independent sound sources such as a mobile phone or a hoover. Head markers were also provided to the participants as headbands or hats. The head tracking results served as reference points for all human body-related sound classes. The mouth position for the speech and laughter classes, the feet position for the footsteps class, and the hand position for the clapping class were each approximated with a fixed translation from the head tracking result. Regarding clapping, participants were instructed to clap about 20 cm in front of their faces, while footsteps were projected from the head coordinates to floor level. Hence, the mounting positions were considered in the annotation process to translate the tracking data of each class into each sound source position. The wireless microphones were mounted on the lapel of each participant and on additional independent sound sources located far from the participants.

§.§ Annotation

At certain times, the motion capture (mocap) system could not track an attached marker, e.g., when markers were outside the mocap coverage, markers were moving too fast, or obstacles occluded markers. Whenever such misses were short, the tracking results were interpolated with Motive[<https://optitrack.com/software/motive/>]. If such misses were long and interpolating the results was not possible, the azimuth and elevation of sound sources were annotated based on the 360^∘ video data. For example, the direction of arrival (DOA) of the door class was often annotated from the video data because many doors were outside the view of the mocap.
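In such video-based annotation, a pixel position in the equirectangular frame maps directly to a direction. The following sketch shows this mapping for the 1920×960 frames under the dataset's DOA convention (azimuth zero at the front and increasing counter-clockwise, elevation zero at the horizon); it assumes the camera is level and that the horizontal center of the frame faces the front, so it approximates rather than exactly reproduces the annotators' procedure.

```python
def pixel_to_doa(x, y, width=1920, height=960):
    """Map a pixel in an equirectangular frame to (azimuth, elevation) in degrees.

    Assumes the horizontal center of the frame faces the front (azimuth 0),
    azimuth increases counter-clockwise (towards the left image edge), and
    the vertical center is at elevation 0 (top = +90, bottom = -90).
    """
    azimuth = -(x - width / 2.0) / width * 360.0
    elevation = (height / 2.0 - y) / height * 180.0
    return azimuth, elevation

# A door seen a quarter frame to the left of center and slightly above it:
print(pixel_to_doa(480, 400))   # approximately (90.0, 15.0)
```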
To calculate the source distance in these video-annotated cases, we used, in addition to the video data, information on room dimensions and installation positions from our recording log. Any interpolated spatial annotations were visually checked in the confirmation videos.

§.§ Data format

We use the WAV file format for the audio data and the MP4 file format for the video data. The metadata are tabulated and provided in CSV file format. The sound event classes, DOAs, and distances are provided in the following format:

* frame number, active class index, source number index, azimuth, elevation, distance

with all labels given as integers. Frame, class, and source enumeration begins at 0. Frames correspond to a temporal resolution of 100 msec. Azimuth and elevation angles are given in degrees, rounded to the closest integer value, with azimuth and elevation being zero at the front, azimuth ϕ∈ [-180^∘, 180^∘], and elevation θ∈ [-90^∘, 90^∘]. The azimuth angle increases counter-clockwise (ϕ = 90^∘ at the left). Distances are provided in centimeters, also rounded to the closest integer value. The source index is a unique integer for each source in the scene. Note that each unique participant gets assigned one identifier, but not individual events produced by the same participant; e.g., a clapping event and a laughter event produced by the same person have the same identifier. Independent sources that are not participants (e.g., a loudspeaker playing music in the room) get a 0 identifier. Note that the source index and the source distance are only included as additional information that can be exploited during training. An example line could be as follows:

* 10, 1, 1, -50, 30, 181

which describes that in frame 10, an event of class male speech (class 1) belonging to one participant (source 1) is active at location (-50^∘, 30^∘, 181 cm).

§ DATA ANALYSIS

§.§ Duration

Figure <ref> illustrates the box plots of duration, i.e., how long a sound event lasts. The figure shows that there are classes with similar duration trends. The speech classes show a wide range of durations, whereas laughing is typically shorter than speech. While phone and bell have similar medians, the box of the phone class is longer because phone calls repeat the recorded sound until they are answered. Several classes have longer durations, e.g., domestic sounds, music, musical instruments, and faucet, whereas collision or tapping sounds such as door and knock are relatively short.

§.§ DOA

Figure <ref> shows the distribution of DOAs of the remaining classes not depicted in Figure <ref>. While the DOAs of human-produced classes such as speech, laugh, clap, or footsteps are dispersed across the 360^∘ plane, classes such as door, knock, or faucet result in specific discrete points in the plot due to their fixed positions in the room. The music and bell classes show trends similar to the door class, as their sound sources are rarely moved. The domestic sounds, phone, and musical instrument classes show trends similar to the speech classes, as the respective sources are sometimes still and other times moving.

§ EXPERIMENT

§.§ Experimental setting

We add a few details on the experimental settings. We apply the short-term Fourier transform (STFT) to obtain audio features with a 20-ms frame length and a 10-ms frame hop. We keep the input feature length at 1.27 sec during inference and set the shift length to 1.2 sec.
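As an illustration of this audio front-end, the sketch below computes multichannel amplitude spectrograms and inter-channel phase differences (IPDs) with the 20-ms frame length and 10-ms hop stated above; taking the first channel as the phase reference and omitting any normalization are simplifying assumptions rather than details of our implementation.

```python
import numpy as np
from scipy.signal import stft

def audio_features(wav, fs=24000, frame_ms=20, hop_ms=10):
    """Multichannel amplitude spectrograms and IPDs for one audio segment.

    wav: array of shape (channels, samples), e.g. a 4-channel FOA or MIC clip.
    Returns (amplitude, ipd) with shapes (channels, freq, frames) and
    (channels - 1, freq, frames).
    """
    nperseg = int(fs * frame_ms / 1000)                # 480 samples = 20 ms
    hop = int(fs * hop_ms / 1000)                      # 240 samples = 10 ms
    _, _, spec = stft(wav, fs=fs, nperseg=nperseg,
                      noverlap=nperseg - hop, axis=-1)
    amplitude = np.abs(spec)                           # per-channel magnitudes
    phase = np.angle(spec)
    # IPD relative to the first channel, wrapped to [-pi, pi].
    ipd = np.angle(np.exp(1j * (phase[1:] - phase[:1])))
    return amplitude, ipd

# A 1.27-second, 4-channel dummy segment at 24 kHz:
segment = np.random.randn(4, int(1.27 * 24000))
amp, ipd = audio_features(segment)
print(amp.shape, ipd.shape)   # (4, 241, n_frames) and (3, 241, n_frames)
```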
For visual features, we use the pre-trained YOLOX object detection model[<https://github.com/open-mmlab/mmdetection/blob/master/configs/yolox/yolox_tiny_8x8_300e_coco.py>], which is trained on the COCO dataset <cit.>. While COCO has 80 object classes, we focus on the person, cell phone, and sink classes because they are strongly related to the 13 sound events in STARSS23. Only the person class is stably detected in our preliminary experiments on STARSS23 videos. Therefore, we use the model to get bounding boxes for the person class.

We use a batch size of 16 and the Adam optimizer with a weight decay of 10^-6 to train the audio-visual sound event localization and detection (SELD) system. The learning rate is set to 0.001. We validate and save model weights every 1,000 iterations up to 20,000 iterations. We select the model that demonstrates the best aggregated SELD error, ℰ_SELD, calculated as

ℰ_SELD = [ ER_20^∘ + ( 1 - F_20^∘ ) + LE_CD / 180^∘ + ( 1 - LR_CD ) ] / 4.

When there are no true positive outputs in a class, we set its localization error to 180^∘ to compute the macro average. We report the average scores and error bars of five experiments. The model parameter size is 0.8 M; the model size is kept intentionally small for easier trials. A single GPU (e.g., GeForce GTX 1080 Ti) is used for training. The training takes around six hours.

§.§ Experimental results

In addition to the F-score per class in the first-order ambisonics (FOA) format in Figure <ref>, we show the other SELD metrics per class in both the FOA and tetrahedral microphone array (MIC) formats. As shown in Figure <ref>, the F-score in the MIC format shows a similar trend to that in the FOA format. While the audio-visual SELD system performs worse in F-score for non-body-related classes, it demonstrates a higher F-score for body-related classes. Figures <ref> and <ref> show that the audio-visual system demonstrates lower localization error for body-related classes in both formats. There is no significant trend in the localization recall between the two formats.

§.§ Additional experiment

We add a few experiments to support the above experimental results, i.e., demonstrating that the audio-visual system contributes to the performance on body-related classes. In the additional experiments, we use only body-related classes as the target classes of the SELD systems. The systems are trained and evaluated with the activities and DOAs of the speech, clapping, laughing, and footsteps classes. We follow the previous experiments in the other settings. Table <ref> shows the SELD performance on body-related classes only, for the audio-visual and audio-only systems evaluated on dev-set-test of STARSS23. The audio-visual SELD system scores better in all metrics than the audio-only system. The visual input, i.e., bounding boxes of the person class, enables the audio-visual system to localize and detect body-related classes more accurately. The audio-only system in the MIC format shows a high standard deviation in localization error because a few classes are sometimes set to 180^∘ when they have no true positive output. Even if we omit such cases, the audio-visual system still shows a lower localization error.

§ COMPETITION

STARSS23 has served as the development set and evaluation set for the SELD Task of the DCASE 2023 Challenge[<https://dcase.community/challenge2023/task-sound-event-localization-and-detection-evaluated-in-real-spatial-sound-scenes>], which aims to accelerate audio-only and audio-visual SELD research.
The task participants use the development audio/video recordings and labels to train and validate their SELD systems in the development process. The evaluation recordings without labels are used to produce system outputs for the challenge evaluation phase. If researchers wish to compare their system against the submissions of the challenge, they will have directly comparable results if they use the evaluation data as their testing set. Also, the implementation of the audio-visual SELD system described herein, trained and evaluated with STARSS23, has served as the baseline method for the audio-visual track of the challenge.

§ SOCIAL IMPACT

STARSS23 enables research on audio-only and audio-visual SELD tasks, which form the backbone of various real-world applications such as acoustic and audio-visual monitoring, intelligent home applications, and audio-visual machine perception. The dataset, together with the associated challenge, accelerates such research for research institutes and industry, since it is the first of its kind based on real annotated recordings. Multiple research institutes, university laboratories, and industrial research and development groups have already shown interest in the dataset and its use, either as part of the DCASE challenge or outside of it in independent published studies. We expect the dataset to set the standard in the upcoming years for audio-visual SELD-related studies, due to its unique spatial annotations from real tracked people and sound sources, and its spatial audio content, which is becoming more and more relevant as monitoring and smart home devices increasingly employ microphone arrays. The dataset also offers opportunities for cross-regional evaluation, with its recordings coming from two sites geographically far apart. Of course, as with many strongly annotated datasets, the diversity of sound events is limited and cannot capture the conditions of many real-world, application-specific scenes. However, we believe that it is a useful contribution to the development and maturation of such systems, at which point we expect more application-specific SELD datasets to appear.

§ PERSONAL DATA HANDLING

STARSS23 data were recorded with over 50 voluntary participants. Before the recording, we explain our research purpose, how we record the sound scenes, and how we treat and release the recording data. Regarding the recording process, an example of the generic instructions is in Appendix <ref>. Our explanation is given both in written form and as a verbal description. Participants can ask us questions related to the recording and public release. We also explain potential risks, i.e., the recording data containing personally identifiable information, and our Institutional Review Board (IRB) approvals. The personally identifiable information consists of raw speech and faces, the latter of which are blurred in the released videos. The participants are also instructed not to reveal personal information during the recordings and to limit themselves to conversations on generic topics. After the explanation and Q&A, when participants understand the purpose and risks of recording and release, each participant signs the consent form.

§ DATA SHEET

For dataset documentation, we take the questions from Datasheets for Datasets <cit.> and answer them.

§.§ Motivation

* Q: For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.
A: This dataset is created to tackle audio-visual sound event localization and detection (SELD) tasks.
While visual data about sound source objects can support SELD tasks, e.g., feet are a potential source of footsteps, the existing datasets did not contain the complete multichannel audio, video, and annotation set. STARSS23 serves real sound scene recording with multichannel audio, video data aligned with the audio, and spatiotemporal annotation of sound events. STARSS23 allows the incorporation of audio-visual correspondence into multichannel audio signal processing methods. * Q: Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? A: Creative AI Laboratory (CAL) at Sony Group and Audio Research Group (ARG) at Tampere University. * Q: Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. A: The data collection and annotation at Tampere University has received funding by Google. * Q: Any other comments? A: N/A. §.§ Composition * Q: What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. A: Real sound scene recordings with multichannel audio data, video data aligned with the audio data, and annotations of temporal activation, the direction of arrival (DOA), and distance of sound events. * Q: How many instances are there in total (of each type, if appropriate)? A: The development set totals about 7 hours and 22 minutes, of which 168 clips were recorded with 57 participants in 16 rooms with annotation. The evaluation set totals about 3.5 hours and 79 clips without annotation. * Q: Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). A: It contains all possible instances. * Q: What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. A: Raw audio and video data, but we convert 32ch 48kHz audio recordings to 4ch 24kHz, and 3840×1920 360^∘ video recordings to 1920×960 equirectangular. We also conduct face-blurring on video data to anonymize identical information. * Q: Is there a label or target associated with each instance? If so, please provide a description. A: The dataset contains temporal activation, DOA, and source distance labels of 13 target sound event classes. The classes are female speech, male speech, clapping, telephone, laughter, domestic sounds, footsteps, door, music, musical instrument, water tap, bell, and knock. * Q: Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. A: The evaluation set has no annotation because the set is used in a testing phase of the SELD task of the DCASE 2023 challenge. 
While almost all audio recordings in the development and evaluation set are accompanied by synchronized video recordings, only 12 audio recordings in the development set are missing videos (from fold3_room21_mix001.wav to fold3_room21_mix012.wav). * Q: Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. A: In each clip, we use the same tag, e.g., foldX_roomY_mixZ. We use the tag for audio (.wav), video (.mp4), and labels (.csv). * Q: Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. A: Yes. STARSS23 has the dev-set-train part and the dev-set-test part for the development process. * Q: Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. A: In addition to target sound events, the sound scene recordings contain directional interference sounds and background noise. That makes the dataset a more realistic situation. We confirm all the labels by listening to the audio and watching the video. If there are any errors, they would be negligible. * Q: Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a dataset consumer? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate. A: Self-contained. * Q: Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor–patient confidentiality, data that includes the content of individuals’ non-public communications)? If so, please provide a description. A: No. * Q: Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. A: No. * Q: Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. A: STARSS23 set female and male speech as two target sound event classes. The frame coverages of both classes are almost equal; 28.4 % and 31.4 %, respectively. * Q: Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how. A: As an audio-visual sound scene recording dataset, STARSS23 contains raw speech data. However, faces in video data are blurred, and the talk contents have no personal topics. So it is hard to identify individuals. We also explain to participants the potential risk of identical information before recording. After the explanation, participants sign a consent form for recording when they understand the potential risk. 
* Q: Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals race or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? If so, please provide a description. A: No. * Q: Any other comments? A: N/A. §.§ Collection process * Q: How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If the data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. A: Each clip of a sound scene is recorded in a room with participants and sound props. Participants improvise a scene following a generic instruction about the sound scene and event. * Q: What mechanisms or procedures were used to collect the data (e.g., hardware apparatuses or sensors, manual human curation, software programs, software APIs)? How were these mechanisms or procedures validated? A: Multichannel audio and video data are recorded with a 32ch microphone array and 360^∘ camera, respectively. A motion capture system and wireless microphones are also recorded for annotation. The specific recording equipment is Eigenmike em32[<https://mhacoustics.com/products#eigenmike1>], Ricoh Theta V[<https://theta360.com/en/about/theta/v.html>], Optitrack Flex 13[<https://optitrack.com/cameras/flex-13/>], and Røde Wireless Go II[<https://rode.com/en/microphones/wireless/wirelessgoii>] respectively. * Q: If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? A: N/A. * Q: Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? A: Voluntary participants act in a sound scene, and authors record the scene. * Q: Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. A: The first round of recordings was collected between September 2021 and April 2022. A second round of recordings was collected between November 2022 and February 2023. * Q: Were any ethical review processes conducted (e.g., by an institutional review board)? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation. A: Yes. We get approvals from our Institutional Review Board (IRB). Following the discussion about personally identifiable information, we conduct face-blurring on video data and explain to participants about potential risks, i.e., recording data containing identifiable information. * Q: Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)? A: From the individuals. * Q: Were the individuals in question notified about the data collection? 
If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself. A: Yes. Before the recording, we explain our research purpose, how we record sound scenes, and how we treat and release the recording data. Our explanation is based on text format and verbal description. Participants can ask us questions related to recording and release. * Q: Did the individuals in question consent to the collection and use of their data? If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented. A: Yes. After our explanation and Q&A, when participants understand the purpose and risk of recording and release, each participant signs the consent form. * Q: If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate). A: Yes. In such a case, they can contact authors. * Q: Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. A: N/A. * Q: Any other comments? A: N/A. §.§ Preprocessing/cleaning/labeling * Q: Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remaining questions in this section. A: We convert audio and video data to the appropriate size. We also annotate temporal activation and DOA of sound events. After annotating, we confirm the labels by listening to the audio and watching the video. * Q: Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. A: We saved “raw” (large size) audio and video data for future use. If one is interested in them, one can contact the authors. * Q: Is the software that was used to preprocess/clean/label the data available? If so, please provide a link or other access point. A: We use python code for audio data conversion, and RICOH THETA[<https://theta360.com/en/about/application/pc.html>] and FFmpeg[<https://ffmpeg.org/>] for video data conversion. To annotate temporal activation, Audacity[<https://www.audacityteam.org/>] and REAPER[<https://www.reaper.fm/>] are used. Tracking results are used in Motive[<https://optitrack.com/software/motive/>]. * Q: Any other comments? A: N/A. §.§ Uses * Q: Has the dataset been used for any tasks already? If so, please provide a description. A: No, the dataset has not yet been used for any scientific papers. This paper is the first to use the dataset. * Q: Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. A: Yes. We have the code repository of the SELD system: <https://github.com/sony/audio-visual-seld-dcase2023>. 
* Q: What (other) tasks could the dataset be used for? A: Apart from audio-visual SELD tasks, one could use the dataset for audio-only SELD tasks, audio-visual speaker DOA estimation tasks, and audio-visual sound source localization tasks. Audio-visual sound source distance estimation could be another task. * Q: Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a dataset consumer might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other risks or harms (e.g., legal risks, financial harms)? If so, please provide a description. Is there anything a dataset consumer could do to mitigate these risks or harms? A: No. * Q: Are there tasks for which the dataset should not be used? If so, please provide a description. A: No. * Q: Any other comments? A: N/A. §.§ Distribution * Q: Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. A: Yes. STARSS23 is publicly available at <https://zenodo.org/record/7880637>. * Q: How will the dataset will be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? A: STARSS23 is distributed via <https://zenodo.org/record/7880637>. The DOI is "10.5281/zenodo.7709051". * Q: When will the dataset be distributed? A: STARSS23 is already distributed. * Q: Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. A: STARSS23 is licensed under MIT License. * Q: Have any third parties imposed IP-based or other restrictions on the data associated with the instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions. A: No. * Q: Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. A: No. * Q: Any other comments? A: N/A. §.§ Maintenance * Q: Who will be supporting/hosting/maintaining the dataset? A: Sony Group and Tampere University. * Q: How can the owner/curator/manager of the dataset be contacted (e.g., email address)? A: Please contact and . * Q: Is there an erratum? If so, please provide a link or other access point. A: All changes to the dataset will be announced on <https://zenodo.org/record/7880637>. * Q: Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to dataset consumers (e.g., mailing list, GitHub)? A: Yes, all the updates will be synced on the website. * Q: If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were the individuals in question told that their data would be retained for a fixed period of time and then deleted)? 
If so, please describe these limits and explain how they will be enforced. A: No. * Q: Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers. A: The older dataset versions remain in Zenodo if any changes are made. * Q: If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to dataset consumers? If so, please provide a description. A: Yes. Others can contact the authors of this paper to describe their proposed extension or contribution. We would discuss their proposed contribution to confirm its validity, and if confirmed, we will release a new version of the dataset on Zenodo and announce it accordingly. * Q: Any other comments? A: N/A.
http://arxiv.org/abs/2306.04433v1
20230607134649
Cross-Database and Cross-Channel ECG Arrhythmia Heartbeat Classification Based on Unsupervised Domain Adaptation
[ "Md Niaz Imtiaz", "Naimul Khan" ]
eess.SP
[ "eess.SP", "cs.AI" ]
The manuscript was submitted on May 24, 2023. This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Government of Canada's New Frontiers in Research Fund (NFRF). Md Niaz Imtiaz and Naimul Khan are with the Department of Electrical, Computer and Biomedical Engineering, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada (e-mail: [email protected]; [email protected]). Cross-Database and Cross-Channel ECG Arrhythmia Heartbeat Classification Based on Unsupervised Domain Adaptation Md Niaz Imtiaz, and Naimul Khan July 31, 2023 ================================================================================================================ The classification of electrocardiogram (ECG) plays a crucial role in the development of an automatic cardiovascular diagnostic system. However, considerable variances in ECG signals between individuals is a significant challenge. Changes in data distribution limit cross-domain utilization of a model. In this study, we propose a solution to classify ECG in an unlabeled dataset by leveraging knowledge obtained from labeled source domain. We present a domain-adaptive deep network based on cross-domain feature discrepancy optimization. Our method comprises three stages: pre-training, cluster-centroid computing, and adaptation. In pre-training, we employ a Distributionally Robust Optimization (DRO) technique to deal with the vanishing worst-case training loss. To enhance the richness of the features, we concatenate three temporal features with the deep learning features. The cluster computing stage involves computing centroids of distinctly separable clusters for the source using true labels, and for the target using confident predictions. We propose a novel technique to select confident predictions in the target domain. In the adaptation stage, we minimize compacting loss within the same cluster, separating loss across different clusters, inter-domain cluster discrepancy loss, and running combined loss to produce a domain-robust model. Experiments conducted in both cross-domain and cross-channel paradigms show the efficacy of the proposed method. Our method achieves superior performance compared to other state-of-the-art approaches in detecting ventricular ectopic beats (V), supraventricular ectopic beats (S), and fusion beats (F). Our method achieves an average improvement of 11.78% in overall accuracy over the non-domain-adaptive baseline method on the three test datasets. § INTRODUCTION According to research by the UN <cit.>, cardiovascular illness is now the leading cause of mortality worldwide. Studies indicate that 80% of sudden cardiac fatalities have a strong correlation with arrhythmia <cit.>. Therefore, a timely diagnosis of arrhythmia is needed, and it must be done accurately. The identification of cardiac arrhythmias heavily relies on the electrocardiogram (ECG), a physiological signal that provides information about the electrical activity of the heart. Furthermore, the prevention, diagnosis, and treatment of cardiovascular illnesses all depend on high-precision automatic diagnostics. The classification of heartbeat is a fundamental and common task in the automated detection of arrhythmias. Over the past few years, several machine learning and deep learning approaches have been put forth to identify various heart irregularities using ECG signals. 
Conventional methods employ several classification algorithms, like Support Vector Machines <cit.>, and K-Nearest Neighbor <cit.>, on hand-crafted features. Several techniques for classifying ECG heartbeats have been developed as a result of recent advancements in deep learning. These methods leverage the potent feature-learning capabilities of deep learning algorithms and large volumes of annotated clinical data. The proposed deep learning methods can be broadly categorized into two types: convolutional neural networks (CNN) <cit.> and recurrent neural networks (RNN) <cit.>. CNNs employ several convolutions with various filters to directly extract key features from the ECG data. These high-level features are then forwarded to classifiers for prediction. On the other hand, RNNs evaluate the temporal relationships within ECG data and incorporate features at various time steps. Even though deep learning techniques have advanced significantly, they still face challenges in the cross-domain paradigms. Individual differences have a significant impact on the morphological properties of ECG signals. For this reason, when evaluated on new patient data, models perform significantly worse. Domain shift <cit.>, which refers to differences between test and training data, may go against the fundamental identically distributed assumption in a learning-based scheme. Moreover, deep learning models require an extensive amount of labeled data for training to achieve good results. In the real world, collecting sufficient amounts of labeled ECG data is typically costly and laborious. The ECG signal takes a very long time to capture, and the changes are subtle <cit.> <cit.>, making manual labeling exceedingly time-consuming. Also, in some real-world settings, it is hard to obtain the data labels of ECG signals collected from new participants, which prevents us from re-training new supervised models for these cases. Therefore, it is a challenging but necessary endeavor to achieve precise cross-domain ECG heartbeat classification. Our solution to the problems discussed above is a domain-adaptive deep learning model based on cluster discrepancy optimization to classify arrhythmia heartbeats without additional annotation from experts. The model comprises a feature extractor based on residual blocks. Inspired by Ye et al.'s study <cit.>, we add a bi-classifier after the feature extractor. A bi-classifier minimizes inconsistencies in the predictions made by a single classifier. In a bi-classifier network, the outputs of the two classifiers are combined to get the final prediction. The drawback of a bi-classifier network is that combining the two outputs could result in an incorrect prediction, even if one of the classifiers has predicted the correct label. So, proper training to minimize the discrepancy between classifiers is required. The proposed method is composed of three stages: pre-training, computing cluster centroids, and adaptation. The pre-training stage trains the model using source data and labels to correctly classify the ECG segments by minimizing the classification loss and the discrepancy loss between the two classifiers. Taking inspiration from Sagawa et al.'s study <cit.>, we incorporate a distributionally robust optimization (DRO) method during the training process. The DRO method tackles the issue of disappearing worst-case training loss. 
Although reducing the vanishing worst-case training loss enhances the accuracy of predictions, the DRO method is susceptible to issues when there is a large discrepancy between source and target distributions. During the computing cluster centroids and adaptation stage of our method, our objective is to minimize the distribution differences between the source and target distributions. The cluster centroids of the source and target domains are computed in the second stage. These centroids are used to compact samples within the same cluster and to move the clusters farther apart from each other. Finally, in the adaptation stage, the feature distribution differences between the source and target domain are minimized through four loss functions. Our approach is capable of enhancing the performance of deep learning models on new data without the need for any supplementary human effort. Unlike domain-specific techniques that maintain a distinct model for each domain, our approach employs a single global model for all test domains. The suggested approach is appropriate for applications that demand efficient adaptation to new data from diverse distributions, such as customized portable devices and online diagnostic systems. This paper's key contributions are as follows: (1) A novel technique to select confident predictions in the unlabeled target domain is proposed, which in turn improves the precision of cluster separation. (2) To mitigate the discrepancy in feature distributions among domains, two new objective functions are introduced: the running combined loss and the inter-domain cluster discrepancy loss. These objective functions are utilized alongside two existing ones <cit.>, namely the cluster-compacting loss and the cluster-separating loss, during the adaptation stage. (3) Two-stage training after pre-training is performed to efficiently organize distinguishable clusters in the source domain first and then use them to minimize the cluster discrepancy between the source and target domains. Inspired by Niu et al. <cit.>, we also combine three time features with the deep features to improve the performance of our proposed approach further. The efficacy of our method is demonstrated through experimental results conducted on public databases. The MIT-BIH Arrhythmia Database (MITDB) <cit.> is used to train the proposed model, while the St. Petersburg INCART 12-lead Arrhythmia Database (INCARTDB) and the European ST-T Dataset (ESTDB) <cit.> are used to test it. This study considers both cross-database and cross-channel paradigms. Our proposed method is compared against five recent approaches <cit.> that are recognized for their high performance. This comparison is made by evaluating them using the same network architecture and experimental setting as our proposed method. Our proposed technique achieves the overall accuracy of 84.61%, 82.32%, and 76.44% on INCARTDB (cross-domain paradigm), INCARTDB (cross-domain and cross-channel paradigm), and ESTDB, respectively, which is considerably higher than other approaches. An ablation analysis and comparison of our proposed method with the method without domain adaptation are shown in the experimental results section. § RELATED WORKS Unsupervised domain adaptation intends to recognize the unlabeled target data by transferring the deep feature knowledge obtained from the labeled source data. Sagawa et al. 
suggested a group distributionally robust optimization algorithm, which necessitates the samples to be explicitly annotated with their respective groups <cit.>. Their model was trained to minimize the loss that would occur in the worst-case scenario over groups present in the training data. Sun and Saenko extended the idea proposed by Sun et al. <cit.> to propose a deep correlation alignment (CORAL) method to handle situations where the target domain is unlabeled<cit.>. By means of a linear transformation, the CORAL technique adjusts the second-order statistics of the source and target distributions to match each other. Their approach involves integrating CORAL directly into deep networks by creating a differentiable loss function that reduces the gap between the correlations of the source and target. A heuristic training technique called representation self-challenging (RSC) was introduced by Huang et al., which considerably enhances the ability of CNNs to generalize to out-of-domain data <cit.>. Through an iterative process, RSC eliminates the dominant features that are activated on the training data and compels the network to activate the remaining features that are correlated with labels. Their method seems to provide feature representations that are useful for handling out-of-domain data, without requiring any knowledge about the new domain beforehand or the need to learn additional network parameters. Several techniques for domain adaptation have been proposed to work with ECG data. Niu et al. suggested an adversarial domain adaptation-based deep learning approach for classifying ECGs <cit.>. In their model, the high-level features obtained by the feature extractor are passed through a domain discriminator module and a classifier module in parallel. The domain discriminator module resolves the issue of insufficient model depth and low-feature abstraction. The classifier module combines the temporal features with the extracted high-level features to increase feature diversity. A multi-source domain generalization model for ECG classification was developed by Hasani et al. to handle the distribution discrepancy issue that arises when data are collected from numerous sources under various acquisition situations <cit.>. They used a combined convolutional neural network (CNN) and long short-term memory (LSTM) model to obtain features and the adversarial domain generalization technique to avoid the inconsistency between the training and test data. They also used a variety of augmentation techniques, such as lead dropout, random ECG padding and cropping, and introducing low-frequency aberrations, to boost generalization. Wang et al. introduced a domain adaptive ECG arrhythmia classification (DAEAC) method to enhance the deep neural network's performance in the inter-patient paradigm<cit.>. To reduce the distribution differences between the data used for training and testing, they introduced two loss functions named cluster-aligning loss and cluster-maintaining loss. A subdomain adaptive deep network (SADN) was introduced by Jin YR et al. where they excavated the detection knowledge from labeled source domain data and used the knowledge to enhance performance on unlabeled target domain data <cit.>. They used convolutional layers, residual blocks, and squeeze-and-excitation-residual blocks for automatically extracting significant deep features. To limit data distribution disagreement across datasets, they used a loss function that incorporates the concept of local maximum mean discrepancy. 
Although a few attempts were made for ECG arrhythmia classification across different domains with distribution disparities, they still suffer from unsatisfactory performance for different types of arrhythmias. Inspired by the analysis and considerations mentioned above, this paper proposes a novel deep adaptive model that utilizes knowledge gained from labeled source domain data to enhance classification accuracy on unlabeled target domain data. While some recent domain-adaptation-based methods have presented results on ECG classification <cit.>, they either utilize different groups as source-target within the same database or use nonpublic databases. None of them have utilized the same train-test configuration as this study. We evaluate recent approaches against our proposed method by employing identical network architecture and train-test configuration. § PROPOSED METHOD §.§ Framework The framework of the proposed approach is demonstrated in Fig. 1. In general, we use the term source domain to refer to the training dataset and target domain to refer to the test dataset. Assume we have labeled source data X_s= {x_s^i}_i=1^N_s and their corresponding labels Y_s= {y_s^i}_i=1^N_s, as well as unlabeled target data X_t= {x_t^i}_i=1^N_t. Our objective is to learn a function F using both labeled source data and unlabeled target data, so that it can predict the labels of target data with high accuracy. We propose a network composed of a feature extractor and two parallel classifiers (Fig. 2). Feature extractor takes ECG segments and automatically obtains distinctive deep features. The feature extractor is composed of three residual blocks and three max-pooling layers. Each residual block has three 1D convolution layers. To generate deep feature maps, the input layer undergoes the application of the first and second convolution layers. Similarly, the third convolution layer is employed on the input layer to produce shallow feature maps. The residual blocks are responsible for compressing the input vector's length and obtaining deep features for the classification process that follows. The deep features are then passed through two parallel classifiers. In situations where a single classifier produces inaccurate predictions despite the feature extractor generating good distinctive features, a bi-classifier can rectify the problem. Moreover, we use the discrepancy between the two classifiers to identify confident predictions in the target domain. Each classifier has three fully connected layers. Before the last fully connected layer of the classifier, three corresponding time features (described in the data preprocessing and network inputs section) are added to the deep features, which enhances the feature diversity. The outputs of the two classifiers are then combined to get the predicted heartbeat category. The proposed method consists of the following stages. §.§.§ Pre-training We train the model using the labeled source data and use a distributionally robust optimization (DRO) technique <cit.> during the training process. DRO enables us to train models that reduce the worst-case loss in a set of predefined groups during the training process. To prevent the model from relying on false correlations, which could lead to high losses on some data groups, we opt to train the model to reduce the worst-case loss across training data groups. Assume predicting labels y ∈ Y taking input x ∈ X. Let we have a model family Θ and loss l : Θ × (X × Y) → ℝ_+. The training samples come from a distribution P. 
The usual objective is to obtain a model θ∈Θ that reduces the expected loss 𝔼_ℙ [ l ( θ ; (x,y))] under the distribution P. The typical method for achieving this objective in training is empirical risk minimization (ERM): θ̂_ERM := arg min_θ∈Θ 𝔼_(x,y) ∼P̂ [ l ( θ ; (x,y))] where P̂ denotes the empirical distribution over the training set. In DRO <cit.>, we instead intend to reduce the worst-case expected loss over an uncertainty set of distributions ϱ: min_θ∈Θ { R(θ) := sup_Q∈ϱ 𝔼_(x,y) ∼ Q [ l ( θ ; (x,y))] } The uncertainty set ϱ represents the range of potential test distributions that our model should be capable of performing effectively on. If we select a general family ϱ, like a divergence ball centered around the training distribution, it can make our model more resilient to various distributional shifts. However, this approach can also result in excessively cautious models that optimize for implausible worst-case distributions. We apply DRO to the weighted cross-entropy loss and obtain the classification loss L_cls. Additionally, we calculate the classifier discrepancy loss (L_dis) by measuring the Euclidean distance between the outputs of the two classifiers. The objective function in the pre-training stage is the weighted sum of the classification loss and the classifier discrepancy loss: L= L_cls + α L_dis where α is a hyperparameter. Upon completion of the pre-training stage, we obtain a robust model that classifies heartbeats in the source domain with high precision. §.§.§ Cluster-centroid computing The cluster hypothesis is a basic assumption in the classification task <cit.>. Samples that belong to the same class should reside in the same cluster, whereas samples from different classes should be at a considerable distance from each other. Following this assumption, we first determine the centroids of the clusters in the source domain by averaging the outputs of the feature extractor for every heartbeat category. Then we train the model using two loss functions, the cluster-compacting loss (4) and the cluster-separating loss (5) <cit.>, along with the classification loss, to simultaneously decrease the intra-class spacing and increase the inter-class spacing. L_comp= Σ_k=1^K Σ_i=1^n_k D(𝔼[X_k], X_k,i) L_sep= Σ_l=1^K Σ_k≠ l max(T_m-D(𝔼[F(X_l)], 𝔼[F(X_k)]), 0) where n_k is the number of samples belonging to the k^th category and K is the number of heartbeat categories. D denotes the Euclidean distance. We compute L_comp and L_sep using the cluster centroids in the source domain. The cluster-compacting loss minimizes the distance between samples within a class. The cluster-separating loss keeps the cluster centroids of distinct categories at least a pre-defined threshold T_m apart from each other <cit.>. Together, we refer to these cluster-related loss functions as the cluster loss (L_ctr). The loss function in this stage is the weighted sum of the classification loss and the cluster loss: L= L_cls + γ_1 L_comp + γ_2 L_sep where γ_1 and γ_2 are hyperparameters. Once the model has been trained to optimize the source clusters, we compute the centroids of the well-organized clusters (CC_s). As we do not have labels for the target domain, we first identify confident predictions (CP_t) in the target domain to compute the target cluster centroids. To identify the confident predictions in the target domain, we propose a novel high-confidence prediction technique.
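Before detailing that technique, the pre-training and cluster objectives introduced above can be summarized in code. The following PyTorch-style sketch is purely illustrative; the tensor shapes, the grouping used for the worst-case loss, and all function names are our assumptions rather than the authors' released implementation:

```python
import torch
import torch.nn.functional as F

def dro_classification_loss(logits, labels, groups, num_groups, class_weights=None):
    # Worst-group weighted cross-entropy in the spirit of group DRO: instead of
    # averaging over all samples, take the largest per-group mean loss.
    # (For simplicity, groups absent from the batch are skipped; the sketch
    # assumes at least one group is present.)
    per_sample = F.cross_entropy(logits, labels, weight=class_weights, reduction="none")
    group_means = [per_sample[groups == g].mean()
                   for g in range(num_groups) if (groups == g).any()]
    return torch.stack(group_means).max()

def discrepancy_loss(out1, out2):
    # L_dis: Euclidean distance between the two classifiers' outputs.
    return torch.norm(out1 - out2, dim=1).mean()

def cluster_losses(features, labels, centroids, t_m):
    # L_comp (Eq. 4): pull features towards the centroid of their own class.
    # L_sep  (Eq. 5): push centroids of different classes at least t_m apart.
    comp = features.new_zeros(())
    for k, c in enumerate(centroids):
        mask = labels == k
        if mask.any():
            comp = comp + torch.norm(features[mask] - c, dim=1).sum()
    sep = features.new_zeros(())
    K = len(centroids)
    for l in range(K):
        for k in range(K):
            if k != l:
                sep = sep + torch.clamp(t_m - torch.norm(centroids[l] - centroids[k]), min=0.0)
    return comp, sep
```

During pre-training, the total objective would then be assembled as L = L_cls + α·L_dis, and in the cluster-centroid stage as L = L_cls + γ_1·L_comp + γ_2·L_sep, mirroring Eqs. (3) and (6).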
We first calculate the mean intra-cluster distance (7) and the mean classifier discrepancy (8) in the source domain. The mean intra-cluster distance measures the average distance between the samples in a cluster and the center of that cluster. The mean classifier discrepancy is the average difference between the outputs of the two classifiers. M_ctr= 1/n_s^k Σ_i=1^n_s^k D(F(X_i), CC_s^k) for each class k ∈ {1,…,K} M_dis= 1/N_s Σ_i=1^N_s D(C_1,i, C_2,i) where N_s is the total number of samples in the source domain and n_s^k is the number of samples belonging to the k^th category. We feed the target domain data into the pre-trained network. If the softmax value is greater than 0.99 for a particular class, the sample is a candidate for a confident prediction. Instead of immediately accepting it as a confident prediction, we verify two additional conditions. We calculate the distance between the feature extractor's output for that sample and the source cluster centroid of the predicted class and check whether it is less than M_ctr of the predicted class. We also verify whether the discrepancy between the outputs of the two classifiers for that sample is less than M_dis. If it satisfies all the conditions, we consider it a confident prediction. In some cases, the model confidently predicts the wrong class, which is why relying only on the softmax score does not suffice. This technique filters out misclassifications by the classifier and misleading features from the feature extractor, thereby decreasing the likelihood of the model making erroneous confident predictions. After obtaining the confident predictions, we calculate the cluster centroids (CC_t) for the target domain based on them. §.§.§ Domain adaptation This stage minimizes the gap between the source and target domains and organizes the clusters efficiently. We feed batches from both the source and target domains. We introduce two new loss functions: the inter-domain cluster discrepancy loss (9) and the running combined loss (10), which are added to the cluster-compacting loss (4) and the cluster-separating loss (5), along with the classification loss, to train the model. L_cd= Σ_k=1^K D(CC_s^k, CC_t^k) L_cmb= Σ_k=1^K D(CC_m,i^k, CC_m^k) for all i with 1 ≤ i ≤ N_s/N_b where N_b is the number of samples in a batch, CC_m,i^k denotes the average cluster centroid of class k computed from the i^th training batch, and CC_m^k = avg(CC_s^k, CC_t^k). We calculate the cluster-compacting loss and the cluster-separating loss for both the source and target domains using the source and target clusters, respectively, that are computed in the cluster computing stage. The inter-domain cluster discrepancy loss minimizes the cluster shift of the target domain from the source domain. To calculate the running combined loss, we first calculate the average of the cluster centroids of the source and target domains, which are computed in the cluster computing stage. We call these the global average cluster centroids. For each training batch, we compute the difference between the current average cluster centroids and the calculated global average cluster centroids. This measures how far the clusters predicted for the current batch deviate from the previously calculated reference clusters. The final loss function is the weighted sum of the classification loss, cluster-compacting loss, cluster-separating loss, inter-domain cluster discrepancy loss, and running combined loss: L =L_cls + β_1 (L_comp^s + L_comp^t) + β_2 (L_sep^s + L_sep^t) + β_3 L_cd + β_4 L_cmb where β_1, β_2, β_3, and β_4 are hyperparameters. Algorithm 1 illustrates the entire algorithm.
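The confident-prediction filter used in the cluster-centroid computing stage can be sketched as follows. This is an illustrative PyTorch-style snippet; the function name, the tensor shapes, and the use of a per-class threshold vector for M_ctr are our assumptions, not the authors' code:

```python
import torch

@torch.no_grad()
def select_confident_predictions(feats, probs1, probs2, cc_s, m_ctr, m_dis, tau=0.99):
    # feats:          (N, d) feature-extractor outputs for the target batch
    # probs1, probs2: (N, K) softmax outputs of the two classifiers
    # cc_s:           (K, d) source cluster centroids CC_s
    # m_ctr:          (K,) per-class mean intra-cluster distances (Eq. 7)
    # m_dis:          scalar mean classifier discrepancy (Eq. 8)
    probs = 0.5 * (probs1 + probs2)                  # combined bi-classifier prediction
    conf, pred = probs.max(dim=1)

    cond_softmax = conf > tau                                             # condition 1
    cond_centroid = torch.norm(feats - cc_s[pred], dim=1) < m_ctr[pred]   # condition 2
    cond_classifiers = torch.norm(probs1 - probs2, dim=1) < m_dis         # condition 3

    keep = cond_softmax & cond_centroid & cond_classifiers
    idx = keep.nonzero(as_tuple=True)[0]
    return idx, pred[idx]                            # indices and pseudo-labels
```

The returned pseudo-labeled samples would then be averaged per class to obtain the target centroids CC_t used in Eqs. (9) and (10).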
§.§ Data preprocessing and network inputs We use a special input format that allows for the inclusion of both morphological and temporal information. Initially, the ECG signal undergoes preprocessing to create the input data, which is then fed into the model. The classifier receives input from both the model's extracted features and the manually extracted time features, which are concatenated together to produce the ultimate classification outcome. The preprocessing consists of the following steps: (1) Signal denoising: We first filter the ECG signal by a bandpass filter with a passband of 3-20 Hz to eliminate the influence of power-line interference, muscle artifacts, baseline wander, and electrode contact noise. (2) Resampling: The ECG sampling frequency varies across different datasets, with the MITDB, INCARTDB, and ESTDB datasets having sampling frequencies of 360 Hz, 257 Hz, and 250 Hz, respectively. We use an FIR filter to resample the data, resulting in a unified sampling rate of 256 Hz to address the challenge of varying sampling rates. (3) Heartbeat segmentation: We partition the ECG signal into multiple small segments, utilizing the R-peaks as a reference point. To keep the input dimension fixed, we segment the ECG signal into segments of fixed length. We first compute the RR intervals from the R-peaks for all the ECG signals belonging to the source domain. Then, we compute the arithmetic mean (RR_mean) of the RR intervals. Suppose, R_i is the R-peak position of the i^th heartbeat. The corresponding segment starts at position (R_i-⌊1/2RR_mean⌋) and ends at position (R_i+⌊1/2RR_mean⌋). (4) Time feature extraction: Taking inspiration from Niu et al. <cit.>, we extract three time features, including the current RR-interval (RR_curr), an average of the pre-RR intervals (RR_pre) from the beginning to the current position, and an average of the last eight pre-RR intervals (RR_pre8) from the current position. § EXPERIMENTS §.§ Databases The proposed model has been trained with the MIT-BIH Arrhythmia Database (MITDB) and tested using the St. Petersburg INCART 12-lead Arrhythmia Database (INCARTDB) and the European ST-T Dataset (ESTDB). These are public databases that have been widely utilized for testing in arrhythmia heartbeat classification tasks. Both the cross-database and cross-channel paradigms are taken into account in this study. According to the American National Standards Institute/Association for the Advancement of Medical Instrumentation (ANSI/AAMI EC57: 1998) standard, every heartbeat can be categorized into one of five groups: Normal heartbeat (N), ventricular ectopic heartbeat (V), supraventricular ectopic heartbeat (S), fusion heartbeat (F), and unknown heartbeat (Q). This study investigates classifying four types of heartbeats: N, V, S, and F, as the Q-type data is inadequate in size and cannot be utilized as a reliable basis for evaluating the classification results. §.§.§ MIT-BIH Arrhythmia Database (MITDB) The MIT-BIH Arrhythmia Database contains 48 ECG recordings obtained from 47 subjects. Each record has two-channel signals of 30 minutes sampled at 360 Hz. Our training set excludes 4 paced records (102, 104, 107, 217), as per the ANSI/AAMI convention. As the normal QRS complex of ML II is typically prominent in the MIT-BIH database, this experiment only uses 44 ML II recordings. §.§.§ St. Petersburg INCART 12-lead Arrhythmia Database (INCARTDB) The St. Petersburg INCART 12-lead Arrhythmia Database contains 75 ECG recordings with 12 standard leads. 
Each recording is 30 minutes long and sampled at 257 Hz. §.§.§ European ST-T Dataset (ESTDB) The European ST-T Dataset consists of 90 records from 79 subjects. Each recording lasts two hours and is sampled at 250 Hz. §.§ Experimental setting The implementation of our method is carried out using the PyTorch framework. The optimization of the model parameters is achieved by utilizing the Adam optimizer, which is a widely-used stochastic gradient descent algorithm. The classification loss is measured using the weighted cross-entropy loss function. We set the batch size to 512 and the learning rate and weight decay to 0.001 and 0.0005, respectively. The number of arrhythmic beats is significantly lower compared to the number of normal beats in all three databases. For example, the percentage of normal, ventricular ectopic, supraventricular ectopic, and fusion beats in the MIT-BIH Arrhythmia Database is 89.48%, 6.96%, 2.76%, and 0.80%, respectively. To reduce the existing imbalances, we duplicate the data for ventricular ectopic, supraventricular ectopic, and fusion beats by factors of 2, 5, and 10, respectively, and incorporate them into the datasets. The augmentation by the same factor is performed for all three databases. Table I shows the number of ECG records, ECG segments, and samples for each of the heartbeat categories in all databases before and after augmentation. §.§ Results and discussion To evaluate the efficacy of our proposed approach, we first evaluate it against a method that has an identical network architecture and the same experimental settings as our proposed method but excludes the domain-adaptation aspect. Next, we evaluate five recent approaches <cit.> known for their high performance using the same network architecture and experimental setting as our proposed method. We compare our proposed method against these approaches. Moreover, we perform an ablation analysis to demonstrate the impact of the individual components of our proposed approach. We train our model using the lead II MITDB records and test our method on three datasets: DS 1, DS 2, and DS 3. DS 1 is used to test the model using a cross-database paradigm, where the source and target data are from different databases but come from the same channel. Lead II records from the INCARTDB constitute DS 1. DS 2 is used to test the model using a cross-database and cross-channel paradigm, where the source and target data come from different databases and channels. DS 2 consists of lead V5 INCARTDB records. Lead V5 records from the ESTDB make up DS 3. Although some recent domain-adaptation-based methods report results on ECG classification <cit.>, they use different groups as source-target within the same database or use non-public databases. None of them employ the same train-test configuration as this investigation. Table II presents a comparison of the overall accuracy of two methods: our proposed domain-adaptive method and the method that has the same network architecture and experimental settings as ours but does not include the domain-adaptation aspect. Our method improves the overall accuracy by 14.24%, 10.89%, and 10.21% on DS 1, DS 2, and DS 3, respectively. We show the average results of five trials, as every trial uses random training and test batches. The comparison results in terms of sensitivity (Se), positive predictive value (PPV), and F1 score are shown in Table III. 
Although the method without domain adaptation yields satisfactory results for normal heartbeats, it encounters major difficulties when dealing with arrhythmic beats. Our method achieves a significant increase in performance and reaches the best F1 scores on ventricular ectopic (79.66%), supraventricular ectopic (46.87%), and fusion (13.96%) compared to 11.15%, 0.51%, and 4.74%, respectively, achieved by the method without domain adaptation. From the confusion matrices (Fig. 3), we can see that although the method excluding domain adaptation identifies normal heartbeats satisfactorily, it identifies most of the arrhythmic heartbeats as normal beats. It identifies more than 99% of supraventricular ectopic beats and over 88% of fusion beats as normal beats in all three test datasets. Our proposed method shows a notable performance improvement in this particular scenario. Overall, we observe a minor enhancement in the detection of normal heartbeats, but a substantial improvement in the case of detecting arrhythmic heartbeats over the method without domain adaptation. In Table I, we see that although we increase the number of arrhythmic beats of all types, they still remain significantly lower compared to normal beats. As an illustration, out of the total number of beats, the percentage of fusion beats is only 6.46% in MITDB, 1.06% in INCARTDB, and 0.48% in ESTDB. The model finds it challenging to achieve satisfactory performance for arrhythmic heartbeats due to the considerably low number of samples available. We employ the same network architecture and experimental setup to evaluate the performance of five recent approaches that are recognized for their high accuracy <cit.>. They worked with different types of data and different types of applications than ours, but they achieved satisfactory performances. By utilizing the data and model employed in this experiment, we evaluate the results of those approaches against our proposed method. The comparisons of overall accuracy across the three datasets are demonstrated in Table IV. Our method outperforms other approaches and achieves the highest overall accuracy of 84.61%, 82.32%, and 76.44% on DS 1, DS 2, and DS 3, respectively. This is 8.53%, 4.17%, and 6.03%, respectively, higher than the second-best approach. Table V illustrates that our proposed method exhibits notable superiority in terms of sensitivity, PPV, and F1 score over all other approaches on the INCARTDB database in a cross-domain scenario. Although the method proposed by Sagawa et al. <cit.> achieves the highest F1 score of 92.47% (0.8% higher than our method) in detecting normal beats, our method performs significantly better in detecting arrhythmic beats. Missing even one arrhythmic beat can have catastrophic consequences, but identifying arrhythmias at an early stage and providing appropriate treatment can prevent heart failure. §.§.§ Ablation study An ablation study is shown in Table VI and Fig. 4. We remove the components of our proposed method, which are the key contributions of this study, one at a time, and evaluate the model to observe the impact of each component. We form four models (Model A, Model B, Model C, Model D) by removing one component at a time while keeping everything else the same. Model A is formed by excluding the two-stage training (cluster-centroid computation and adaptation). Model B eliminates the technique of selecting confident predictions in the cluster centroid computation stage. 
Model C removes the inclusion of two new objective functions (inter-domain cluster discrepancy loss and running combined loss) in the adaptation stage. Model D excludes the inclusion of three significant time features (RR_curr, RR_pre, RR_pre8) while maintaining all other aspects unchanged. Each of the four modules in the proposed model has an impact on the performance to some extent, as evident from Fig. 4 and Table VI. However, this impact varies across different modules. The two-stage training has a greater impact on performance than the others. Fig. 4 shows that eliminating two-stage training (Model A) leads to a significant reduction in performance, with an average overall accuracy decrease of 8.92% across the three datasets. If we remove the technique of selecting confident predictions (Model B), the average overall accuracy drops by 5.94%. The removal of the two new objective functions (Model C) results in an average overall accuracy reduction of 4.35%. Excluding time features (Model D) leads to an average overall accuracy decrease of 3.33% across the three datasets. Two-stage training leads to a greater enhancement in sensitivity, PPV, and F1 score among the four components (Table VI). The enhancement is more in detecting arrhythmic beats than normal beats. It increases the F1 score for detecting ventricular ectopic, supraventricular ectopic, and fusion beats by 23.32%, 25.24%, and 13.96%, respectively, on the INCARTDB database in a cross-domain paradigm. The contributions of the confident prediction selection technique, the two new objective functions, and the three time features in improving the performance of detecting arrhythmic beats are also significant. The technique of selecting confident predictions is used during the cluster-centroid computing stage, while two new objective functions are employed during the domain adaptation stage of the two-stage training. The cluster centroids obtained through the confidence prediction selection technique are utilized to compute the objective functions. Together, they create well-distinguishable clusters and minimize the difference between the source and target distributions. As a result, removing any of the three components leads to a significant drop in performance. Furthermore, the concatenation of significant time features with the deep features results in an increased feature diversity, ultimately leading to more accurate predictions by the model. §.§.§ Effect of hyperparmeters In our study, we examine the influence of hyperparameters by changing their values between 0 and 1. Fig. 5 displays the overall accuracy of our method on INCARTDB (cross-domain paradigm) for different values of the hyperparameters. We choose the values of the hyperparameters α, γ_1, γ_2, β_1, β_2, β_3, β_4 to be 0.5, 0.1, 0.1, 0.1, 0.1, 0.5, and 0.1, respectively, as they yield the best results for our method. The same hyperparameter values are used for all datasets (DS 1, DS 2, and DS 3) in our experiments. The overall findings demonstrate that the proposed method achieves satisfactory performance when compared to other methods across all three test datasets. It is worth noting, however, that the method's performance on fusion beats is limited by the excessively low number of samples. Nonetheless, our method exhibits a notable performance improvement in detecting arrhythmic beats when compared to other approaches. 
Furthermore, the experiments conducted in the cross-domain and cross-channel paradigms reveal that our method performs well across various databases and different ECG channels. § CONCLUSION This paper proposes a novel method for classifying ECG arrhythmias that effectively addresses the problem of insufficiently labeled training samples and data distribution shifts across different domains. We design a model based on residual networks and a bi-classifier to achieve results that are comparable to other state-of-the-art models while maintaining a better-balanced performance across various categories. To minimize distribution disparities across domains, we introduce a cluster optimization method that incorporates four distinct objective functions. We propose a novel technique for selecting confident predictions in the unlabeled target domain, which ultimately enhances the precision of separating clusters. Moreover, incorporating three significant time features into the final classifier layer and applying distributionally robust optimization during pre-training improves the model's ability to classify ECG arrhythmias. The proposed approach obviates the need for any annotations for new records and does not introduce any supplementary computational or storage resources during the inference phase. Our method has the potential to significantly improve the efficacy of deep learning models in various domains and can be readily adapted to unseen data. Our approach still has some limitations: when the collection of unlabeled samples is severely inadequate, the long-tail effect may prevent a successful improvement of minority-category performance. We plan to investigate this issue in our future work.
http://arxiv.org/abs/2306.08914v1
20230615073533
Optimal control of port-Hamiltonian systems: energy, entropy, and exergy
[ "Friedrich Philipp", "Manuel Schaller", "Karl Worthmann", "Timm Faulwasser", "Bernhard Maschke" ]
math.OC
[ "math.OC", "cs.SY", "eess.SY", "math.DS", "80M50, 93D20, 37J25" ]
We consider irreversible and coupled reversible-irreversible nonlinear port-Hamiltonian systems and the respective sets of thermodynamic equilibria. In particular, we are concerned with optimal state transitions and output stabilization on finite-time horizons. We analyze a class of optimal control problems, where the performance functional can be interpreted as a linear combination of energy supply, entropy generation, or exergy supply. Our results establish the integral turnpike property towards the set of thermodynamic equilibria, providing a rigorous connection of optimal system trajectories to optimal steady states. Throughout the paper, we illustrate our findings by means of two examples: a network of heat exchangers and a gas-piston system. Keywords. energy, entropy, exergy, port-Hamiltonian systems, optimal control, turnpike property, thermodynamics, dissipativity, passivity § INTRODUCTION Port-Hamiltonian systems provide a highly structured framework for energy-based modeling, analysis, and control of dynamical systems <cit.>. A particular feature of port-Hamiltonian systems is that solutions satisfy an energy balance and the supplied energy is given as a product of the input and the colocated output. This motivates and enables the formulation and in-depth analysis of optimal control problems, e.g., output stabilization or state transition, in which the supplied energy is minimized. Whereas the choice of this cost functional is physically meaningful, it leads to singular optimal control problems. However, for linear port-Hamiltonian systems, the port-Hamiltonian structure can be exploited to establish regularity of the optimality system <cit.> (potentially after adding a rank-minimal regularization term) and to analyze the asymptotics of optimal solutions w.r.t. the conservative subspace <cit.>. These results have been extended to infinite-dimensional linear systems <cit.> and to nonlinear reversible port-Hamiltonian systems <cit.>. However, when the thermodynamic properties of the control systems have to be taken into account, thermodynamic potentials other than the free energy, such as the internal energy and the entropy, have to be considered. In turn, the dynamics have to reflect both the energy conservation and the entropy creation due to the irreversible phenomena. The Hamiltonian formulation of controlled thermodynamic systems is an active field of research with various classes of systems ranging from dissipative Hamiltonian (or metriplectic) systems <cit.> for the so-called GENERIC framework, irreversible port-Hamiltonian systems <cit.>, to port-Hamiltonian systems defined on contact manifolds <cit.>, and symplectic manifolds <cit.>. Different control design methods have been developed for these systems: stabilization based on control of either thermodynamic potentials such as the availability function <cit.> or the entropy creation <cit.>, shaping of the entropy creation for irreversible port-Hamiltonian systems <cit.>, and structure-preserving stabilizing feedback of contact Hamiltonian systems <cit.>. In this work, we are concerned with optimal control of coupled reversible-irreversible port-Hamiltonian systems based on first preliminary steps conducted in our conference paper <cit.>.
In view of applications, due to the strong nonlinearity, only stationary optimal control problems have been considered for high-dimensional thermodynamic systems (arising, e.g., from discretizations of partial differential equations), see, e.g., <cit.>. In <cit.>, the authors discuss highways in state space as particular sets in which (or close to which) entropy-optimal solutions reside. For non-stationary problems, such a behavior is coined the turnpike property <cit.>. The main contribution of this work is the proof of the turnpike property for optimal control of reversible-irreversible port-Hamiltonian systems. To this end, we formulate the dynamic problem of energy, entropy, and exergy optimization. We show that, for long time horizons, optimal solution trajectories of this problem stay close to the set of thermodynamic equilibria for the majority of the time. The corresponding result is proven for output stabilization and for state-transition problems. We further analyze the underlying steady-state problem in detail and provide several numerical examples including a network of heat exchangers and a gas-piston system. This paper is structured as follows: In Section <ref>, we recall the definition of the class of reversible-irreversible port-Hamiltonian systems and show that two examples embed into this framework: a heat exchanger and a gas-piston system. We provide conditions in terms of the Hamiltonian energy which ensure that the set of thermodynamic equilibria is a manifold and characterize its dimension. Next, in Section <ref>, we consider and analyze the problem of optimal state transition, where optimality is understood as minimal energy supply, minimal entropy creation, or a combination of both as the exergy. In Section <ref>, we derive similar results for optimal output stabilization instead of state transition, demonstrating the generality of the developed tools. Finally, we illustrate our findings by reconsidering the two examples in numerical illustrations in Section <ref> before conclusions are drawn in Section <ref>. Notation. We denote the gradient of a scalar-valued function F:ℝ^n → ℝ by F_x and its Hessian by F_xx. The Poisson bracket with respect to a skew-symmetric matrix J∈ℝ^n× n of the differentiable functions v,w : ℝ^n→ ℝ is defined by {v,w}_J(x) := v_x(x)^⊤ Jw_x(x), x∈ℝ^n. Let K⊂ℝ^n. We write “α(x)≲β(x) for x∈ K” if there exists c>0 such that α(x)≤ cβ(x) for x∈ K. In addition, we write “α(x)≍β(x) for x∈ K” if α(x)≲β(x) and β(x)≲α(x) for x∈ K. § COUPLED REVERSIBLE-IRREVERSIBLE PH-SYSTEMS In this section we provide the system class we analyze and illustrate it by means of two examples. §.§ Definition of reversible-irreversible pH-Systems The state space will be given by an open set 𝕏⊂ℝ^n, n∈ℕ. As usual in finite-dimensional pH systems, input and output space coincide; here, inputs and outputs are elements of ℝ^m, m∈ℕ. The following definition of coupled reversible-irreversible pH systems was given in <cit.>.
A Reversible-Irreversible port-Hamiltonian system (RIPHS) is defined by (i) a locally Lipschitz-continuous map J_0 : 𝕏→ℝ^n× n of skew-symmetric Poisson structure matrices J_0(x), x∈𝕏, (ii) a differentiable Hamiltonian function H : 𝕏→ℝ, whose gradient H_x : 𝕏→ℝ^n is locally Lipschitz-continuous, (iii) a differentiable entropy function S : 𝕏→ℝ, whose gradient S_x : 𝕏→ℝ^n is locally Lipschitz-continuous and which is a Casimir function of the Poisson structure matrix function J_0, that is, J_0(x)S_x(x)=0 for all x∈𝕏, (iv) N constant skew-symmetric structure matrices J_k∈ℝ^n× n, k=1,…, N, (v) functions γ_1,…,γ_N : ℝ^2n→ℝ such that γ_k( · ,H_x(·)) : 𝕏→ℝ is positive and locally Lipschitz-continuous for all k=1,…, N, (vi) a locally Lipschitz-continuous input matrix function g : ℝ^2n→ℝ^n× m, the state equation ẋ=(J_0(x)+∑_k=1^N γ_k(x,H_x){ S,H}_J_k J_k)H_x(x)+g(x,H_x)u, and two conjugate outputs, corresponding to the energy and the entropy, respectively, defined by y_H := g(x,H_x)^⊤ H_x and y_S := g(x,H_x)^⊤ S_x. If J_0≡ 0 and N=1, the system is called an irreversible port-Hamiltonian system (IPHS). By means of direct calculations, any trajectory x∈ C^1(0,T;𝕏) satisfying the dynamics (<ref>) can be shown to obey both the energy balance d/dtH(x(t)) = y_H(t)^⊤ u(t) and the entropy balance d/dtS(x_t) = ∑_k=1^N γ_k(x_t,H_x(x_t)) {S,H}_J_k^2(x_t) + y_S(t)^⊤ u(t) where x_t = x(t). Here, the quantity y_H^⊤ u in (<ref>) represents the energy flow, i.e., the power supplied to the system, whereas y_S^⊤ u in (<ref>) stands for the entropy flow injected into the system. The positivity of the sum on the right-hand side of the entropy balance captures the irreversible nature of the particular phenomenon. For closed systems, i.e., where g(x,H_x)≡ 0, it can be directly read off these equations that energy is preserved and entropy is non-decreasing, hence the two fundamental laws of thermodynamics are satisfied. For u=0, a thermodynamic equilibrium is attained if the first term on the right-hand side of (<ref>) vanishes. The set of thermodynamic equilibria 𝒯 := {x∈𝕏 : ∑_k=1^N γ_k(x,H_x(x)){S,H}^2_J_k(x)=0} = {x∈𝕏 : {S,H}_J_k(x)=0 for all k=1,…,N}, plays a distinguished role in this paper. In the set definition above, the second equality follows from the positivity of γ. Notice that, at first glance, the power balance (<ref>) suggests that the system does not dissipate energy. However, in reversible-irreversible systems, dissipation corresponds to an increase of temperature and thus it implies entropy growth. In particular, any port-Hamiltonian system with linear dissipation may be written as an RIPHS by adding a virtual heat compartment. Note that the input map in Definition <ref> is linear in the input, contrary to the setting in <cit.> where it is affine. This choice corresponds to a reversible interconnection of the system with its environment as discussed in <cit.>. Given u∈ L^∞_loc(t_0,t_f), the local Lipschitz conditions on the functions J_0, H_x, S_x, γ_k, and g guarantee at least local existence and uniqueness of solutions of initial value problems with dynamics (<ref>). In the next lemma, we assume that the entropy S is linear in the state x. This is not a restrictive assumption as in many physical examples, where the Hamiltonian H contains the internal energies of subsystems, the total entropy is simply the sum of some coordinates of the state (see Example <ref> below). The set of thermodynamic equilibria is closed in 𝕏. If S(x) = e^⊤ x with some e∈ℝ^n and, for all x∈𝕏, the (n+1)× n matrix obtained by stacking H_xx(x) on top of H_x(x)^⊤ has rank n, then 𝒯 is a C^1-submanifold of 𝕏.
It is empty if and only if H_x()∩⋂_k=1^N(J_ke)^⊥ = ∅. Otherwise, its dimension is given by = n - [J_1e⋯ J_Ne]. As the Poisson bracket {S,H}_J_k is continuous for any k∈{1,…,N}, it is clear that is closed in . Now, assume that S(x) = e^⊤ x and that (<ref>) holds. Set v_k = J_ke, k=1,…,N. Note that {S,H}_J_k = S_x^⊤ J_kH_x = e^⊤ J_kH_x = -v_k^⊤ H_x. If v_1=…=v_N=0, then = H_x^-1(^n) = and thus = n, as claimed. Otherwise, r:=[v_1… v_n]≥ 1. We assume without loss of generality that v_1,…,v_r are linearly independent and define f : →^r by f(x) = [v_1,…,v_r]^⊤ H_x(x). Note that = f^-1({0}). We have Df(x) = [v_1,…,v_r]^⊤ H_xx(x), x∈. Let x_0∈, i.e., H_x(x_0)^⊤ v_k=0 for k=1,…,r and suppose that Df(x_0) is not surjective, i.e., there exists some w∈^r\{0} such that H_xx(x_0)[v_1,… v_r]w=0. Consequently, H_xx(x)H_x(x)^⊤[v_1,… v_r]w = 0, so that, by assumption, [v_1,… v_r]w = 0. But v_1,…,v_r are linearly independent so that w=0, a contradiction. Therefore, zero is a regular value of the C^1-function f, hence = f^-1({0}) is a C^1-submanifold of of the given dimension, cf. <cit.>. We close this introductory subsection with the observation that the class of reversible-irreversi­ble port-Hamiltonian systems is invariant under linear coordinate transforms. The proof follows by straightforwad computations and hence is omitted. If x solves (<ref>) and V∈^n× n is invertible, then z=Vx solves the reversible-irreversible pH-system ż=( J_0(z)+∑_k=1^Nγ_k(z, H_z){ S, H}_ J_k J_k) H_z(z) + g̃(z, H_z)u with energy Hamiltonian H(z) = H(x), entropy S(z) = S(x), J_0(z) = VJ_0(x)V^⊤, J_k = VJ_kV^⊤, and g(z, H_z) = Vg(V^-1z,V^⊤ H_z), γ_k(z, H_z) = γ_k(V^-1z,V^⊤ H_z). §.§ Tutorial Examples Next, we illustrate the class of (<ref>) systems with two examples from thermodynamics: a heat exchanger and a gas-piston system. [Heat exchanger] Let us consider a very simple model of a heat exchanger as depicted in Figure <ref>. The example is slightly adapted from <cit.>. The variables T_i,S_i, and H_i denote the temperature, the entropy and the energy in compartment i=1,2, respectively. Assuming that the walls are non-deformable and impermeable, the thermodynamic properties of each compartment are given by the following relation between the temperature and the entropy T_i(S_i) = T_ref· e^(S_i-S_ref)/c_i, i=1,2, where S_ref∈ℝ is a reference entropy corresponding to the reference temperature T_ref and c_i, i=1,2, are heat capacities, cf. <cit.>. The energy in each compartment, denoted by H_i(S_i), i=1,2, can be obtained by integrating Gibbs' equation dH_i = T_id S_i as a primitive of the function T_i(S_i), i=1,2. The state of our IPHS is x=[ S_1 S_2 ]^⊤ and the total energy (entropy) is given by the sum of the energies (entropies, resp.) in the compartments, i.e., H(x) = H_1(S_1) + H_2(S_2) and S(x) = S_1+S_2 = [ 1 1 ]x. We first assume that the outer walls are perfectly isolated, i.e., there is only heat flux through the conducting wall in between the two compartments given by Fourier's law q = λ(T_1-T_2). On the other hand, the change in energy is given by the heat flux and thus, by continuity of the latter, we have q = -ddt H_1(S_1(t)) = ddtH_2(S_2(t)). By Gibb's equation we have /ṭ(H_j∘ S_j) = Ḥ_j/Ṣ_j·/ṭS_j = T_j·/ṭS_j. Hence, λ(T_1-T_2)=-T_1ddtS_1(t) = T_2ddtS_2(t). Rearranging this equation yields the Hamiltonian-like formulation ddtS_1(t)S_2(t) = λ(1T_2(t)-1T_1(t)) 0-110_=:JT_1(t)T_2(t). To complete the definition of an uncontrolled IPHS, we set γ(x,H_x) = λT_1T_2. 
Then, as {S,H}_J = [1 1] J T_1T_2 = T_1-T_2, the above ODE is of the form (<ref>) with J_0≡ 0, N=1, and g(x,H_x)≡ 0. Entropy flow control. The first possible choice of an input would be to control the entropy flow u into or out of compartment one, cf. Figure <ref>, cf. <cit.>. In this case, we have g(x,H_x)≡ 10 and hence the dynamics d/dtS_1S_2 = γ(x,H_x) {S,H}_J(x) J H_x + 10 u. Control by thermostat. A choice which is realizable in practise is to connect the first compartment to a thermostat with a controlled temperature T_e. Consequently, the heat flow between this compartment and the thermostat can be described via q_e = λ_e(T_e-T_1), with λ_e>0 being a heat conduction coefficient. In this case, the entropy flow control (<ref>) is related to the thermostat temperature control T_e by the state dependent control transformation u = T_e - T_1T_1 [Gas-piston system, cf. <cit.>] Let us briefly recall the model of a gas contained in a cylinder closed by a piston subject to gravity. Contrary to the previous example of the heat exchanger which was purely thermodynamic, the gas-piston system will encompass the thermodynamic and the mechanical domain. Energy and co-energy variables. Consider the gas in the cylinder under the piston and assume that it is closed, i.e., there is no leakage. Then, the internal energy U of the gas may be expressed as a function of its entropy S and its volume V. For an ideal gas (see <cit.>), we have U(S,V)=3/2 NRT_0· e^β(S,V), with the ideal gas constant R and β(S,V)=S-Ns_0+RN ln(NRT_0)-RN ln(VP_0)/3/2RN where s_0, T_0,P_0 are positive reference values. Assume that the energy of the piston consists of the sum of the gravity potential energy and kinetic energy: H_mec = 1/2m p^2+mgz, where z denotes the altitude of the piston, p its kinetic momentum, and m its mass. If we assume that the gas fills all the volume below of the piston, then we have Az = V where A stands for the base area of the piston's head. Hence one may choose the vector of state or energy variables x=[S,V,p]^⊤ and the total energy of the system is given by H(x)=U(S,V)+H_mec(V,p) where H_mec(V,p) = 1/2m p^2+mg/AV. The differential of the energy defines the co-energy variables ∂ H/∂ S = ∂ U/∂ S=T_0 e^β(S,V) = T(S,V) ∂ H/∂ V =∂ U/∂ V + ∂ H_ mec/∂ V = -P(S,V)+ mg/A = -NRT_0 e^β(S,V)/V + mg/A ∂ H/∂ p =∂ H_mec/∂ p = p/m = v(p) where T is the temperature, P is the pressure, and v is the velocity of the piston. Dynamics: reversible and irreversible coupling. The model consists of three coupled balance equations. It may be written as a quasi-Hamiltonian system with a skew-sym­metric structure matrix depending on two co-energy variables, the velocity v and the temperature T: d/dt SVp = 00κ v/T00A-κ v/T-A0_=:J_irr(T,v) T-P+ mg/Av_H_x. The first equation is the entropy balance accounting for the irreversible creation of entropy associated with the non-reversible phenomena due to mechanical friction and viscosity of the gas when the piston moves. The second equation relates the variation of the volume of the gas to the velocity of the piston. The last equation is the momentum balance equation accounting for the mechanical forces induced by gravity, the pressure of the gas, and the total resistive forces which are assumed to be linear in the velocity of the piston, i.e., F_r(v)=κ v with a scalar constant κ≥ 0. 
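To make the quasi-Hamiltonian structure above concrete, the co-energy map and the right-hand side of the gas-piston dynamics can be evaluated numerically as follows. This is an illustrative Python sketch; the numerical constants and all function names are our own placeholder assumptions and are not taken from the paper:

```python
import numpy as np

# Placeholder constants (illustrative only): ideal-gas data and piston data.
N_mol, R, T0, P0, s0 = 1.0, 8.314, 300.0, 1.0e5, 0.0
m, g, A, kappa = 1.0, 9.81, 1.0e-2, 0.1

def coenergy(x):
    """Co-energy variables (T, -P + mg/A, v) of the gas-piston system."""
    S, V, p = x
    beta = (S - N_mol * s0 + R * N_mol * np.log(N_mol * R * T0)
            - R * N_mol * np.log(V * P0)) / (1.5 * R * N_mol)
    T = T0 * np.exp(beta)          # temperature, dH/dS
    P = N_mol * R * T / V          # ideal-gas pressure
    v = p / m                      # piston velocity, dH/dp
    return np.array([T, -P + m * g / A, v]), T, v

def rhs(x):
    """Uncontrolled quasi-Hamiltonian right-hand side d/dt (S, V, p) = J_irr(T, v) H_x."""
    H_x, T, v = coenergy(x)
    J_irr = np.array([[0.0,             0.0,  kappa * v / T],
                      [0.0,             0.0,  A            ],
                      [-kappa * v / T, -A,    0.0          ]])
    return J_irr @ H_x

# Quick check of the two balances along the flow (placeholder state (S, V, p)):
x = np.array([1.0, 0.02, 0.5])
H_x, T, v = coenergy(x)
x_dot = rhs(x)
print("dH/dt =", H_x @ x_dot)      # ~0 by skew-symmetry: energy is conserved
print("dS/dt =", x_dot[0])         # = kappa*v^2/T >= 0: entropy is created
```

By skew-symmetry of J_irr, the printed energy rate vanishes up to round-off, while the entropy rate equals κ v²/T ≥ 0, mirroring the energy and entropy balances above.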
The system (<ref>) may be written in the form of (<ref>) by decomposing further the structure matrix J_irr(T,v)=J_0+R(x,H_x,S_x)J_1 with the constant Poisson structure matrix associated with the reversible coupling J_0 = 00000A0-A0 Since S_x(x) = [1,0,0]^⊤, we have J_0S_x = 0, i.e., the entropy is a Casimir function of J_0, as required in Definition <ref>(iii). The second structure matrix corresponding to the dissipative phenomenon, that is, the friction of the piston is given by J_1= 001000-100. The modulating function for the irreversible phenomenon is R(x,H_x,S_x) = γ(x,H_x){S,H}_J_1, which is composed by the the Poisson bracket {S,H}_J_1 = [ 1 0 0 ]J_1 T-Pv = v, which is indeed the driving force inducing the mechanical friction and viscosity forces in the gas, and the positive function γ(x,H_x)=γ(T)=κ/T. Control input. For the purpose of illustration, we assume that the system is controlled by a heat flow u entering the first line of the dynamics (<ref>), similar to the entropy flow in (<ref>). Similar to the heat exchanger it would be more realistic to assume that this heat flow is generated by a thermostat at controlled temperature T_e, interacting with the gas through a heat conducting wall leading to the same control transformation (<ref>). § OPTIMAL STATE TRANSITIONS Let a horizon t_f≥ 0, an initial state x^0∈ and a terminal region Φ⊂ be given and assume that 𝕌⊂ℝ^m is compact and convex, containing the origin in its interior, i.e., 0∈int𝕌. Let α_1,α_2 > 0 and consider the prototypical optimal control problem phOCPmin_u∈(0,t_f;𝕌)∫_0^t_f [α_1y_H(t) - α_2 T_0y_S(t)]^⊤ u(t) dt s.t. (<ref>), x(0)=x^0, x(t_f)∈Φ , (<ref>), where T_0>0 is a reference temperature and (0,t_f;𝕌) denotes the space of all measurable functions u : [0,t_f]→𝕌. From this optimal control problem, we can recover three important choices for the cost functional: * α_1 = 1, α_2=0: Minimal energy supply. * α_1 = 0, α_2=1: Minimal entropy extraction. * α_1 = 1, α_2=1: Minimal exergy supply. Setting ℓ_α_1,α_2(x,u) = [α_1y_H - α_2 T_0 y_S]^⊤ u and using the balance equations (<ref>) and (<ref>), we obtain the identity ∫_0^t_fℓ_α_1,α_2(x,u) dt = α_1[H(x(t_f))-H(x^0)] + α_2T_0(S(x^0)- S(x(t_f)) +∑_k=1^N∫_0^t_fγ_k(x,H_x){S,H}^2_J_k dt). §.§ Optimal steady states In this part, we analyze the steady-state counterpart of (<ref>). In the literature considering entropy optimization, in particular in the context of distributed parameter systems, the steady state problem is often considered due to the high complexity of the dynamic problem, cf. e.g. <cit.> for a plug flow reactor model and <cit.> for a binary tray distillation process. In the turnpike result established in Subsection<ref>, we provide a rigorous connection between dynamic and static problem, stating that entropy-, energy-, and entropy-optimal solutions for the dynamic problem are close to optimal solutions of the steady state for the majority of the time. This in particular consolidates the central role of the steady state problem in the context of (dynamic) optimal control for irreversible systems. To simplify notation, we write f(x,u) = J(x)H_x(x) + g(x,H_x)u, where J(x) = J_0(x) + ∑_k=1^Nγ_k(x,H_x)·{S,H}_J_k(x) J_k. Then (<ref>) reads ẋ = f(x,u). A steady state of (<ref>) is a pair (x̅,u̅)∈×𝕌 for which f(x̅,u̅) = 0. In particular, the constant trajectory x(t)≡x̅ is a solution to ẋ = f(x,u̅). The steady state optimal control problem corresponding to (<ref>) reads as follows: min_(x̅,u̅)∈×𝕌ℓ_α_1,α_2(x̅,u̅) s.t. f(x̅,u̅)=0. 
Any solution of this problem will be called an optimal steady state for (<ref>). If (x̅,u̅) is a steady state of (<ref>), then ℓ_α_1,α_2(x̅,u̅) = α_2 T_0·∑_k=1^Nγ_k(x̅,H_x(x̅))·{S,H}_J_k^2(x̅)≥ 0. In particular, the optimal value of (<ref>) is always non-negative and it is zero if and only if the set = {(x̅,u̅)∈×𝕌 : g(x̅,H_x(x̅))u̅ = -J_0(x̅)H_x(x̅)} is non-empty. In this case, the set coincides with the set of optimal steady states for (<ref>). If H_x(x̂)∈{S_x(x̂)} for some x̂∈, we have (x̂,0)∈. Assuming f(x̅,u̅) = 0, we compute ℓ_α_1,α_2(x̅,u̅) = α_1· y_H^⊤u̅ - α_2T_0· y_S^⊤u̅ = [α_1· H_x(x̅) - α_2T_0· S_x(x̅)]^⊤ g(x̅,H_x(x̅))u̅ = -[α_1· H_x(x̅) - α_2T_0· S_x(x̅)]^⊤ J(x̅)H_x(x̅) = α_2T_0· S_x(x̅)^⊤ J(x̅)H_x(x̅). Now, we substitute J(x̅) with the right hand side from (<ref>), further we note that S_x(x̅)^⊤ J_0(x̅) = 0 (see Definition <ref> (iii)) and obtain (<ref>). Let (x̂,û)∈. Then J_0(x̂)H_x(x̂) + g(x̂,H_x(x̂))û = 0, and, as x̂∈, we also have {S,H}_J_k(x̂) = 0 for all k=1…,N. Hence f(x̂,û) = 0, which means that (x̂,û) is a steady state of (<ref>). The first part of the proposition hence further implies that ℓ_α_1,α_2(x̂,û) = 0, which shows that the optimal value of (<ref>) is indeed zero and that (x̂,û) is an optimal steady state for (<ref>). If (x̅,u̅) is any other optimal steady state for (<ref>), then (<ref>) implies that also {S,H}_J_k(x̅) = 0 for k=1,…,N. That is, (x̅,u̅)∈. If the optimal value of (<ref>) is zero and (x̅,u̅) is a minimizer, then (<ref>) implies that x̅∈ and thus (x̅,u̅)∈. Now suppose that there exists some x̂∈^n such that H_x(x̂)∈{S_x(x̂)}. Let H_x(x̂) = κ· S_x(x̂) for some α∈. Then we have {S,H}_J_k(x̂) = S_x(x̂)^⊤ J_k H_x(x̂) = 0 for all k=1,…,N, thus x̂∈. Moreover, J_0(x̂)H_x(x̂) = 0 by Definition <ref> (iii). This implies (x̂,0)∈. In the irreversible case, i.e., J_0≡ 0 and N=1, we have ×{0}⊂. In particular, if is non-empty, then so is and thus coincides with the set of all optimal steady states for (<ref>). Having defined the sets 𝒯 of thermodynamic equilibria, and the set 𝒮 being closely related to optimal steady states, a third set comes into play, which is the set of thermodynamic equilibria which can be losslessly maintained while obeying the dynamics of (<ref>): _ opt := {x̅∈ : ∃u̅∈𝕌 with (x̅,u̅)∈}. Note that in the irreversible case we have _ opt =. §.§ Turnpikes towards thermodynamic equilibria In this subsection we shall impose the following assumptions on the Hamiltonian H and the entropy function S. foo (a) H∈ C^2(,), the image := H_x() is open in ^n, and H_x: → is a diffeomorphism. (b) S(x)=e^⊤ x with some e∈ℝ^n\{0}, i.e., the entropy is linear in the state. Note that Lemma <ref> implies that under these assumptions the set of thermodynamic equilibria is a C^1-submanifold of ^n. We briefly verify Assumption <ref> for the two examples of the previous section. (a) We consider the heat exchanger from Example <ref>. Here, the state space is = ^2. The entropy S is given by S(x) = e^⊤ x, where e = [ 1 1 ]^⊤, and the Hamiltonian H is a C^2-function with H_x(S_1,S_2) = T_ refe^(S_1-S_ ref)/c_1e^(S_2-S_ ref)/c_2. Hence, = (0,∞)× (0,∞), and H_x : → is bijective with the continuously differentiable inverse function H_x^-1 : →, H_x^-1(T_1,T_2) = S_ ref 11 - (ln T_ ref)c_1c_2 + c_1ln T_1c_2ln T_2. (b) In case of the gas-piston system (Example <ref>), the entropy is a part of the state: S(x) = [ 1 0 0 ]x. Concerning the energy variables S, V, and p, we allow S and p to assume any real value and V only assumes values in an interval (0,V_max), V_max>0. 
The state space is thus given by = × (0,V_max)×. The image of H_x then equals = { T-P+mg/A : T>0, P > RNT/V_max}×. This easily follows from the representation C_0· e^2S/3RN(P_0V)^-2/3-RNP_0(P_0V)^-5/3 + 0mg/A of the first two components of H_x(S,V,p), where C_0 = T_0^5/3(RN)^2/3e^-2s_0/3R. The inverse map H_x^-1 : → is given by H_x^-1(T,-P+mgA,v) = [ Ns_0 + 5RN/2lnP_0/PlnT/T_0 , RNT/P , mv ]^⊤, which is clearly continuously differentiable. In other works treating the gas-piston system, see, e.g., <cit.>, an additional energy variable in the state, namely the position z of the piston is considered. The corresponding co-energy variable is the constant gravitational force F_g = mg. Hence, in this extended set of coordinates, the gradient of the Hamiltonian H_x is not a diffeomorphism. However, the weaker condition in Lemma <ref> is satisfied and implies that the set of thermodynamic equilibria is also a C^1-submanifold in this case. If {e}∩≠∅, the curve H_x^-1({e}∩) is contained in _ opt and thus, in particular, in . If x̂∈ H_x^-1({e}∩), then we have H_x(x̂)∈{e} = {S_x(x̂)}. Hence, (x̂,0)∈ by Proposition <ref>, which implies x̂∈_ opt. In the case of the heat exchanger (Example <ref>), {e}∩ = (Je)^⊥∩ = {(,)^⊤ : > 0}. It is then easily seen that = _ opt = H_x^-1((Je)^⊥) coicides with the affine subspace v_0 + {v_1}, where v_0 = S_ ref· e and v_1 = [c_1,c_2]^⊤. In particular, = _ opt = {e}, if c_1=c_2. The proof of the following proposition can be found in the Appendix. Let V := ⋂_k=1^N(J_ke)^⊥ and assume that V∩≠∅. Then for each compact set K⊂ we have (x,)^2 ≲ ∑_k=1^Nγ_k(x,H_x){S,H}_J_k^2(x), x∈ K. Next, we introduce the manifold turnpike property that we will prove in the remainder of this section for state transition problems, and in the following section for output stabilization problems. This turnpike property resembles an integral version of the measure turnpike property introduced in <cit.>. Let ℓ∈ C^1(×𝕌), ∈ C^1(), Φ⊂ a closed set and f∈ C^1(^n+m). We say that a general OCP of the form min_u∈(0,t_f;𝕌) φ(x(T)) + ∫_0^t_fℓ(x(t),u(t)) dt s.t. ẋ = f(x,u), x(0)=x^0, x(t_f)∈Φ has the integral turnpike property on a set S_ tp⊂ with respect to a manifold ℳ⊂^n if for all compact X^0⊂ S_ tp there is a constant C>0 such that for all x^0∈ X^0 and all t_f>0, each optimal pair (x^⋆ ,u^⋆) of the OCP (<ref>) with initial datum x^⋆(0)=x^0 satisfies ∫_0^t_f^2(x^⋆(t),ℳ) dt ≤ C. Due to the lack of uniform bounds on the Hessian H_xx of the Hamiltonian, Proposition <ref> holds only on compact sets. To apply this result to optimal trajectories and to render the involved constants uniformly in the horizon, we now assume that optimal trajectories are uniformly bounded in the horizon. For any compact set of initial values X^0⊂ there is a compact set K⊂ such that for all horizons t_f each corresponding optimal state trajectory of (<ref>) with initial datum x^0∈ X^0 and horizon t_f is contained in K. Define the following set of initial values which can be controlled to a state x̅∈_ opt that can be further steered to the terminal set Φ. 𝒞(_ opt,Φ) := {x^0∈ℝ^n :∃ x̅∈_ opt s.t. ∃ t_1≥ 0,u_1 ∈(0,t_1,𝕌) s.t. x(t_1,u_1,x^0) = x̅, ∃ t_2≥ 0,u_2 ∈(0,t_2,𝕌) s.t. x(t_2,u_2,x̅)∈Φ}. Let Assumption as:comp hold and furthermore assume that (_ opt,Φ)≠∅. Then, for any pair α_1,α_2 > 0 the OCP (<ref>) has the integral turnpike property on the set S_ tp = (_ opt,Φ) with respect to . The proof follows the lines of the proof of Theorem 8 in <cit.>. Let X^0⊂ S_tp be compact and let x^0∈ X^0. 
By optimality, any control u∈(0,t_f;𝕌), which is feasible for (<ref>) with corresponding state trajectory x= x( · ;u,x^0) satisfies ∫_0^t_fℓ_α_1,α_2(x^*(t),u^*(t)) dt ≤∫_0^t_fℓ_α_1,α_2(x(t),u(t)) dt. We choose the constructed control u(t):= u_1(t) t∈ [0,t_1] u̅ t ∈ (t_1,t_f-t_2) u_2(t-(t_f-t_2)) t∈ [t_f-t_2,t_f] . where t_1, t_2, u_1 and u_2 are as defined in (_ opt,Φ). This control steers the system from x^0 via x̅∈_ opt (where it remains from time t_1 until t_f - t_2) to the terminal region Φ. The middle part u̅ is the steady state control that is required to stay at the controlled equilibrium x̅ with zero stage cost, i.e., (x̅,u̅)∈. Hence, we have ∫_0^t_fℓ_α_1,α_2(x(t),u(t)) dt = (∫_0^t_1 + ∫_t_f-t_2^t_f) ℓ_α_1,α_2(x(t),u(t)) dt. Note that this expression is in fact independent of t_f. Denote its norm by C_1. Making use of Proposition <ref> and (<ref>), we obtain ∫_0^t_f^2(x^⋆,) dt ≤ C_2∫_0^t_f∑_k=1^Nγ_k(x^⋆,H_x(x^⋆)){S,H}_J_k^2(x^⋆) dt ≤C_2α_2T_0(C_1 - α_1[H(x^⋆(t_f)) - H(x^0)] - [S(x^0) - S(x^⋆(t_f))]). Since H and S are continuous, the right-hand side can be estimated independently of t_f due to Assumption <ref>.   In the proof of Theorem <ref>, reachability of a controlled equilibrium is used to bound the cost functional uniformly with respect to the time horizon t_f. This reachability can be relaxed to exponential reachability of the subspace, i.e., for all x^0 there is a measurable control u:[0,∞)→𝕌 such that (x(t;x^0,u),x̅)≤ Me^-ω tx^0 with M ≥ 1 and ω > 0, cf. e.g. <cit.>. Correspondingly, one has to impose reachability of the terminal state from a ball around the steady state with radius depending on the time horizon and the compact set of initial values. Other decay rates are also possible, as long as they are integrable on the positive real line. § OPTIMAL OUTPUT STABILIZATION Let a horizon t_f≥ 0, an initial state x^0∈ℝ^n be given and assume that 𝕌⊂ℝ^m is compact and convex. Consider an output matrix C∈^p× n and y_ref∈im(C). Let α_1,α_2 ∈^+ and consider the optimal control problem with phOCP-stabilizationmin_u∈(0,t_f;𝕌)∫_0^t_f[Cx(t) - y_ref^2 + ℓ_α_1,α_2(x(t),u(t))] dt s.t. (<ref>), x(0)=x^0 Compared to (<ref>), we do not consider a state transition with a terminal set but aim to stabilize an output y_ref in the cost functional. Again, we use the energy balance to reformulate the OCP, where ℓ_α_1,α_2 is the stage cost in (<ref>): ∫_0^t_fℓ_α_1,α_2(x,u) dt= α_1[H(x(t_f))-H(x(0))] + ∫_0^t_fCx(t) - y_ref^2 dt + α_2T_0(S(x(0))- S(x(t_f)) + ∫_0^t_fγ(x,H_x){S,H}^2_J dt) The long-term behavior is now governed by two terms. As in (<ref>), the term ∫_0^t_fγ(x,H_x) {S,H}^2_J dt corresponds to the distance to the set of thermodynamic equilibria, as shown in Subsection <ref>. However, we obtained the additional output stabilization term ∫_0^t_fCx(t) - y_ref^2 dt penalizing the distance to the preimage of y under C as shown in the following. Let y∈im(C). Then Cx-y ≍ (x,C^-1{y}), x∈^n, where C^-1{y} is the preimage of y under C. Let A = C^⊤ C. By <cit.> we have z^⊤ Az≍^2(z, A) for z∈^n. Setting z = x - x_0, we note that (z, A) = (x,x_0+ C) = (x,C^-1{y}) and z^⊤ Az = C(x-x_0)^2 = Cx - y^2. We now provide the turnpike result for the output stabilization problem. Let Assumption as:comp hold for the problem (<ref>). Suppose (_ opt',^n)≠∅, where _ opt' := _ opt∩ C^-1{y_ ref}. Then, for any pair α_1,α_2 > 0 the OCP (<ref>) has the integral turnpike property on the set S_tp = (_ opt',^n) with respect to and C^-1{y_ ref}. 
If is a subspace, then this turnpike property holds with respect to ∩ C^-1{y_ref}. The proof follows analogously to the proof of Theorem <ref>. As we do not have to fulfill a terminal constraint, we can consider the constructed control u(t) := u_1(t) for t∈ [0,t_1] and u(t) := u̅ for t ∈ (t_1,t_f], where u_1 steers x^0∈(_opt',^n) into _opt'. This construction allows us (analogously to the argumentation in the proof of Theorem <ref>) to bound the cost of the constructed trajectory by a constant C_1≥ 0 independent of the horizon t_f>0. Then, an optimality argument combined with Assumption <ref> yields ∫_t_0^t_fCx^*(t) - y_ref^2 + ∑_k=1^Nγ_k(x^⋆,H_x(x^⋆)){S,H}_J_k^2(x^⋆) dt ≤ C_2 with C_2≥ 0 independent of t_f. Proposition <ref> and Lemma <ref> yield ∫_t_0^t_f(x^*(t),C^-1{y_ref}) + (x^*(t),) dt ≤ C_3 with C_3≥ 0 independent of t_f. This implies the claimed manifold turnpike property w.r.t. C^-1{y_ref} and . If is a subspace, we can invoke Lemma <ref> such that for all x∈^n, (x,C^-1{y_ref}∩) ≲(x,C^-1{y_ref}) + (x,). Inserting this inequality into (<ref>) implies the manifold turnpike property w.r.t. the intersection ∩ C^-1{y_ref}. We briefly comment on an application of this result to a heat exchanger network, which we will later also inspect numerically in Subsection <ref>. To this end, consider a heat exchanger network consisting of three identical subsystems. A straightforward extension of the modeling performed in Example <ref> reveals that for a system of three coupled heat exchangers the thermodynamic equilibria are given by the two-dimensional subspaces 𝒯_1 = {(S_1,S_2,S_3) : T_1=T_2} and 𝒯_2 = {(S_1,S_2,S_3) : T_2=T_3}. Note that, for simplicity, we assume that the constants in the temperature-entropy relation are identical for all subsystems. Clearly, we have the one-dimensional subspace 𝒯 := 𝒯_1 ∩𝒯_2 = {S_1=S_2=S_3}. Let us briefly discuss three possible output configurations in view of the OCP (<ref>): 1) No output in the cost functional: As the output term is not present, setting Φ = 𝕏 in Theorem <ref> yields a subspace turnpike towards the one-dimensional subspace 𝒯. 2) Output stabilization with scalar output and prescribing the temperature (or equivalently the entropy) in subsystem i∈{1,2,3}: Then C^-1{y_ref} = {S_i= y_ref}, i∈{1,2,3}, and we have the zero-dimensional turnpike manifold 𝒯∩ C^-1{y_ref} = {S_1=S_2=S_3=y_ref}. Such an example will be considered in Subsection <ref>. 3) Stabilizing the temperature/entropy in two subsystems: Here C∈^2× 3 and C^-1{y_ref} = {S_i = (y_ref)_i, S_j =(y_ref)_j } for some i,j∈{1,2,3}, i≠ j, such that the intersection with 𝒯={S_1=S_2=S_3} is empty whenever (y_ref)_i≠ (y_ref)_j. In this case, Theorem <ref> is not applicable. However, we will present a numerical example with a network of heat exchangers in Subsection <ref> and observe a turnpike-like behavior towards an optimal tradeoff. § NUMERICAL RESULTS In this part, we conduct an extensive numerical case study to illustrate the results of Sections <ref> and <ref>, in particular the turnpike property proven in Theorem <ref> for state-transition problems and in Theorem <ref> for output stabilization. §.§ Set-point transition for a heat exchanger First, we consider a heat exchanger with entropy flow control (<ref>) to illustrate the manifold turnpike proven in Theorem <ref>. We set T_ref=c_1=c_2=1 and S_ref = 0 and obtain the temperature-entropy relation T_i = e^S_i, i=1,2.
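Before discussing the results, we sketch how optimal trajectories of this kind can be reproduced in principle. The following minimal Python sketch is only illustrative: it assumes the two-compartment dynamics Ṡ_1 = -λ(T_1-T_2)/T_1 + u, Ṡ_2 = λ(T_1-T_2)/T_2 with T_i = e^S_i, heat conduction coefficient λ = 1 and the entropy-flow input u acting on the first compartment, and it discretizes the set-point transition considered below (from T_1=T_2=1 to T_1=T_2=20 with 𝕌 = [-10,10]) by crude single shooting with piecewise constant controls; the horizon, step count and solver settings are arbitrary choices and not those used to produce the figures.

import numpy as np
from scipy.optimize import minimize

lam = 1.0                      # heat conduction coefficient (assumed)
t_f, N = 10.0, 50              # horizon and number of control intervals (illustrative)
dt = t_f / N
S0 = np.array([0.0, 0.0])      # initial entropies, i.e. T_1 = T_2 = 1
S_target = np.log(20.0) * np.ones(2)   # target entropies, i.e. T_1 = T_2 = 20

def rhs(S, u):
    # entropy dynamics of the controlled two-compartment heat exchanger (assumed form)
    T = np.exp(S)
    q = lam * (T[0] - T[1])    # heat flux across the wall
    return np.array([-q / T[0] + u, q / T[1]])

def simulate(u_seq):
    # explicit Euler integration; returns terminal state and accumulated entropy production
    S, sigma = S0.copy(), 0.0
    for u in u_seq:
        T = np.exp(S)
        sigma += lam * (T[0] - T[1]) ** 2 / (T[0] * T[1]) * dt   # gamma * {S,H}_J^2
        S = S + dt * rhs(S, u)
    return S, sigma

cost = lambda u_seq: simulate(u_seq)[1]            # alpha_1 = 0, T_0 alpha_2 = 1
terminal = lambda u_seq: simulate(u_seq)[0] - S_target

res = minimize(cost, 0.5 * np.ones(N), method="SLSQP",
               bounds=[(-10.0, 10.0)] * N,
               constraints=[{"type": "eq", "fun": terminal}],
               options={"maxiter": 300})
print("entropy produced along the optimized trajectory:", res.fun)

A collocation-based transcription with a dedicated OCP solver would be preferable in practice; the sketch above only illustrates the structure of the problem, namely the entropy-production stage cost, the input bounds, and the terminal constraint.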
Correspondingly, the manifold of thermodynamic equilibria is actually a subspace as the bracket reads {S,H}_J(x) = T_1(S_1)-T_2(S_2) and thus 𝒯 = {(S_1,S_2) : T_1(S_1) = T_2(S_2)} = {(S_1,S_2) : S_1=S_2}. The control constraint set is given by 𝕌 = [-10,10] and we aim to perform a state transition between two thermodynamic equilibria: T_1^0 = T_2^0 = 1 and T_1^t_f=T_2^t_f = 20. In view of (<ref>), the choice of α_1 in the cost functional does not matter, as the initial and terminal state are fixed. Thus, we choose α_1=0 and T_0α_2=1. In view of the optimal control problem (<ref>), this corresponds to the initial value in entropy variables x^0 = [ 0 0 ]^⊤ and the terminal set Φ = {[ ln20 ln20 ]^⊤}, by means of the relation T_i = e^S_i, i=1,2. It is clear that this state transition is only possible through providing heat—or, equivalently, entropy—to the first compartment, cf. Figure <ref>. In Figure <ref>, we show the optimal state, control and the Poisson bracket corresponding to entropy creation. It is clear that we cannot perform the state transition in the set of thermodynamic equilibria, as by the form of the input vector in (<ref>), no control action that is non-zero leaves 𝒯 invariant. However, for increasing time horizons, the state trajectories evolve closer to the manifold, as the necessary rate of entropy supply to perform the state transition in the given horizon can be chosen smaller and smaller. Furthermore, we observe a turnpike behavior of the control towards zero. As mentioned above, this is the only control that leaves the set of thermodynamic equilibria invariant. It can be seen in the upper plot of Figure <ref> that the time derivative of the individual entropies in the compartments is constant for the majority of the time. This behavior is called a velocity turnpike and is also central in mechanical systems with symmetries, cf. <cit.>. Here, this velocity turnpike behavior can be explained as follows: the Poisson bracket is mostly constant and small—the state has a turnpike towards the manifold—and the control is mostly constant and small—the zero control is the only control that leaves this manifold invariant—and thus the dynamics imply that ẋ_1 = Ṡ_1≈const. and ẋ_2=Ṡ_2≈const. The corresponding co-energy variables given by the temperatures H_x(S_1,S_2)=(T_1,T_2) are depicted in Figure <ref>. Due to the algebraic relation T=exp(S), the upper plot of Figure <ref> shows exponential evolution of the temperature. Further, as can be observed in the lower plot, the temperature is moving further and further away from the subspace. The reason is that, by means of the turnpike property, we have an optimal rate of travel in the state variable, that is, e.g. for the first compartment, const. = Ṡ_1 = (T_1-T_2)/T_1. For increasing temperatures T_1, the latter fraction can only be constant if also T_1-T_2 increases. §.§ Stabilization of a heat exchanger We modify the previous numerical example and do not impose any terminal constraint, i.e., Φ = ℝ^n, and add an output stabilization term to the cost functional in the spirit of Section <ref>. We aim to track a desired reference temperature of 25 degrees. For the cost functional in (<ref>), we choose α_1=0 and T_0α_2=1, i.e., we minimize the entropy production. To obtain a linear output of the system state, we translate the target temperature into an entropy, that is, we aim to minimize, in addition to the entropy creation, the term |S_2(t) - S_ref|^2 = |Cx(t) - S_ref|^2 with S_ref=log(25) and C=[0 1].
Correspondingly, we have C^-1{S_ref} = {(S_1,S_2)∈ℝ^2 : S_2=S_ref}, which is a one-dimensional affine subspace of ℝ^2. In addition, the manifold of thermodynamic equilibria is 𝒯={(S_1,S_2)∈ℝ^2:S_1=S_2}. Thus, Theorem <ref> yields a turnpike w.r.t. C^-1{y_ref}∩𝒯 = {(S_1,S_2)∈ℝ^2 : S_1=S_2=S_ref}. This can be observed in Figure <ref>. There, both entropies approach the target entropy S_ref=log(25)≈ 3.22. In order to approach this value, the entropy (or equivalently, the temperature) in the first compartment has to be increased. This yields an increase of the entropy (or temperature) in the second compartment due to the heat flux across the wall, inevitably coupled to entropy generation, cf. bottom left of Figure <ref>. Having reached the desired target entropy in the second compartment, the control is switched off such that the system is at equilibrium with u=0 and no entropy is generated. Thus, both integrands in the cost functional (approximately) vanish for this state: the output term in the cost vanishes as S_2 ≈ S_ref and the entropy production vanishes as S_1=S_2 and thus {S,H}_J = T_1 - T_2 ≈ 0. §.§ Set-point transition for a gas-piston system Next, we consider the gas-piston system of Example <ref>. As an initial configuration of intensive and extensive variables in the thermodynamic domain, we choose S(0) = Ns_0, V(0) = NRT_0/P_0 such that β(S(0),P(0))=0 as defined in (<ref>) and T(0)=T_0, P(0)=P_0. Moreover, we assume that the mechanical subsystem is at equilibrium, that is v(0)=0 and p(0)=0. The mass is also chosen such that the momentum ODE (last line of (<ref>)) is in equilibrium for zero velocity, i.e., m = AP_0/g = 5.1644 kg. This yields a controlled equilibrium of the system (<ref>) endowed with the input map (<ref>) when choosing the external temperature u = T_0. We summarize the chosen parameters in Table <ref>. We now aim to steer the piston to the target volume V(t_f) = 1.3V(0) with target zero momentum p(t_f) = 0. Analogously to Subsection <ref> and in view of (<ref>), the choice of α_1 does not matter due to fixed initial and terminal state, so we set α_1=0 and T_0α_2=1. The optimal intensive variables, the corresponding extensive quantities, the entropy production by means of the Poisson bracket and the optimal control are depicted in Figure <ref>. In order to change the volume of the piston, we need to induce a temperature increase by means of the heating jacket. As a result, the volume increases, whereas the pressure stays constant. In view of Theorem <ref>, we observe a turnpike in the velocity component, as {S,H}_J_1 = v, that is, the manifold of thermodynamic equilibria consists of the states with zero velocity. The longer the time horizon, the smaller the velocity for the majority of the time. §.§ Network of heat exchangers Last, we present an example with five compartments exchanging heat, which are coupled as depicted in Figure <ref>. The corresponding state variables are the entropies in the compartments S_i, i=1,…,5, with Hamiltonian energies H_i(S_i) = e^S_i, i=1,…,5. Along the lines of Example <ref> and Subsections <ref> and <ref>, we get by Fourier's law and continuity of the heat flux for the first, fourth and fifth compartment λ_1(T_1-T_2) = -d/dt H_1(S_1(t)) = -T_1d/dtS_1(t), λ_3(T_3-T_4) = -d/dt H_4(S_4(t)) = -T_4d/dtS_4(t), λ_4(T_3-T_5) = -d/dt H_5(S_5(t)) = -T_5d/dtS_5(t), where λ_k denotes the heat conduction coefficient of the k-th coupling interface, the interfaces being ordered as (1,2), (2,3), (3,4), (3,5).
For the second compartment, we compute T_2d/dtS_2(t) = d/dt H_2(S_2(t)) = λ_1(T_1-T_2)-λ_2(T_2-T_3), and for the third compartment, T_3d/dtS_3(t) = d/dt H_3(S_3(t)) = λ_2(T_2-T_3)-λ_3(T_3-T_4)- λ_4(T_3-T_5). Thus, corresponding to the four coupling interfaces between the five compartments we define the skew-symmetric structure matrices J_1 = [ 0 -1 0 0 0; 1 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0 ], J_2 = [ 0 0 0 0 0; 0 0 -1 0 0; 0 1 0 0 0; 0 0 0 0 0; 0 0 0 0 0 ], J_3 = [ 0 0 0 0 0; 0 0 0 0 0; 0 0 0 -1 0; 0 0 1 0 0; 0 0 0 0 0 ], J_4 = [ 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 -1; 0 0 0 0 0; 0 0 1 0 0 ] and the positive functions γ_1(x,H_x) = λ_1/(T_1T_2), γ_2(x,H_x) = λ_2/(T_2T_3), γ_3(x,H_x) = λ_3/(T_3T_4), γ_4(x,H_x) = λ_4/(T_3T_5). The corresponding Poisson brackets giving rise to the irreversible phenomena read {S,H}_J_1 = T_1-T_2, {S,H}_J_2 = T_2-T_3, {S,H}_J_3 = T_3-T_4, {S,H}_J_4 = T_3-T_5, such that d/dtS_1(t) = -γ_1(x,H_x){S,H}_J_1 T_2, d/dtS_4(t) = -γ_3(x,H_x){S,H}_J_3 T_3, d/dtS_5(t) = -γ_4(x,H_x){S,H}_J_4 T_3 and d/dtS_2(t) = γ_1(x,H_x){S,H}_J_1T_1-γ_2(x,H_x){S,H}_J_2 T_3, d/dtS_3(t) = γ_2(x,H_x){S,H}_J_2T_2-γ_3(x,H_x){S,H}_J_3 T_4 -γ_4(x,H_x){S,H}_J_4 T_5. Eventually, we obtain with x=(S_1,S_2,S_3,S_4,S_5) the dynamics d/dtx(t) = (∑_i=1^4γ_i(x,H_x){S,H}_J_iJ_i)H_x(x(t)). We endow our system with an entropy flow control at the first, the fourth and the fifth compartment and aim to track the temperature in these compartments. More precisely, we add the input term g(x,H_x)u= [ 1 0 0; 0 0 0; 0 0 0; 0 1 0; 0 0 1 ][ u_1; u_2; u_3 ] and define the output stabilization term Cx-y_ref^2 = [ 1 0 0 0 0; 0 0 0 1 0; 0 0 0 0 1 ]x - [ S_1,ref; S_4,ref; S_5,ref ]^2 for the cost functional. If S_1,ref≠ S_4,ref or S_1,ref≠ S_5,ref, then Theorem <ref> does not apply as C^-1{[ S_1,ref, S_4,ref, S_5,ref ]^⊤} = {x∈^5 : x_1 = S_1,ref, x_4 = S_4,ref, x_5=S_5,ref} and 𝒯 = {x∈^5 : x_1=x_2=…=x_5} such that C^-1{[ S_1,ref, S_4,ref, S_5,ref ]^⊤}∩𝒯_opt⊂ C^-1{[ S_1,ref, S_4,ref, S_5,ref ]^⊤}∩𝒯 = ∅. The results are shown in Figure <ref>. We see that S_1, S_4 and S_5 do not approach the given reference value, as this value would not allow for zero entropy creation. We rather observe a trade-off between the terms in the cost functional: The temperature (or equivalently the entropy) in the fifth and the first compartment is higher than the given reference, whereas in the fourth compartment the temperature is lower than desired. As can be seen in the bottom right plot of Figure <ref>, also the entropy creation is not small for the majority of the time, as we have a trade-off between output stabilization and entropy creation. The reason for this trade-off is that due to (<ref>) there is no state for which both terms in the cost functional vanish, which in particular prohibits an application of Theorem <ref>. However, we still can observe a turnpike behavior, which should be investigated in future work. § CONCLUSION We have considered an optimal control problem, intrinsically defined as minimizing the energy supply to the system, the irreversible entropy creation, or a linear combination of both, i.e., the exergy destruction. To this end, we have formulated the physical model of the system as a reversible-irreversible port-Hamiltonian system, which is defined w.r.t. a quasi-Poisson bracket and two functions: the total energy acting as a generating function and the total entropy function.
We have characterized optimal state-control pairs of the steady state problem in terms of the manifold of thermodynamic equilibria. For dynamic state-transition and output stabilization problems we have derived conditions, under which the optimal solutions of the dynamic problem reside close to the manifold of thermodynamic equilibria for the majority of the time. Last, we have illustrated our results by means of various examples, including purely irreversible systems such as a network of heat exchangers or a reversible-irreversible gas-piston problem consisting of coupled mechanical and thermodynamic systems. abbrv § DISTANCES The proof of the following proposition can be found in Appendix <ref>. For a closed convex set M⊂^n we denote by P_M the orthogonal projection onto M. Let V_1,…,V_N⊂^n be linear subspaces. Then (x,⋂_k=1^N V_k) ≍ ∑_k=1^N (x,V_k) for x∈^n. Since (x,V_j)≤(x,⋂_k=1^NV_k) for j=1,…,N, it is evident that the left-hand side of (<ref>) is not smaller than 1/N∑_k=1^N(x,V_k). Moreover, it obviously suffices to prove the opposite inequality for N=2. For this, we set V = V_1 and W = V_2. Note that for any subspace U we have (x,U) = P_U^⊥x. Suppose that the claim is false. Then there exists a sequence of vectors (x_ℓ) such that for all ℓ∈ P_(V∩ W)^⊥x_ℓ > ℓ(P_V^⊥x_ℓ + P_W^⊥x_ℓ). This in particular implies that y_ℓ := P_(V∩ W)^⊥x_ℓ≠ 0. Note that P_V^⊥y_ℓ = P_V^⊥x_ℓ as V^⊥⊂ (V∩ W)^⊥. Setting z_ℓ := y_ℓ/y_ℓ, it follows that P_V^⊥z_ℓ + P_W^⊥z_ℓ < 1ℓ for ℓ∈. As z_ℓ=1 for all ℓ∈, we may assume that z_ℓ→ z as ℓ→∞ for some vector z with z=1. The latter inequality then shows that P_V^⊥z = P_W^⊥z = 0 and thus z∈ V∩ W. But z_ℓ∈ (V∩ W)^⊥ for ℓ∈, which implies z=0, contradicting z=1. Let V⊂^n be a linear subspace such that V∩∅. Then we have (x,H_x^-1(V∩)) ≲ (H_x(x),V), x∈ K, for any compact set K⊂. Suppose that the claim is false. Then there exist a compact set K⊂ and a sequence (x_n)⊂ K such that (x_n,H_x^-1(V∩)) > n·(H_x(x_n),V), n∈. Since K is compact, we may assume WLOG that x_n→ x as n→∞ with some x∈ K. Set y_n := H_x(x_n) and y := H_x(x). Choose >0 such that B_(y)⊂. We have P_V^⊥y_n = (y_n,V) < 1n·(x_n,H_x^-1(V∩))→ 0 and thus also y-P_Vy_n≤y-y_n + P_V^⊥y_n → 0 as n→∞. Therefore, there exists n_0∈ such that y_n,P_Vy_n∈ B_(y) for n≥ n_0. In particular, for n≥ n_0 we have P_Vy_n∈ V∩ and hence nP_V^⊥y_n < (H_x^-1(y_n),H_x^-1(V∩)) ≤H_x^-1(y_n) - H_x^-1(P_Vy_n) ≤[sup_ξ∈ [y_n,P_Vy_n]D_ξ H_x^-1]·y_n - P_Vy_n≤[sup_ξ∈B_(y)D_ξ H_x^-1]·P_V^⊥y_n, which is a contradiction for large n. First of all, note that {S,H}_J_k(x) = -H_x(x)^⊤ J_ke for k=1,…,N. In particular, = H_x^-1(V∩). Also, |w^⊤ v| = v(w,v^⊥) for v,w∈^n. Hence, by Lemmas <ref> and <ref>, (x,) = (x,H_x^-1(V∩)) ≲ (H_x(x),V) ≲∑_k=1^N(H_x(x),(J_ke)^⊥) ≲∑_k=1^NJ_ke·(H_x(x),(J_ke)^⊥) = ∑_k=1^N|H_x(x)^⊤ J_ke| = ∑_k=1^N|{S,H}_J_k(x)|. Note that this also holds if J_ke=0 for some k. Finally, squaring left- and right-hand side of the latter inequality and applying Cauchy-Schwarz, the claim follows from the fact that there is some c>0 such that γ_k(x,H_x(x))≥ c for x∈ K and all k=1,…,N.
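As a concrete illustration of the first lemma of this appendix, one can compare the distance to an intersection of subspaces with the sum of the individual distances numerically. The following short Python sketch does this for the arbitrarily chosen pair V = span{e_1,e_2}, W = span{e_2,e_3} in ℝ^3; it merely visualizes that the ratio of the two quantities stays bounded away from zero and infinity, as the lemma asserts.

import numpy as np

def proj(B):
    # orthogonal projector onto the column span of B
    Q, _ = np.linalg.qr(B)
    return Q @ Q.T

def dist(x, P):
    # distance from x to the subspace with orthogonal projector P
    return np.linalg.norm(x - P @ x)

E = np.eye(3)
P_V = proj(E[:, :2])       # V = span{e_1, e_2}
P_W = proj(E[:, 1:])       # W = span{e_2, e_3}
P_VW = proj(E[:, [1]])     # V ∩ W = span{e_2}

rng = np.random.default_rng(0)
ratios = []
for _ in range(1000):
    x = rng.standard_normal(3)
    ratios.append(dist(x, P_VW) / (dist(x, P_V) + dist(x, P_W)))
print("min/max ratio:", min(ratios), max(ratios))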
http://arxiv.org/abs/2306.03043v1
20230605170654
Ind-geometric stacks
[ "Sabin Cautis", "Harold Williams" ]
math.AG
[ "math.AG", "math.RT" ]
Sabin Cautis, University of British Columbia, Vancouver BC, Canada, [email protected]. Harold Williams, University of Southern California, Los Angeles CA, USA, [email protected]. We develop the theory of ind-geometric stacks, in particular their coherent and ind-coherent sheaf theory. This provides a convenient framework for working with equivariant sheaves on ind-schemes, especially in derived settings. Motivating examples include the coherent Satake category, the double affine Hecke category, and related categories in the theory of Coulomb branches. Ind-geometric stacks July 31, 2023 ==================== § INTRODUCTION This paper develops the basic theory of ind-geometric stacks, which provide a convenient language for equivariant sheaf theory in infinite-dimensional settings. Recall that if G is an algebraic group acting on a variety X, then G-equivariant sheaves on X can be reformulated as sheaves on the quotient X/G. In general X/G is an Artin stack, and it will contain strictly less information than X together with its G-action. But it is useful to discuss X/G independently of X and G, for the same reasons it is useful to discuss smooth manifolds independently of an atlas: many results are stated and proved most clearly in coordinate-free terms, and fixing coordinates can obscure symmetries. Often we want to study equivariant sheaves on more general objects. For example, given a complex reductive group G, we have the affine Grassmannian _G, which is acted on by the group G_ of maps Ø→ G (where Ø = ℂ[[t]]). The affine Grassmannian is a central object in geometric representation theory, and fits into a larger family of G_-spaces _G,N introduced by Braverman-Finkelberg-Nakajima <cit.>. These depend additionally on a G-representation N, specializing to _G when N ≅ 0. They are motivated by the fact that the Coulomb branches of certain gauge theories can be interpreted as the spectra of their G_-equivariant Borel-Moore and K-homology. In <cit.> we study the category ^G_(_G,N) of G_-equivariant coherent sheaves on _G,N. This setting differs from that of the first paragraph in three ways. First, _G,N is an ind-scheme rather than a variety. Second, G_ is an infinite-type group scheme rather than an algebraic group. And third, _G,N is an object of derived rather than classical algebraic geometry. The considerations in the first paragraph are significantly amplified by this last point. In classical settings, avoiding the language of stacks avoids the appearance of higher category theory, but in derived settings this is no longer the case: a derived object like _G,N is already an object of higher-categorical mathematics. On the other hand, the first two points imply that _G,N/G_ is not a (derived) Artin stack: it lacks a flat cover by a union of affine schemes, and it has points with infinite-type stabilizers. Instead, _G,N/G_ is an example of an ind-geometric stack. In developing the basic theory of such objects, the present paper thus provides an efficient formalism for working with categories such as ^G_(_G,N). This is useful even in classical settings. For example, in <cit.> we use it to provide a shorter, more conceptual proof of the rigidity of the coherent Satake category ^ G_Ø_coh(_G) than that of <cit.>. The term ind-geometric stack reflects the following principle: it is better to view _G,N/G_ as a direct limit of quotients rather than a quotient of a direct limit.
On one hand, this lets various results be reduced to the affine case more efficiently, subsuming non-affine schemes in the more general case of geometric stacks. In particular, this streamlines our study of tamely presented morphisms and coherent pullback in <cit.>. On the other hand, compared to quotients, direct limits have a more dramatic effect on sheaf theory, as they break the close relationship between quasi-coherent and ind-coherent sheaves. Certain technical complications are thus avoided if direct limits are delayed until after all quotients are taken. §.§ Summary of definitions and results We refer to Section <ref> for our detailed conventions. For now the reader may take to be a field of characteristic zero, and to be the (enhanced homotopy) category of nonpositively graded commutative dg -algebras. In Section <ref> we review the basic theory of geometric stacks. Following <cit.> a geometric stack will mean a functor X: → which satisfies flat descent, has affine diagonal, and admits a flat cover A → X (here is the category of spaces). We caution that the term geometric stack varies in the literature, and in particular its usage in <cit.> is different but closely related. Section <ref> contains the definition and basic theory of ind-geometric stacks. An ind-geometric stack is a filtered colimit X ≅ X_ of truncated geometric stacks along closed immersions in the category of convergent stacks. Here truncated means the structure sheaves of the X_ are bounded. This definition naturally extends the notion of dg ind-scheme introduced in <cit.>. In particular, a (classical or dg) ind-scheme is also an ind-geometric stack, at least under mild separatedness conditions. An ind-geometric stack is reasonable if the maps among the X_ are almost finitely presented, and this in turn naturally extends the notion of reasonableness considered in <cit.> and <cit.>. Section <ref> discusses coherent and ind-coherent sheaf theory on ind-geometric stacks. In general we use the term coherent sheaf for what would be called a bounded pseudocoherent complex in <cit.>. We approach non-Noetherian ind-coherent sheaf theory via the theory of anticomplete t-structures <cit.>. That is, when X is a geometric stack (X) is defined as the anticompletion of the category (X) of quasi-coherent sheaves. This definition characterizes (X) by a bounded, colimit-preserving functor (X) →(X) satisfying a universal property, and specializes to the category of injective complexes <cit.> when X is classical. The category of coherent or ind-coherent sheaves on an ind-geometric stack X ≅ X_ is then the colimit under pushforward of the categories of sheaves on the X_. This is consistent in particular with the treatment of equivariant coherent sheaves on coherent classical ind-schemes in <cit.>. When X is an ind-geometric stack, (X) is in general only a subcategory of (X) and not of (X), hence the latter is of minimal use in studying (X). In particular, if X is an ind-scheme acted on by a group scheme G, to study G-equviariant sheaves we are forced to navigate around the fact that does not satisfy flat descent: in general (X/G) should not be defined as the category of G-equivariant objects of (X) <cit.>. It is by treating X/G as the direct limit of quotients of G-invariant subschemes X_⊂ X — i.e. by viewing X/G as an ind-geometric stack — that one minimizes the complications this causes. 
That is, (X/G) is best described in terms of the categories (X_/G) (rather than in terms of (X)), since these are controlled via anticompletion by the categories (X_/G), which are in turn controlled by descent. We caution that, despite the notation, (X) is not compactly generated in general. However, this holds for the class of coherent ind-geometric stacks, which includes our motivating examples. A geometric stack X is coherent if it admits a flat cover A → X with A a coherent ring, and if the abelian category (X)^ is compactly generated. The second condition is automatic if A is Noetherian, and beyond this it can be managed using the notion of tamely presented morphism studied in <cit.>. An ind-geometric stack is coherent if every reasonable geometric substack is coherent. In particular, in finite type our definition of (X) is consistent with that of <cit.>. Sections <ref> and <ref> study !-pullback and sheaf Hom in detail, in particular establishing their continuity properties and compatibility with pushforward and *-pullback. We emphasize that even though coherent sheaves are our primary interest, it is necessary to introduce ind-coherent sheaves in order to define these adjoint functorialities. While ind-coherent sheaves do not have an internal tensor product in general, they do admit external tensor products. This is sufficient to have a useful notion of ind-coherent sheaf Hom, which satisfies various compatibilities with the usual quasi-coherent notion. The results of these two sections will play an essential role in establishing the rigidity of the monoidal category ^G_(_G,N) in <cit.>. toc §.§ Acknowledgements We are deeply grateful to Sam Raskin, Hiro Lee Tanaka, Aaron Mazel-Gee, and Chang-Yeon Chough for taking the time to discuss numerous technical issues that arose in the preparation of this paper and its companions <cit.>. S.C. was supported by NSERC Discovery Grant 2019-03961 and H. W. was supported by NSF grants DMS-1801969 and DMS-2143922. § CONVENTIONS We collect here our notational and terminological conventions. Our default references for categorical and geometric background are <cit.>, and we follow their conventions up to a few exceptions noted below. * We use the terms category and ∞-category interchangeably, and say ordinary category when we specifically mean a category in the traditional sense. We write _(X,Y) for the mapping space between X, Y ∈, and regard ordinary categories as ∞-categories with discrete mapping spaces. * We write () for the category of commutative algebra objects of a monoidal category . We write for (^cn), where ^cn is the category of connective spectra (this would be ^cn in <cit.>). * We fix once and for all a Noetherian base ∈. * We use cohomological indexing for t-structures. If has a t-structure (^≤ 0, ^≥ 0) with heart ^ we write τ^≤ n : →^≤ n, H^n: →^, etc., for the associated functors. In this notation, the condition that is Noetherian is the condition that H^0() is an ordinary Noetherian ring and H^n() is finitely generated over H^0() for all n < 0. We use the terms left bounded and right bounded interchangeably with (cohomologically) bounded below and bounded above. * We write τ_≤ n for the subcategory of n-truncated objects in an ∞-category . In particular, τ_≤ 0 is the ordinary category of ordinary commutative rings. Note the distinction between subscripts and superscripts in this and the previous convention, e.g. τ_≤ n(^≤ 0) and ^[-n, 0] refer to the same subcategory of . * Given A ∈, we write _A for the category of A-modules (i.e. 
A-module objects in the category of spectra). If A is an ordinary ring this is the (enhanced) unbounded derived category of ordinary A-modules (i.e. of _A^). * An A-module M is coherent if it is bounded and almost perfect (i.e. τ^≥ nM is compact in _A^≥ n for all n). If A is coherent (i.e. A is a coherent ordinary ring and H^n(A) is a finitely presented A-module for all n), then M is coherent if and only if it is bounded and H^n(M) is a finitely presented A-module for all n. We write _A ⊂_A for the full subcategory of coherent modules. * Given A ∈, we write _A := _A/≅(_A^≤ 0), and write A := ∪_n A for the subcategory of truncated A-algebras (i.e. n-truncated for some n). If A is an ordinary ring containing , then _A is equivalently the (enhanced homotopy) category of nonpositively graded commutative dg A-algebras, or of simplicial/animated commutative A-algebras. * We implicitly fix two universes and associated category sizes: small and large. We write for the ∞-category of large ∞-categories (in <cit.> this would be , and would be its subcategory of small ∞-categories). We write ⊂ for the subcategory of presentable ∞-categories and left adjoints, and ⊂ for the further subcategory of presentable stable ∞-categories. * Given categories and , we write both (, ) and ^ for the category of functors from to . * All limit or colimit diagrams are implicitly small unless otherwise stated. Thus in “let X ≅ X_ be a filtered colimit” the indexing diagram is assumed to be small. By extension, () will refer to the category freely generated by under small filtered colimits even if is large (as in <cit.>). * If admits filtered colimits, a functor F: → is continuous if it preserves them. Suppose further that , are presentable, stable, and equipped with t-structures that are compatible with filtered colimits, and that F is exact. Then F is almost continuous if its restriction to ^≥ n is continuous for all n (equivalently, for n = 0). * A prestack (implicitly over ) is a functor from to the category of (possibly large) spaces. We write for the category of prestacks, and , for the variants with , in place of . We write : → for the Yoneda embedding. * A stack is a prestack which is a sheaf for the fpqc topology <cit.>. We write ⊂ for the category of stacks, and ⊂, ⊂ for its variants. (Note that does not admit arbitrary pushouts, but the use of <cit.> in defining the fpqc topology only requires closure under flat pushouts.) * If admits finite limits, we write () for the ∞-category of correspondences in <cit.>. This has the same objects as , but a morphism from X to Z in () is a diagram X Y Z in . * Let | and be classes of morphisms in which contain all isomorphisms and are stable under composition, and under base change along each other. Suppose also that ' ⊂ is a full subcategory such that Y ∈' whenever h: Y → X is in and X ∈'. Then we write (')_|, for the 1-full subcategory of () which only includes correspondences X Y Z such that h ∈, f ∈|, and X, Z ∈' (hence Y ∈'). Note that ' need not be closed under arbitrary pullbacks. The subcategory (')_|,⊂(')_|, which only includes correspondences in which h is an isomorphism is equivalent to the 1-full subcategory '_|⊂', likewise for the subcategory where f is an isomorphism and '^_⊂'^. * We presume our constructions and results remain valid if we replace with the category ^Δ of simplicial/animated commutative rings. We work with mostly to make some references easier to pinpoint. 
However, we do appeal to Tannaka duality in the proofs of Propositions <ref> and <ref>, and we do not explicitly know how to adapt this to the simplicial setting. On the other hand, any derived prestack has an underlying spectral prestack, and by definition these share the same category of quasi-coherent sheaves. Since our focus is on sheaves, it is in this sense more natural to work in the spectral setting. This distinction is also irrelevant to our intended applications, in which our base is and we have _≅_^Δ. § GEOMETRIC STACKS In this section we recall the basic theory of geometric stacks, and establish some results we will need in later sections. Many of these are simple extensions of existing results about (possibly derived) Artin stacks, but which seem to lack references in our desired generality. In particular we collect the basic properties of the main classes of morphisms we will need: morphisms of finite Tor-dimension or finite cohomological dimension, proper morphisms, and closed immersions. Two important technical results are that geometric stacks are convergent (Proposition <ref>) and are compact in the category of convergent 1-stacks (Proposition <ref>). These will play an important role in the next section, but require a different approach than that used for Artin stacks in <cit.> §.§ Definitions Recall our convention that a stack refers to a functor → satisfying fpqc descent (here is our fixed Noetherian base), and that the category of stacks is denoted by . Our terminology follows <cit.> (up to the presence of the base ). A stack X is geometric if its diagonal X → X × X is affine and there exists faithfully flat morphism B → X in . A morphism X → Y in is geometric if for any morphism A → Y, the fiber product X ×_Y A is geometric. We write ⊂ for the full subcategory of geometric stacks. Note here that products are taken in , hence are implicitly over . Also note that affineness of X → X × X implies that any morphism B → X is affine. In particular, (faithful) flatness of such a morphism is defined by asking that its base change to any affine scheme is such. More generally, a morphism X → Y in is (faithfully) flat if its composition with any faithfully flat A → X is (faithfully) flat. A faithfully flat morphism of geometric stacks will also be called a flat cover. Geometric morphisms are stable under composition and base change in . If f: X → Y is a morphism in , then f is geometric if X and Y are, and X is geometric if f and Y are. In particular, is closed under fiber products in . Over the sphere spectrum S this is <cit.>. Note that by <cit.>, <cit.> we have an equivalence ≅_/, where := _S. It then suffices to show is the preimage of under the forgetful functor to . Clearly X ∈ has a flat cover A → X in if and only if its image in does. Now note that if f: Y → Z, g: Z → W are morphisms in and g is affine, then f is affine if and only if g ∘ f is (since any B → Z factors through Z ×_W B → Z). The morphism X ×_ X → X ×_ S X is affine since it is a base change of → (_S ), hence X → X ×_ X is affine if and only if X → X ×_ S X is. Recall that an ordinary commutative ring A is coherent if every finitely generated ideal is finitely presented. More generally, A ∈ is coherent (resp. Noetherian) if A is coherent (resp. Noetherian) and H^n(A) is a finitely presented A-module for all n. A geometric stack X is locally coherent (resp. locally Noetherian) if there exists a flat cover A → X such that A is coherent (resp. Noetherian). 
It is coherent if it is locally coherent and (X)^ is compactly generated. A locally Noetherian geometric stack is coherent by <cit.>. §.§ Truncated and classical geometric stacks The definition of ind-geometric stack will involve the following class of geometric stacks. A geometric stack X is n-truncated if it admits a flat cover A → X such that A is n-truncated. We say X is classical if it is zero-truncated, and truncated if it is n-truncated for some n. We denote by ⊂ the full subcategory of truncated geometric stacks. Alternatively, note that the restriction functor (-)_≤ n: → takes to <cit.>. Write i_≤ n: → for the left adjoint of this restriction and τ_≤ n: → for their composition. Then if X is geometric, τ_≤ n X is an n-truncated geometric stack called the n-truncation of X, and X is n-truncated if and only if the natural map τ_≤ n X → X is an isomorphism <cit.>. In particular, if is a field and X ∈ is an ordinary algebraic variety, then i_≤ 0 X is a zero-truncated geometric stack. The functor i_≤ 0: → embeds the category of ordinary varieties (more generally, ordinary quasi-compact, semi-separated schemes, or quasi-compact Artin stacks with affine diagonal) as a full subcategory of , and by default we will identify these categories with their images in . Our terminology follows <cit.>, but we caution that what we call n-truncatedness is called n-coconnectedness in <cit.>. We also note that this use of the symbol τ_≤ n and of the term truncation is different from their usual meaning in terms of truncatedness of mapping spaces, but in practice no ambiguity will arise (and this abuse has the feature that τ_≤ n A ≅τ_≤ n A). §.§ Coherent sheaves Recall that for any stack X, the category (X) of quasi-coherent sheaves on X is the limit of the categories _A over all maps A → X. If X is geometric, (X) is presentable and is equivalent to the corresponding limit over the Cech nerve of any flat cover <cit.>. We say an A-module M is coherent if it is bounded and almost perfect (i.e. τ^≥ n M is compact in _A^≥ n for all n). If X ∈, then ∈(X) is coherent if f^*() is a coherent A-module for some (equivalently, any) flat cover A → X. We write (X) ⊂(X) for the full subcategory of coherent sheaves. While the above definition makes sense when X is not truncated, without additional hypotheses the resulting category (X) may be degenerate (for example, it may contain no nonzero objects). It will be convenient to exclude such degenerate cases from our discussion, though our treatment of coherent sheaves on ind-geometric stacks will include well-behaved non-truncated geometric stacks within its scope. If X is locally coherent, the standard t-structure on (X) restricts to one on (X). If X is coherent, it follows from <cit.> that specifically (X)^ is compactly generated by (X)^. If X is zero-truncated but not locally coherent, our use of the term coherent sheaf corresponds to the notion of bounded pseudocoherent complex in <cit.>, see <cit.>. Coherent sheaves have the following basic functoriality. A morphism f: X → Y in is of Tor-dimension ≤ n if f^*((Y)^≥ 0) ⊂(X)^≥ -n, and is of finite Tor-dimension if it is of Tor-dimension ≤ n for some n. We have the following variant of standard results. Morphisms of Tor-dimension ≤ n are stable under base change in , and morphisms of finite Tor-dimension are also stable under composition. A morphism f: X → Y of geometric stacks is of Tor-dimension ≤ n if and only if its base change along any given flat cover h: A → Y is. In this case f^*: (Y) →(X) takes (Y) to (X).
Stability under composition is immediate. Flat locality on the target follows since if h' and f' are defined by base change, then h'^* is conservative, t-exact, and satisfies h'^* f^* ≅ f'^* h^*. If h: A → Y is arbitrary, then h' is affine, hence h'_* is conservative, t-exact, and satisfies f^*h_* ≅ h'_* f'^* <cit.>. Stability under base change along affine morphisms follows, and arbitrary base change now follows by composing an arbitrary Y' → Y with a flat cover B → Y'. The last claim then follows since almost perfect modules are stable under extension of scalars. §.§ Pushforward and base change Given a morphism f: X → Y in , the pushforward f_*: (X) →(Y) is defined as the right adjoint of f^*. In general f_* is poorly behaved, but in the geometric case we have the following results. A different proof in a slightly different context is sketched in <cit.>, <cit.>. The template we use below will be used again to prove Propositions <ref> and <ref>, and follows the proof of <cit.>. Recall that f_* being almost continuous means its restriction to (X)^≥ n is continuous for all n. If f: X → Y is a morphism of geometric stacks, then f_*: (X) →(Y) is almost continuous. Let the following be a Cartesian diagram of geometric stacks. [baseline=(current bounding box.center),thick,>=] (a) at (0,0) X'; (b) at (3,0) Y'; (c) at (0,-1.5) X; (d) at (3,-1.5) Y; [->] (a) to node[above] f' (b); [->] (b) to node[right] h (d); [->] (a) to node[left] h'(c); [->] (c) to node[above] f (d); If h is of finite Tor-dimension, then the Beck-Chevalley map h^* f_*() → f'_* h'^*() is an isomorphism for all ∈(X)^+. Proposition <ref> is true when Y is affine. In this case (Y) is compactly generated by perfect sheaves. If ∈(Y) is perfect so is f^*(), hence the claim follows by applying <cit.> to τ^≥ n f^*: (Y) →(X)^≥ n. Proposition <ref> is true when Y and Y' are affine. Let Y ≅ A and Y' ≅ B. Since h is affine h_* conservative, hence it suffices to show h_* h^* f_*() → h_* f'_* h'^*() is an isomorphism. Rewriting the second term using h_* f'_* ≅ f_* h'_*, one sees this is the specialization of the Beck-Chevalley map θ_M: f_*() M → f_*( f^*(M)) in the case M = B. Write for the full subcategory of M ∈_A such that θ_M is an isomorphism. The assignment M ↦θ_M extends to a functor _A →_A^Δ^1, which is exact since the source and target of θ_M are exact in M. It follows that is a stable subcategory closed under retracts, as isomorphisms form such a subcategory of _A^Δ^1. Clearly A ∈, hence contains all perfect A-modules. If M is of Tor-dimension ≤ n, then we can write it as a filtered colimit M ≅_ M_ of perfect A-modules of Tor-dimension ≤ n <cit.>. The claim now follows since tensoring is continuous, since the f^*(M_) are uniformly bounded below, and since f_* is almost continuous by Lemma <ref>. Proposition <ref> is true when Y' is affine and h is faithfully flat. Let Y_ denote the Cech nerve of h (so Y_0 = Y') and f_k: X_k → Y_k the base change of f. Given a morphism p: i → j in Δ_s, let h_p: Y_j → Y_i denote the associated map and h'_p: X_j → X_i its base change. The categories (X_k)^≥ 0 and (Y_k)^≥ 0, together with the functors h^*_p, h'^*_p, and τ^≥ 0∘ f^*_k, form a diagram Δ^1 ×Δ_s →. By Lemma <ref> the Beck-Chevalley transformation h_p^* f_i*→ f_j* h'^*_p restricts to an isomorphism of functors (X_i)^≥ 0→(Y_j)^≥ 0 for any p. Since h is faithfully flat we have (X)^≥ 0≅lim_Δ_s(X_i)^≥ 0 and (Y)^≥ 0≅lim_Δ_s(Y_i)^≥ 0, and the claim follows from <cit.>. 
Let ≅_ be a filtered colimit in (X)^≥ 0, let h: X' ≅ A → Y be a flat cover, and define f': X' → Y', h': X' → X by base change. Since h^* is continuous and conservative it suffices to show h^* f_*(_) → h^* f_*() is an isomorphism. By Lemma <ref> and left t-exactness of f_* this is equivalent to f'_* h'^*(_) → f'_* h'^*() being an isomorphism. Since h'^* is t-exact this follows from Lemma <ref>. Let ϕ: U ≅ A → Y and θ: U' ≅ A' → U ×_Y Y' be flat covers. We obtain a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; ; ; ; (ab) at (,0) Z'; (ad) at (++,0) U'; (ba) at (0,) X'; (bc) at (+,) Y'; (cb) at (,+) Z; (cd) at (++,+) U; (da) at (0,++) X; (dc) at (+,++) Y; [->] (ab) to node[above] g' (ad); [->] (ab) to node[above left, pos=.25] ψ' (ba); [->] (ab) to node[right,pos=.2] ξ' (cb); [->] (ad) to node[below right] ψ (bc); [->] (ad) to node[right] ξ (cd); [->] (ba) to node[left] (da); [->] (cb) to node[above,pos=.25] g (cd); [->] (cb) to node[above left, pos=.25] ϕ' (da); [->] (cd) to node[below right] ϕ (dc); [->] (da) to node[above,pos=.75] (dc); [-,line width=6pt,draw=white] (ba) to (bc); [->] (ba) to node[above,pos=.75] (bc); [-,line width=6pt,draw=white] (bc) to (dc); [->] (bc) to node[right,pos=.2] (dc); in which all but the left and right faces are Cartesian. Note that ψ is faithfully flat and ξ is of finite Tor-dimension, since they are the compositions of θ with the base changes of ϕ and h, respectively. Since ψ^* is conservative, it suffices to show the top left arrow in [baseline=(current bounding box.center),thick,>=] ; ; ; (aa) at (0,0) ψ^* h^* f_*(); (ab) at (,0) ψ^* f'_* h'^*(); (ac) at (+,0) g'_*ψ'^* h'^*(); (ba) at (0,) ξ^* ϕ^*f_*(); (bb) at (,) ξ^* g_* ϕ'^*(); (bc) at (+,) g'_* ξ'^* ϕ'^*(); [->] (aa) to node[above] (ab); [->] (ab) to node[above] (ac); [->] (ba) to node[above] (bb); [->] (bb) to node[above] (bc); [->] (aa) to node[below,rotate=90] ∼ (ba); [->] (ac) to node[below,rotate=90] ∼ (bc); is an isomorphism. This follows since the bottom left and top right arrows are isomorphisms by Lemma <ref>, and the bottom right is by Lemma <ref>. Following <cit.>, we can encode the coherence properties of base change isomorphisms using correspondence categories. Let be a category with finite limits, and let | and be classes of morphisms which are stable under composition and under base change along each other. Recall that we have an associated category ()_|, whose morphisms are correspondences X Y Z such that f is in | and h is in . We say a functor Φ: ^→ is left |-adjointable if for every Cartesian square [baseline=(current bounding box.center),thick,>=] (a) at (0,0) X'; (b) at (3,0) Y'; (c) at (0,-1.5) X; (d) at (3,-1.5) Y; [->] (a) to node[above] f' (b); [->] (b) to node[right] h (d); [->] (a) to node[left] h'(c); [->] (c) to node[above] f (d); with f ∈|, the associated Beck-Chevalley transformation Φ(f)^L Φ(h) →Φ(h') Φ(f')^L is an isomorphism. When = all contains all morphisms, we have the following universal property, where () and are the (∞,2)-categorical enhancements of () and . Restriction along ^⊂()_|,all induces a monomorphism _(()_|,all, ) →_(^, ) with essential image the left |-adjointable functors. In particular, any left |-adjointable functor Φ: ^→ extends canonically to a functor ()_|,all→ whose value on a correspondence X Y Z is Φ(f)^L Φ(h): Φ(X) →Φ(Z). This property was identified in <cit.>. We have followed the formulation of <cit.>, which closely follows the proof due to <cit.>. 
We will refer to this proposition also to invoke any of its variants in which left is replaced with right and/or the roles of | and are swapped <cit.>. Returning to the case at hand, let ftd denote the class of morphisms of finite Tor-dimension in , so that morphisms in ()_all,ftd are correspondences X Y Z such that h is of finite Tor-dimension. By Propositions <ref> and <ref> the assignment X ↦(X)^+ extends to a functor ^+: ()_all,ftd→ whose value on the above correspondence is f_* h^*: (X)^+ →(Z)^+. §.§ Cohomological dimension We have better control of f_* in the following case. Recall that a morphism f: X → Y in is of cohomological dimension ≤ n if f_*((X)^≤ 0) ⊂(Y)^≤ n, and is of finite cohomological dimension if it is of cohomological dimension ≤ n for some n. We caution that morphisms of infinite cohomological dimension are ubiquitous in our motivating context. For example, if G is a complex reductive group, BG_→ is of infinite cohomological dimension. Morphisms of finite cohomological dimension are stable under composition and base change in . A morphism f: X → Y of geometric stacks is of finite cohomological dimension if and only if its base change along any given flat cover A → Y is. In this case f_*: (X) →(Y) is continuous, and for any Cartesian square [baseline=(current bounding box.center),thick,>=] (a) at (0,0) X'; (b) at (3,0) Y'; (c) at (0,-1.5) X; (d) at (3,-1.5) Y; [->] (a) to node[above] f' (b); [->] (b) to node[right] h (d); [->] (a) to node[left] h'(c); [->] (c) to node[above] f (d); in the Beck-Chevalley transformation h^*f_*() → f'_* h'^*() is an isomorphism for all ∈(X). Stability under composition is immediate, while stability under base change and flat locality on the target follow from <cit.> (whose proof applies to geometric stacks, not just Artin stacks with affine diagonal). The remaining properties then follow from <cit.> or <cit.>. Let ()_fcd,all denote the 1-full subcategory of () which only includes correspondences X Y Z such that f is of finite cohomological dimension. By Propositions <ref> and <ref> the assignment X ↦(X) extends to a functor : ()_fcd,all→ whose value on the above correspondence is f_* h^*: (X) →(Z). §.§ Proper and almost finitely presented morphisms Recall following <cit.> that a morphism f: X → Y of stacks is (locally) almost of finite presentation if, for any n and any filtered colimit A ≅ A_ in , the canonical map X(A_) → X(A) ×_Y(A) Y(A_) is an isomorphism (we omit the word locally by default, as all morphisms we consider will be quasi-compact or effectively so). [<cit.>] Almost finitely presented morphisms are stable under composition and base change in . If f and g are composable morphisms in such that g ∘ f and g are almost of finite presentation, then so is f. We mostly consider this condition together with properness. We say f: X → Y is proper if for any A → Y, the fiber product X ×_Y A is proper over A in the sense of <cit.>. In particular, this requires that X ×_Y A be a quasi-compact, separated (spectral) algebraic space. Proper morphisms are stable under composition and base change in . If f and g are morphisms in such that g ∘ f and g are proper, then so is f. Proper morphisms of geometric stacks are of finite cohomological dimension. If f is a proper, almost finitely presented morphism of truncated geometric stacks, then f_* takes (X) to (Y). Stablity under base change is immediate, and the claims about composition follow from <cit.>. 
By Proposition <ref> finiteness of cohomological dimension can be checked after base change along a flat cover h: A → Y, where it follows from <cit.>. If f is almost of finite presentation and f' is its base change along h, then f'_* preserves coherence by <cit.>. Then so does h^* f_* by Proposition <ref>, hence so does f_* by definition. Let ()_prop,ftd denote the 1-full subcategory of () which only includes correspondences X Y Z such that h is of finite Tor-dimension, f is proper and almost finitely presented, and X and Z are truncated (hence so is Y). By Propositions <ref> and <ref>, we can restrict the domain and values of either (<ref>) or (<ref>) to obtain a functor : ()_prop,ftd→ whose value on the above correspondence is f_* h^*: (X) →(Z). §.§ Closed immersions A morphism f: X → Y in is a closed immersion if for any A → Y, the morphism τ_≤ 0(X ×_Y A) →τ_≤ 0 A is a closed immersion of ordinary affine schemes. The following statement follows immediately from its classical counterpart. Closed immersions are stable under composition and base change in . If f and g are composable morphisms in such that g ∘ f and g are closed immersions, then so is f. A closed immersion need not be affine (for example, the inclusion of a closed subscheme of a classical ind-scheme is typically not affine in the derived sense), but in we have the following. Closed immersions between geometric stacks are affine. Let f: X → Y be a closed immersion between geometric stacks, and let A → Y be arbitrary. The base change f': X' → A of f factors through a map f”: X' → B, where B := f'_*(_X') ∈_A. We claim f” is an isomorphism. By Tannaka duality <cit.> it suffices to show f”_*: (X') →_B is an equivalence. By Barr-Beck-Lurie <cit.> it further suffices to show f'_*: (X') →_A is conservative and preserves small colimits. Since f is a closed immersion τ_≤ 0 f”: τ_≤ 0 X' →τ_≤ 0 B is an isomorphism, hence f”_* restricts to an equivalence (X')^≅_B^. It follows that the restriction of f'_* to (X')^ is conservative, continuous, and factors through _A^. It follows as in <cit.> that f'_* is t-exact. Suppose now that f'_*() ≅ 0 for some ∈(X'). It follows from the preceding paragraph that ^n () ≅ 0 for all n ∈. Since (X') is t-complete we then have ≅ 0, hence f'_* is conservative. Similarly, suppose ≅_ is a filtered colimit in (X'), and let ϕ: f'_*(_) → f'_*() denote the natural map. Since the t-structures on (X') and _A are compatible with filtered colimits, it follows from the preceding paragraph that ^n(ϕ) is an isomorphism for all n ∈. That ϕ is an isomorphism, hence that f'_* is continuous, now follows from the t-completeness of _A. But then f'_* preserves all small colimits since it is exact <cit.>. Let f: X → Y_, h: Y' → Y_, and i: Y_→ Y_ be morphisms of geometric stacks. If i is a closed immersion, the map τ_≤ 0(X ×_Y_ Y') →τ_≤ 0(X ×_Y_ Y') is an isomorphism. Let X'_ := X ×_Y_ Y', X'_ := X ×_Y_ Y', and suppose first that X and Y' are affine. Fixing a flat cover B_→ Y_ we obtain flat covers A'_→ X'_, A'_→ X'_ by base change. It suffices to show the induced morphism τ_≤ 0 A'_→τ_≤ 0 A'_ is an isomorphism <cit.>. If A → X, B' → Y', and B_→ Y_ are also obtained by base change from B_→ Y_, then τ_≤ 0 B_→τ_≤ 0 B_ is a closed immersion since i is. But then τ_≤ 0 A'_→τ_≤ 0 A'_ is an isomorphism since A'_≅ A _B_ B' and A'_≅ A _B_ B'. In the general case, fix flat covers C → X and D' → Y', and let C'_ := C ×_Y_ D' and C'_ := C ×_Y_ D'. By the preceding paragraph τ_≤ 0 C'_→τ_≤ 0 C'_ is an isomorphism. 
But since it is the base change of τ_≤ 0 X'_→τ_≤ 0 X'_ along the flat cover τ_≤ 0 C'_→τ_≤ 0 X'_, the claim follows. §.§ Convergence Recall that denotes the category of functors →, and ⊂ the subcategory of functors satisfying fpqc descent. The restriction functor (-)_< ∞: → has a fully faithful right adjoint, by which we generally regard as a subcategory of . One says a prestack is convergent (or nilcomplete) if it is contained in this subcategory. Explicitly, X ∈ is convergent if for all A ∈ the natural morphism X(A) →lim X(τ_≤ n A) is an isomorphism. We have an induced notion of convergent stack, which is unambiguous in the following sense. The inclusion ⊂ identifies with ∩. That ∩⊂ follows from the definition of the fpqc topology and from <cit.> (closure of under pushouts along flat morphisms is sufficient to apply this to ). Now suppose X ∈⊂⊂. Note that τ_≤ n: ^→τ_≤ n^ preserves finite products since it preserves colimits and ^ is additive, hence τ_≤ n: → preserves finite products by <cit.>. Then if A ≅∏_i=1^n A_i is a finite product in , we have X(A) ≅lim_n X(τ_≤ n A) ≅lim_n,i X(τ_≤ n A_i) ≅lim_i X(A_i). Similarly, if A → A^0 in is faithfully flat and A^∙ its Cech nerve, then X(A) ≅lim_n X(τ_≤ n A) ≅lim_n,i X(τ_≤ n A^i) ≅lim_i X(A^i). Thus X ∈ by <cit.>. We now have the following result in the geometric case. Geometric stacks are convergent. Suppose X ∈ and A ∈. Recall again that using <cit.>, <cit.> we have an equivalence ≅_/, where := _S for S the sphere spectrum. We define convergent objects in the same way as in . By <cit.> we have that _( A, X) is the fiber of the map _( A, X) →_( A, ) induced by composition with X → over the point corresponding to A →. Since the same holds for each τ_≤ nA, and since is convergent in , it follows that X is convergent in if its image in is convergent. Consider the natural diagram [baseline=(current bounding box.center),thick,>=] (a) at (0,0) _( A, X); (b) at (7,0) _()( (X)^≤ 0, _A^≤ 0); (c) at (0,-1.5) lim_(τ_≤ n A, X); (d) at (7,-1.5) lim_()( (X)^≤ 0, _τ_≤ nA^≤ 0).; [->] (a) to node[above] (b); [->] (b) to node[right] (d); [->] (a) to node[left] (c); [->] (c) to node[above] (d); By <cit.> the top map is a monomorphism if the corresponding map from _( A, X) to _()( (X), _A) is a monomorphism, which in turn follows from Tannaka duality <cit.>. Since this also holds replacing A with τ_≤ nA, and since monomorphisms are stable under limits, it follows similarly that the bottom map is a monomorphism. The right map is an isomorphism, since we have _A^≤ 0≅lim_m _A^[m,0]≅lim_m,n_τ_≤ n A^[m,0]≅lim_n _τ_≤ n A^≤ 0, in (). Here the first and third equivalences follow from the relevant t-structures being left complete and () → preserving limits <cit.>, the second from _A^[m,0]→_τ_≤ n A^[m,0] being an equivalence for m ≤ - n. It follows that the left map in (<ref>) is a monomorphism, so we must show it is essentially surjective. Since the horizontal maps are monomorphisms, it suffices to show the right isomorphism restricts to an essential surjection between their essential images. By Tannaka duality <cit.> the essential image of the bottom map consists of systems {G_n: (X)^≤ 0→_τ_≤ nA^≤ 0} of symmetric monoidal functors which preserve small colimits and flat objects, similarly for the top map. Let G: (X)^≤ 0→_A^≤ 0 be the functor associated to some such system {G_n} under the right isomorphism. Then G preserves small colimits since is closed under limits in <cit.>. 
Moreover, if G_n() ≅τ_≤ nA ⊗_A G() is flat over τ_≤ nA for all n, then since τ_≤ nG() ≅τ_≤ n(τ_≤ nA ⊗_A G()) by <cit.> it follows from the definition of flatness that G() is flat over A. Thus G is in the image of the top map in (<ref>), establishing the claim. Since is closed under products and targets of flat morphisms in , the restriction functor (-)_≤ n: → takes to <cit.>. We write i_≤ n: → for the resulting left adjoint and τ_≤ n: → for their composition. The functors (-)_≤ n induce an equivalence ≅lim in <cit.>. Moreover, for any X ∈ the natural map τ_≤ n X → X is an isomorphism. This is a special case of Lemma <ref>, but more explicitly if τ_≤ n^ pre: → denotes the composition of (-)_≤ n: → and its left adjoint, then τ_≤ n^ pre X → X is an isomorphism since = ∪_n and since colimits in are computed objectwise. But τ_≤ n is the sheafification of τ_≤ n^ pre, so τ_≤ n X → X is an isomorphism since sheafification is continuous. In particular, if X is a geometric stack Proposition <ref> implies τ_≤ n X ≅τ_≤ n X, hence we obtain the following corollary. For any geometric stack X we have X ≅τ_≤ n X in . Now let ⊂ denote the full subcategory consisting of X such that X(A) is an (n+1)-truncated space for all A ∈. Then Proposition <ref> is refined by the following result, the first half of which is a variant of <cit.>, the second of <cit.>. Geometric stacks are objects of . Moreover, is closed under filtered colimits in , and truncated geometric stacks are compact as objects of . Let X be a geometric stack. To show X ∈, it suffices to show that X_≤ n belongs to τ_≤ n+1, the category of (n+1)-truncated objects of , since for A ∈ we have X(A) ≅_(( A)_≤ n, X_≤ n). Let B → X be a flat cover, so that ( B)_≤ n→ X_≤ n is a flat cover in . If ( B)_≤ n^∙ is its Cech nerve, we have X_≤ n≅ ( B)_≤ n^m in . Equivalently, X_≤ n is the sheafification of the same colimit taken in . Since sheafification is left exact it preserves (n+1)-truncated objects <cit.>, hence it suffices to show the colimit taken in is (n+1)-truncated. For any m an object of is m-truncated if its values on are m-truncated spaces <cit.>. Each ( B)_≤ n^m is affine since X is geometric, hence is then n-truncated in since is an (n+1)-category. The claim then follows since values of colimits in are computed objectwise, and since the geometric realization of a groupoid of n-truncated spaces is (n+1)-truncated. We now claim X_≤ n is compact in τ_≤ n+1. By the argument of <cit.> we have that τ_≤ n+1 is closed under filtered colimits in (noting that by <cit.> we have τ_≤ n+1 = ∩τ_≤ n+1 since is closed under limits in ). It follows that each ( B)_≤ n^m is compact in τ_≤ n+1 since it is so in . Moreover, X_≤ n is their colimit in τ_≤ n+1 since it is their colimit in . But τ_≤ n+1 is an (n+2)-category, so by the proof of <cit.> X_≤ n is also the colimit in τ_≤ n+1 of the ( B)_≤ n^m over the finite subdiagam Δ_s, ≤ n+2^op⊂Δ_s^op. It follows that X_≤ n is compact in τ_≤ n+1 <cit.>. Next note that <cit.> also implies is the full subcategory of X ∈ such that X_≤ n∈ for all n. But then τ_≤ n+1 = ∩τ_≤ n+1 implies is the full subcategory of X ∈ such that X_≤ n∈τ_≤ n+1 for all n. Closure of under filtered colimits then follows since (-)_≤ n: → is continuous and since as recalled above τ_≤ n+1 is closed under filtered colimits in . Now suppose X is an n-truncated geometric stack, let Y ≅ Y_ be a filtered colimit in , and consider the following diagram. 
[baseline=(current bounding box.center),thick,>=] ; ; ; ; (aa) at (0,0) _(X, Y_); (ab) at (,0) _τ_≤ n+1(X_≤ n, (Y_)_≤ n); (bb) at (,) _τ_≤ n+1(X_≤ n, (Y_)_≤ n); (ca) at (0,+) _(X, Y); (cb) at (,+) _τ_≤ n+1(X_≤ n, Y_≤ n); [->] (aa) to node[above] (ab); [->] (ca) to node[above] (cb); [->] (ab) to node[above] (bb); [->] (bb) to node[left] (cb); [->] (aa) to node[right] (ca); The bottom right map is an isomorphism since by the preceding paragraph (-)_≤ n restricts to a continuous functor →τ_≤ n+1. Since X is n-truncated it is the image of X_≤ n under the left adjoint i_≤ n: → of (-)_≤ n, hence the horizontal maps are isomorphisms. The top right map is an isomorphism since X_≤ n∈τ_≤ n+1 is compact, hence the left map is an isomorphism and X is compact in . § IND-GEOMETRIC STACKS We now define our main objects of study and establish their basic properties. Key technical results include the consistency of the notions of truncated and reasonable geometric substack (Proposition <ref>), and the closure of ind-geometric stacks under fiber products (Proposition <ref>) and ind-closed filtered colimits (Proposition <ref>). §.§ Definitions Recall from Proposition <ref> that is contained in the full subcategory ⊂ of convergent stacks. This subcategory plays a central role in our discussion because colimits in are typically more natural than colimits in . For example, any A is the colimit of its truncations τ_≤ n A in , but not in unless A is itself truncated. In particular, the inclusion ⊂ does not preserve colimits in general. An ind-geometric stack is a convergent stack X which admits an expression X ≅_ X_ as a filtered colimit in of truncated geometric stacks along closed immersions. We call such an expression an ind-geometric presentation of X. This is the natural extension of the derived notion of ind-scheme introduced in <cit.>, see Proposition <ref>. We write ⊂ for the full subcategory of ind-geometric stacks. Any geometric stack X is ind-geometric since X ≅τ_≤ n X in by Proposition <ref>. A reasonable presentation is an ind-geometric presentation X ≅_ X_ in which the structure maps are almost finitely presented. An ind-geometric stack is reasonable if it admits a reasonable presentation, and coherent if it admits a reasonable presentation whose terms are coherent geometric stacks. This is the natural extension of the derived notion of reasonableness introduced in <cit.>, which in turn extends the classical notion of reasonableness in <cit.>. We write ⊂ (resp. ⊂) for the full subcategory of reasonable (resp. coherent) ind-geometric stacks. A non-truncated geometric stack need not be reasonable as an ind-geometric stack (a basic example being the self-intersection of the origin in ^∞), though we do have the following result. If a geometric stack X is locally coherent (resp. coherent), then it is reasonable (resp. coherent) as an ind-geometric stack. Let A → X be a flat cover such that A is coherent. Then τ_≤ n A →τ_≤ n+1 A is almost of finite presentation for all n <cit.>. This is the base change of the affine morphism τ_≤ n X →τ_n+1 X along a flat cover, and it follows from <cit.> that τ_≤ n X →τ_n+1 X is also almost finitely presented. Thus X is reasonable by Proposition <ref>. If additionally (X)^ is compactly generated, hence X is coherent as a geometric stack, then X is coherent as an ind-geometric stack since (X)^≅(τ_≤ nX)^ for all n. §.§ Truncated and classical ind-geometric stacks The following extends Definition <ref>. 
An ind-geometric stack X is n-truncated if it admits an ind-geometric presentation X ≅ X_ in which each X_ is an n-truncated geometric stack. We say X is classical if it is zero-truncated. A typical example of a classical ind-geometric stack is the following. Suppose X ≅ X_ is a presentation of a classical ind-scheme as a filtered colimit of quasi-compact, semi-separated ordinary schemes (regarded as objects of ) along closed immersions, and that G is a classical affine group scheme acting on X. For each the induced map X_× G → X factors through some X_. The closure X'_⊂ X_ of its image is a G-invariant closed subscheme of X. We obtain a presentation X ≅ X'_ by closed G-invariant subschemes, and it follows that the quotient X/G (in ) is a classical ind-geometric stack with ind-geometric presentation X/G ≅ X'_/G. Note that the quotients X'_/G taken in are again geometric <cit.>, hence convergent (Proposition <ref>), hence they coincide with the quotients X'_/G taken in . Recall that (-)_≤ n: → identifies the category of n-truncated geometric stacks with a full subcategory of , the inverse equivalence being given by the left adjoint i_≤ n: →. Note that (-)_≤ n takes filtered colimits in to filtered colimits in τ_≤ n+1, since it restricts from a continuous functor →, and since as in Proposition <ref> and its proof ⊂ and τ_≤ n+1⊂ are closed under filtered colimits. Since i_≤ n is continuous it follows that (-)_≤ n identifies the category of n-truncated ind-geometric stacks with the obvious subcategory of . Again letting τ_≤ n: → denote the composition of (-)_≤ n and i_≤ n, the following result states in particular that Definition <ref> is indeed the obvious extension of <cit.> from schemes to geometric stacks (it is stated differently so that reasonableness may be introduced more easily). A convergent stack X is ind-geometric if and only if τ_≤ n X is an n-truncated ind-geometric stack for all n. The only if direction follows since τ_≤ n preserves closed immersions of geometric stacks, and since by the above discussion its restriction to is continuous. The if direction follows from Proposition <ref> below (which does not depend on the current result), since each τ_≤ nX →τ_≤ n+1X is classically an isomorphism, hence is an ind-closed immersion. §.§ Geometric substacks Truncatedness plays an essential role in our discussion due to the following variant of <cit.> (though see Remark <ref>). The claim follows immediately from Proposition <ref>, but would fail if Y were not truncated: in this case Y is not compact in , since e.g. 𝕀_Y does not factor through any truncation of Y. Let X ≅ X_ be an ind-geometric presentation. Then for any truncated geometric stack Y, the natural map _(Y, X_) →_(Y, X) is an isomorphism. To discuss ind-geometric stacks more intrinsically, without referring to particular ind-geometric presentations, the following notion is useful. Let X be an ind-geometric stack. A truncated (resp. reasonable) geometric substack of X is a truncated geometric stack X' equipped with a closed immersion X' → X (resp. an almost finitely presented closed immersion X' → X). Let X ≅ X_ be an ind-geometric (resp. reasonable) presentation. Then for all , the structure morphism i_: X_→ X realizes X_ as a truncated (resp. reasonable) geometric substack of X. Any other truncated (resp. reasonable) geometric substack X' → X can be factored as X' X_ X for some , and in any such factorization j_ is a closed immersion (resp. almost finitely presented closed immersion), hence affine. 
To show i_ is a closed immersion, fix A → X and let Z := X_×_X A. Since τ_≤ 0 Z ≅τ_≤ 0(X_×_Xτ_≤ 0 A) we may assume A is classical. By Proposition <ref> we can then factor A → X through some X_', which we may assume satisfies ' ≥. For ”≥' let Z_” := X_×_X_” A. We have Z ≅_”≥' Z_” since filtered colimits in are left exact <cit.>. Moreover, τ_≤ 0 Z ≅_”≥'τ_≤ 0 Z_” since by Proposition <ref> and its proof all terms are in and Y ↦τ_≤ 0 Y preserves filtered colimits in . To show τ_≤ 0 Z →τ_≤ 0 A is a closed immersion it then suffices to show τ_≤ 0 Z_'→τ_≤ 0 Z_” is an isomorphism for any ”≥', since τ_≤ 0 Z_'→τ_≤ 0 A is a closed immersion by hypothesis. But this follows from Proposition <ref>.

Now suppose the given presentation is reasonable, and let A ≅ A_ be a filtered colimit in for some n. By Proposition <ref> we have X(A_) ≅_≥ X_(A_) for all β, and likewise X(A) ≅_≥ X_(A). We then have X_(A) ×_X(A)_ X(A_) ≅_≥(X_(A) ×_X_(A)_ X_(A_)) since filtered colimits of spaces are left exact <cit.>. But the right hand colimit is isomorphic to _ X_(A_) since each individual term is by hypothesis.

Finally, by Proposition <ref> we can factor X' → X through a morphism j_: X' → X_ for some . Then j is a closed immersion by Proposition <ref>, hence is affine by Proposition <ref>, and is almost of finite presentation in the reasonable case by Proposition <ref>.

§.§ Properties of morphisms

Notions such as ind-properness extend from ind-schemes to ind-geometric stacks in the obvious way.

Let f: X → Y be a morphism of ind-geometric stacks, and let X ≅_ X_ be an ind-geometric presentation. The following conditions are equivalent.

* For every commutative square [with top map X' → X, bottom map Y' → Y, left map f': X' → Y', and right map f: X → Y] in which X' → X and Y' → Y are truncated geometric substacks, the map f' is proper (resp. a closed immersion, of finite cohomological dimension).

* For every X_ there exists a commutative square [with top map X_→ X, bottom map Y_→ Y, left map f_: X_→ Y_, and right map f: X → Y] in which Y_ is a truncated geometric substack of Y and f_ is proper (resp. a closed immersion, of finite cohomological dimension).

A morphism f: X → Y of ind-geometric stacks is ind-proper (resp. an ind-closed immersion, of ind-finite cohomological dimension) if it satisfies the equivalent conditions of Proposition <ref>.

Fix an ind-geometric presentation Y ≅ Y_. That (1) implies (2) follows since f ∘ i_ factors through some Y_ by Proposition <ref>. To show (2) implies (1), fix a diagram (<ref>).
By hypothesis and Proposition <ref> there exists a diagram of the left-hand form for some , [baseline=(current bounding box.center),thick,>=] [matrix] at (0,0) ; ; ; ; ; (aa) at (0,0) X'; (ac) at (+,0) Y'; (bb) at (,) X; (bd) at (++,) Y; (ca) at (0,+) X_; (cc) at (+,+) Y_; [->] (aa) to node[above] f' (ac); [->] (bb) to node[above] f (bd); [->] (ca) to node[above] f_ (cc); [->] (aa) to node[above] (bb); [->] (aa) to node[above] (ca); [->] (ca) to node[above] (bb); [->] (ac) to node[above] (bd); [->] (cc) to node[above] (bd); ; [matrix] at (7.0,0) ; ; ; ; ; ; (aa) at (0,0) X'; (ac) at (+,0) Y'; (bd) at (+,) Y_; (be) at (++,) Y,; (ca) at (0,+) X_; (cc) at (+,+) Y_; [->] (aa) to node[above] f' (ac); [->] (ca) to node[above] f_ (cc); [->] (aa) to node[above] (ca); [->] (ac) to node[above] (bd); [->] (cc) to node[above] (bd); [->] (ac) to node[above] (be); [->] (cc) to node[above] (be); [->] (bd) to node[above] (be); ; where f_ is proper and Y_→ Y is a truncated geometric substack. We claim this extends to a diagram of the right-hand form for some Y_. To see this, note that for any finite diagram p: K →^+, the natural map _(K,)(p, Y_) →_(K,)(p, Y) is an isomorphism, where we let Y and Y_ denote the associated constant diagrams. This follows since p is compact in (K,) by <cit.> and Proposition <ref>. The claim at hand follows by taking p to be the subdiagram on the left spanned by X', Y', X_, and Y_. In the right-hand diagram, the vertical maps are closed immersions by Proposition <ref>, hence f' is proper by Proposition <ref>. The other classes of morphisms are treated the same way, using Propositions <ref> and <ref>, and the following observation: if f and g are composable morphisms in such that g ∘ f is of finite cohomological dimension and g is affine (hence g_* conservative and t-exact), then f is of finite cohomological dimension. If f: X → Y is of ind-finite cohomological dimension and there exists an n such that any morphism f' as in (<ref>) is of cohomological dimension ≤ n, then we say f is of finite cohomological dimension. For example, an ind-closed immersion is of finite cohomological dimension (with n= 0), while the projection ^∞ := ⋃^n → is of ind-finite, but not finite, cohomological dimension. Ind-proper morphisms, ind-closed immersions, and morphisms of ind-finite cohomological dimension are stable under composition in . Let X, Y, and Z be ind-geometric stacks, f: X → Y and g: Y → Z ind-proper morphisms, and X ≅ X_ an ind-geometric presentation. By definition there exist truncated geometric substacks Y_→ Y and Z_→ Z such that the restrictions of f and g factor through proper morphisms f_: X_→ Y_ and g_: Y_→ Z_ (note that Y_→ Y may always be extended to a reasonable presentation, but the existence of the desired Z_→ Z doesn't depend on this). But then g_∘ f_ is proper, hence g ∘ f is ind-proper. The other classes of morphisms are treated the same way. Let f: X → Y and g: Y → Z be morphisms of ind-geometric stacks. If g ∘ f and g are ind-proper (resp. ind-closed immersions), then so is f. Let X ≅ X_ be a reasonable presentation. By Proposition <ref> there exist truncated geometric substacks Y_→ Y and Z_→ Z such that the restrictions of f and g factor through morphisms f_: X_→ Y_ and g_: Y_→ Z_. By hypothesis g_∘ f_ and g_ are proper (resp. closed immersions), hence so is f_ by Proposition <ref> (resp. Proposition <ref>). Let f: X → Y be a morphism of geometric stacks. Then f is ind-proper (resp. 
an ind-closed immersion, of ind-finite cohomological dimension) if and only if it is proper (resp. a closed immersion, of finite cohomological dimension). Recall that X ≅τ_≤ n X and X ≅τ_≤ n Y are ind-geometric presentations. Properness of f is equivalent to properness of τ_≤ 0 f <cit.>, hence to properness of each τ_≤ n f, hence to ind-properness. The corresponding claim for closedness is immediate, while for finiteness of cohomological dimension it follows from <cit.> and the fact that (X)^≅(τ_≤ 0 X)^. We will often say a morphism of reasonable ind-geometric stacks is almost ind-finitely presented if it is almost finitely presented (i.e. in the sense of (<ref>)). This is justified by the following result. Let f: X → Y be a morphism of reasonable ind-geometric stacks, and let X ≅_ X_ be a reasonable presentation. The following conditions are equivalent. * The morphism f is almost finitely presented. * For every diagram (<ref>) in which X' → X and Y' → Y are reasonable geometric substacks, the map f' is almost finitely presented. * For every X_ there exists a diagram (<ref>) in which Y_ is a reasonable geometric substack of Y and f_ is almost finitely presented. That (1) implies (2) follows from Propositions <ref> and <ref>, and that (2) implies (3) is immediate. To show (3) implies (1) let A ≅ A_ be a filtered colimit in for some n. Then we have _ X(A_) ≅_, X_(A_) ≅_( X_(A) ×_Y(A)_ Y(A_) ), the first isomorphism using Proposition <ref> and the second Proposition <ref>. But the last expression is then isomorphic to X(A) ×_Y(A)_ Y(A_) by the left exactness of filtered colimits of spaces. Ind-closed immersions have the following closure property. Here ⊂ denotes the 1-full subcategory which only includes ind-closed immersions, similarly for ⊂. Recall that a subcategory is 1-full if for n>1 it includes all n-simplices whose edges belong to the indicated class of morphisms. The canonical functor () → factors through an equivalence () ≅. In particular, ind-geometric stacks are closed under filtered colimits along ind-closed immersions in . By definition is the essential image of (). Let X ≅ X_, Y ≅ Y_ be ind-geometric presentations. By abuse we denote the corresponding objects of () by X and Y as well, so that _()(X, Y) ≅lim___(X_, Y_). Now the natural map lim___(X_, Y_) →lim___(X_, Y_) ≅_(X, Y) is a monomorphism since monomorphisms are stable under limits and filtered colimits (note that the isomorphism on the right follows from Proposition <ref>). It thus suffices to show its image is exactly the subspace of ind-closed immersions, but this follows from the definitions. Note that a closed immersion of non-truncated geometric stacks is also an ind-closed morphism of ind-geometric stacks. It follows from Proposition <ref> that is the essential image of the (not fully faithful) functor () →, where ⊂ is the 1-full subcategory which only includes closed immersions. In other words, we obtain the same class of objects if in Definition <ref> we do not require the X_ to be truncated. The following variant of Proposition <ref> is proved the same way. Here ⊂ denotes the 1-full subcategory which only includes almost ind-finitely presented ind-closed immersions, similarly for ⊂, ⊂, and ⊂. The canonical functor () → factors through an equivalence () ≅, and () → factors through an equivalence () ≅. In particular, reasonable (resp. coherent) ind-geometric stacks are closed under filtered colimits along almost ind-finitely presented ind-closed immersions in . 
§.§ Fiber Products Now we consider fiber products of ind-geometric stacks, and the base change properties of the classes of morphisms considered above. Ind-geometric stacks are closed under finite limits in (and ). Note that is closed under limits in , so the two claims are equivalent. Since contains the terminal object , it suffices to show closure under fiber products <cit.>. Let f: X → Y and h: Y' → Y be morphisms of ind-geometric stacks, and let X' := X ×_Y Y'. Suppose first that X and Y' are truncated geometric stacks, and let Y ≅ Y_ be an ind-geometric presentation. By Proposition <ref> we can factor f and h through Y_ for some . We have X' ≅_≥ X'_, where X'_ := X ×_Y_ Y', by left exactness of filtered colimits in <cit.>. The transition maps are closed immersions of not necessarily truncated geometric stacks by Proposition <ref>. It follows they are ind-closed as morphisms of ind-geometric stacks, hence X' is ind-geometric by Proposition <ref>. Now suppose X ≅ X_ and Y' ≅ Y'_ are ind-geometric presentations. Then as above X' ≅ X_×_Y Y'_ expresses X' as a filtered colimit in of ind-geometric stacks along ind-closed immersions, so again X' is ind-geometric by Proposition <ref>. Ind-proper morphisms (resp. ind-closed immersions, morphisms of ind-finite cohomological dimension) are stable under base change in . Let f: X → Y and h: Y' → Y be morphisms in such that f is ind-proper. If X ≅ X_ is an ind-geometric presentation, we have for all a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; ; ; ; (ab) at (,0) X_'; (ad) at (++,0) Y_'; (ba) at (0,) X'; (bc) at (+,) Y'; (cb) at (,+) X_; (cd) at (++,+) Y_; (da) at (0,++) X; (dc) at (+,++) Y; [->] (ab) to node[above] ϕ' (ad); [->] (ab) to node[above] (ba); [->] (ab) to node[left,pos=.8] ψ' (cb); [->] (ad) to node[above] (bc); [->] (ad) to node[right] ψ (cd); [->] (ba) to node[above] (da); [->] (cb) to node[above,pos=.2] ϕ (cd); [->] (cb) to node[above] (da); [->] (cd) to node[above] (dc); [->] (da) to node[above,pos=.6] f (dc); [-,line width=6pt,draw=white] (ba) to (bc); [->] (ba) to node[above,pos=.75] f' (bc); [-,line width=6pt,draw=white] (bc) to (dc); [->] (bc) to node[right,pos=.2] h (dc); in such that all but the top and bottom faces are Cartesian, Y_→ Y is a truncated geometric substack, and ϕ is proper. Let Y'_≅_ Y'_ be an ind-geometric presentation. Then, letting X'_ := X'_×_Y'_ Y'_, we have X'_≅_ X'_ by left exactness of filtered colimits in . Note that for all the morphisms X'_→ X' and Y'_→ Y' are closed immersions since Y'_→ Y'_ and Y_→ Y are, and in particular Y'_ is a truncated geometric substack of Y'. Now let X' ≅ X'_ be an ind-geometric presentation and fix some . By Proposition <ref> we can choose so that X'_→ X factors through X_, hence so that X'_→ X' factors through X'_. Proposition <ref> then implies that X'_→ X' factors through X'_ for some . This map X'_→ X'_ is a closed immersion since X'_→ X' and X'_→ X' are, while X'_→ Y'_ is proper since it is a base change of ϕ. Thus the composition X'_→ X'_→ Y'_ is proper, hence f' is ind-proper. The other classes of morphisms are treated the same way. Already the self-intersection of the origin in ^∞ illustrates that reasonable ind-geometric stacks are not closed under arbitrary fiber products. To formulate a more limited result, we say a morphism h: X → Y of ind-geometric stacks is of Tor-dimension ≤ n (resp. of finite Tor-dimension) if it is geometric and its base change to any geometric stack is of Tor-dimension ≤ n (resp. 
of finite Tor-dimension) in the sense of Section <ref>. If X and Y are geometric, this is consistent with our previous terminology by Proposition <ref>, which also implies the following stability properties.

Morphisms of finite Tor-dimension are stable under composition and base change in .

We then have the following closure result in the reasonable case.

Let h: X → Y be a morphism of finite Tor-dimension between ind-geometric stacks. If Y is reasonable, so is X. In particular, let the following be a Cartesian diagram of ind-geometric stacks [with top map f': X' → Y', left map h': X' → X, right map h: Y' → Y, and bottom map f: X → Y]. If X, Y, and Y' are reasonable and h is of finite Tor-dimension, then X' is reasonable.

Let Y ≅ Y_ be a reasonable presentation. Each X_ := X ×_Y Y_ is a truncated geometric stack since h is of finite Tor-dimension. We have X ≅ X_ since filtered colimits are left exact in , and this is a reasonable presentation since almost finitely presented closed immersions are stable under base change. The last claim now follows by Proposition <ref>.

The coherent case is more delicate, as even coherent affine schemes are not closed under fiber products <cit.>. We give two positive results in this setting.

Let the following be a Cartesian diagram of ind-geometric stacks [with top map f': X' → Y', left map h': X' → X, right map h: Y' → Y, and bottom map f: X → Y]. Suppose that X and Y are reasonable, that Y' is coherent, and that f is an almost ind-finitely presented ind-closed immersion. Then X' is coherent and f' is an almost ind-finitely presented ind-closed immersion.

Let f: X → Y be an affine morphism of geometric stacks. If (Y)^ is compactly generated, then so is (X)^.

Since f is affine f_*: (X) →(Y) is t-exact and conservative, hence restricts to a conservative functor (X)^→(Y)^. This restriction is continuous and has a left adjoint, the restriction of τ^≥ 0∘ f^*. Thus compact generation of (Y)^ implies that of (X)^ by <cit.> (whose proof applies to compact generation, not just compact projective generation).

Let f: X → Y be an almost finitely presented closed immersion of geometric stacks. If Y is locally coherent (resp. coherent), then so is X.

Let A → Y be a flat cover with A coherent, and f': B → A the base change of f (recall that f is affine by Proposition <ref>). By <cit.> B is almost perfect as an A-module, hence H^n(B) is finitely presented over H^0(A) for all n ≤ 0. Moreover H^0(B) is a quotient of H^0(A) by a finitely generated ideal, so H^0(B) is coherent and the H^n(B) are finitely presented over H^0(B) <cit.>. If (Y)^ is compactly generated, then so is (X)^ by Lemma <ref>.

Suppose first that X and Y' are truncated geometric stacks, and let Y ≅ Y_ be a reasonable presentation. We may assume f and h factor through maps f_: X → Y_, h_: Y' → Y_ for all . Letting X'_ := X ×_Y_ Y', each f'_: X'_→ Y' is an almost finitely presented closed immersion by base change and Proposition <ref>, hence X'_ is coherent by Lemma <ref>. For any ≥ the induced map i'_: X'_→ X'_ is an almost finitely presented closed immersion since f'_∘ i'_≅ f'_ (Proposition <ref>). Since X' ≅ X ×_Y_ Y' in by left exactness of filtered colimits, it follows that X' is coherent by Proposition <ref>.

In general, fix reasonable presentations X ≅ X_ and Y' ≅ Y'_. Then as above X' ≅ X_×_Y Y'_ presents X' as a filtered colimit of coherent ind-geometric stacks along almost ind-finitely presented ind-closed immersions, hence X' is coherent by Proposition <ref>. That f' is an almost ind-finitely presented ind-closed immersion follows from Propositions <ref> and <ref>.

We say an ind-geometric stack X is locally Noetherian if it has a reasonable presentation X ≅ X_ in which each X_ is locally Noetherian (as noted before, this implies X is coherent). The proof of Proposition <ref> extends to show locally Noetherian ind-geometric stacks are closed under almost ind-finitely presented ind-closed immersions. If f: X → Y is a proper, almost finitely presented morphism of geometric stacks and Y is locally Noetherian, it follows that X is as well by base changing f to a Noetherian flat cover A → Y. The proof of Proposition <ref> then extends to show the following result (which will be strengthened in <cit.> once we have developed the notion of a tamely presented morphism).

Let the following be a Cartesian diagram of ind-geometric stacks [with top map f': X' → Y', left map h': X' → X, right map h: Y' → Y, and bottom map f: X → Y]. Suppose that X and Y are reasonable, that Y' is locally Noetherian, and that f is ind-proper and almost ind-finitely presented. Then X' is locally Noetherian.

§ COHERENT AND IND-COHERENT SHEAVES

In this section we consider coherent and ind-coherent sheaves on ind-geometric stacks. We begin with the former, establishing the basic functorialities of ind-proper pushforward and finite Tor-dimension pullback. We then extend our discussion to ind-coherent sheaves, which are needed to discuss adjoint functorialities such as ind-proper !-pullback and sheaf Hom.

The most significant complication compared to the treatment of ind-schemes in <cit.>, <cit.> lies in the definition of ind-coherent sheaves. If X is a geometric stack we define (X) as the left anticompletion of (X), following a construction of <cit.>. This characterizes (X) in terms of a universal property satisfied by bounded colimit-preserving functors out of it. If X is classical, (X) is (the dg nerve of) the category of injective complexes in (X)^, introduced in <cit.>. However, in full generality we do not have (X) ≅((X)), and the latter may be poorly behaved. Nonetheless, our notation (perhaps abusively) reflects that in most cases of interest these categories do coincide. Specifically, they agree when X is coherent (Proposition <ref>), so in this case one may safely define (X) via ind-completion and bypass the discussion of anticompletion. But it is often convenient to have (X) defined in greater generality, since for example the class of coherent ind-geometric stacks (or even of coherent affine schemes) is not closed under fiber products.

§.§ Coherent sheaves

We first define (X) for a reasonable ind-geometric stack X. A posteriori, it will be computed by the formula (X) ≅(X_) in , where X ≅ X_ is any reasonable presentation. In particular, any ∈(X) can be written as ≅ i_*(_) for some and _∈(X_). Suppose that Y is another reasonable ind-geometric stack and that f: X → Y is ind-proper and almost ind-finitely presented.
The pushforward f_*: (X) →(Y) will be defined so that f_*() ≅ j_* f_*(_), where X_ Y_ Y is any factorization of f∘ i_ through a reasonable geometric substack of Y. If h: Y → X is of finite Tor-dimension, the pullback h^*: (Y) →(X) will be defined so that h^*() ≅ i'_* h_^*(_), where i'_ and h_ are defined by base change from i_ and h. Recall from (<ref>) that the corresponding functorialities for coherent sheaves on truncated geometric stacks were packaged as a functor : ()_prop,ftd→. We now let ()_prop,ftd denote the 1-full subcategory of () which only includes correspondences X Y Z such that h is of finite Tor-dimension, f is ind-proper and almost ind-finitely presented, and X and Z are reasonable (hence so is Y by Proposition <ref>). These are indeed stable under composition of correspondences by Propositions <ref>, <ref>, <ref>, and <ref>, and we note that ()_prop,ftd is a full subcategory of ()_prop,ftd. We define a functor : ()_prop,ftd→ by left Kan extending (<ref>) along (^+)_prop,ftd⊂()_prop,ftd. This Kan extension exists by <cit.>. Taking either h or f to be the identity, the values of (<ref>) on the correspondence X Y Z define functors h^*: (X) →(Y) and f_*: (Y) →(Z). The formula (<ref>) is a consequence of the following result. The restriction of to preserves filtered colimits along almost ind-finitely presented ind-closed immersions. To show this we need the following variant of a special case of <cit.>, whose proof we include since the cited statement is significantly more general. In the statement is a category with finite limits, and | and are classes of morphisms in which contain all isomorphisms and are stable under composition and under base change along each other. Recall that if ' ⊂ is a full subcategory such that Y ∈' whenever h: Y → X is in and X ∈', then (')_|, is the 1-full subcategory of () which only includes correspondences X Y Z such that h ∈, f ∈|, and X, Z ∈' (hence Y ∈'). <cit.> Let , |, and be as above, let ”⊂' ⊂ be full subcategories which both satisfy the above condition with respect to , and let be another category. Let F”: (”)_|,→ be a functor, F': (')_|,→ a left Kan extension of F”, and G': '_|→ a left Kan extension of F”|_”_|. Then the canonical transformation G' → F'|_'_| is an isomorphism. It suffices to show (”_|)_/Z→ ((”)_|,)_/Z is left cofinal for all Z ∈', since the canonical morphism G'(Z) → F'(Z) is given by taking colimits over the restrictions of F' to these diagrams. It further suffices to show ((”_|)_/Z)_(h,f)/ is weakly contractible for any object X Y Z of ((”)_|,)_/Z <cit.>. The category ((”_|)_/Z)_(h,f)/ can be identified with the category of diagrams [baseline=(current bounding box.center),thick,>=] ; ; ; ; (aa) at (0,2*) X; (ab) at (.5*,) Y'; (ac) at (,0) Y; (bb) at (,2*) W'; (bc) at (1.5*,) W; (cc) at (2*,2*) Z; [<-] (aa) to node[above left] h (ab); [<-] (ab) to node[above left] ξ' (ac); [<-] (bb) to node[above left] ξ (bc); [->] (ac) to node[above right] ϕ (bc); [->] (ab) to node[above right] ϕ' (bb); [->] (bc) to node[above right] ψ (cc); in which W ∈”, f ≅ψ∘ϕ, ξ and ξ' are isomorphisms, and ϕ and ψ belong to |. Up to contractible choices such a diagram is determined by the subdiagram spanned by the top right edges, hence this category further identifies with ((”_|)_/Z)_f/. Since X ∈”, the hypothesis on h implies that Y ∈”. Thus f itself belongs to (”_|)_/Z, hence ((”_|)_/Z)_f/ has an initial object and is weakly contractible. 
Write := ∩ for the category of truncated geometric stacks and proper, almost finitely presented morphisms, and let and denote the restrictions of to and , respectively. It follows from Proposition <ref> that is the left Kan extension of along the inclusion ⊂. The proof of Proposition <ref> adapts to show that the canonical continuous functor () → identifies () with a subcategory of , and that is the intersection of with () in . In particular, identifies with a full subcategory of (). By the transitivity of left Kan extensions <cit.>, is the restriction to of , the left Kan extension of to (). Now write ⊂ for the subcategory which only includes almost ind-finitely presented ind-closed immersions. Proposition <ref> states that admits filtered colimits and its inclusion into , hence into (), is continuous. But is continuous <cit.>, hence so is its restriction to . The category (X) is small, stable, and idempotent complete for any reasonable ind-geometric stack X. If X is geometric this follows from <cit.>. Given (<ref>), the general case follows from <cit.>. §.§ Anticompletion Before turning to ind-coherent sheaves, we review the notion of anticompleteness from <cit.> in slightly adapted form. Recall that a t-structure on a stable ∞-category is left complete if the natural functor →lim_n ^≥ n is an equivalence, and is right complete if →lim_n ^≤ n is an equivalence. The category := lim_n ^≥ n is called the left completion of . It has a canonical t-structure such that → is t-exact and restricts to an equivalence ^≥ 0^≥ 0 <cit.>. If is another stable ∞-category with a t-structure, an exact functor F: → is bounded if there exist m, n such that F(^≥ 0) ⊂^≥ m and F(^≤ 0) ⊂^≤ n. If and are presentable, we write ^b(, ) ⊂(, ) for the full subcategory of bounded colimit-preserving functors. In the presentable case, a t-structure on is accessible if ^≥ 0 is also presentable, and is compatible with filtered colimits if ^≥ 0 is closed under filtered colimits in . We let denote the ∞-category whose objects are presentable stable ∞-categories equipped with accessible t-structures which are right complete and compatible with filtered colimits, and whose morphisms are bounded colimit-preserving functors. Explicitly, given ∈ we consider the set of cores ^≤ 0 (subcategories closed under small colimits and extensions), partially ordered by ^≤ 0_1 < ^≤ 0_2 if ^≤ 0_1 ⊂^≤ 0_2[n] for some n. These posets are contravariantly functorial under taking preimages along exact functors, and is a full subcategory of the associated Cartesian fibration over . We say ∈ is left anticomplete if composition with → induces an equivalence ^b(, ) ^b(, ) for any ∈. We further let and denote the full subcategories of defined by only including t-structures which are respectively left anticomplete and left complete. We then have the following variant of <cit.>. The inclusion admits a left adjoint, which acts on objects by ↦. The inclusion admits a right adjoint, which we denote by ↦. The restrictions of these adjoints define inverse equivalences between and . Given , ∈, write ^b, ≤ n(, ) ⊂^b(, ) for the full subcategory of functors which take ^≤ 0 to ^≤ n. Since → is t-exact, composition with it induces a functor ^b, ≤ n(, ) →^b, ≤ n(, ). The existence of the desired adjoint follows if this is an equivalence for all n and for all ∈ <cit.>. By shifting we can reduce to n = 0. That composition with → induces an equivalence between right t-exact functors follows from <cit.> (noting that ≅(^≤ 0) <cit.>). 
But this further identifies right t-exact functors which are bounded since ^≥ 0^≥ 0. Now let := (^≤ 0) ∈, where ^≤ 0 is the anticompletion of ^≤ 0 in the sense of <cit.>. We claim is left anticomplete in the sense of Definition <ref>. It suffices to show that for any ∈, the functor ^b, ≤ n(, ) →^b, ≤ n(, ) given by composition with → is an equivalence for all n. By shifting we can reduce to the case n = 0, which follows from <cit.>. The left-exact functor ^≤ 0→^≤ 0 of <cit.> induces a t-exact functor → <cit.>. The existence of the desired adjoint follows if the induced functor ^b(, ) →^b(, ) is an equivalence for all ∈ <cit.>. But this follows from Definition <ref>, given that the left completions of and are equivalent <cit.>. Finally, it follows by adjunction and Definition <ref> that ↦ has fully faithful restriction to , likewise for ↦ and . That these restrictions are inverse equivalences now follows since ∈ implies is the left completion of <cit.>. §.§ Ind-coherent sheaves Recall that if X is a geometric stack, the standard t-structure on (X) is accessible, left and right complete, and compatible with filtered colimits <cit.>. The category of ind-coherent sheaves on a geometric stack X is (X) := (X), the left anticompletion of its category of quasicoherent sheaves. We write Ψ_X: (X) →(X) for the left completion functor. In particular, Ψ_X restricts to an equivalence (X)^+ (X)^+. Unwinding the definitions, we see that (X) is uniquely characterized by the following universal property: for all ∈ we have ^b((X), ) ≅^b((X), ). By Proposition <ref>, ind-coherent sheaves on geometric stacks inherit all bounded, colimit-preserving functorialities of quasicoherent sheaves. Recall from (<ref>) that pullback and pushforward of quasicoherent sheaves were packaged as a functor : ()_fcd,all→. By construction its restriction to ()_fcd;ftd, the 1-full subcategory which only includes correspondences X Y Z in which h is of finite Tor-dimension and f is of finite cohomological dimension, lifts to a functor : ()_fcd;ftd→. We define a functor : ()_fcd;ftd→. by composing (<ref>) with the equivalence of Proposition <ref>. If f: X → Y is a morphism of finite cohomological dimension in , we write f_*: (X) →(Y) for the associated functor. To distinguish it from f_*: (X) →(Y) we sometimes denote them by f_IC* and f_QC*, respectively, but usually we arrange for the meaning to be clear from context. The two are related by a canonical isomorphism Ψ_Y f_IC*≅ f_QC*Ψ_X, and the same remarks apply to the functor h^* associated to a morphism h of finite Tor-dimension. To extend our discussion to ind-geometric stacks, first consider the functor : ()_fcd;ftd→ obtained by restricting (<ref>) to correspondences of truncated geometric stacks and composing with the forgetful functor →. Now let ()_fcd;ftd denote the 1-full subcategory of () which only includes correspondences X Y Z such that h is of finite Tor-dimension and f is of ind-finite cohomological dimension. These are stable under composition of correspondences by Propositions <ref>, <ref>, and <ref>. We define a functor : ()_fcd;ftd→ by left Kan extending (<ref>) along ()_fcd;ftd⊂()_fcd;ftd. This Kan extension exists by <cit.>. Taking either h or f to be the identity, the values of (<ref>) on the correspondence X Y Z define functors h^*: (X) →(Y) and f_*: (Y) →(Z). We have the following variant of Proposition <ref>, which is proved the same way. It implies in particular that (X) ≅(X_) in , where X ≅ X_ is any ind-geometric presentation. 
Here we write ⊂ for the subcategory which only includes morphisms of ind-finite cohomological dimension, identifying it with a subcategory ()_fcd;ftd as before. The restriction of to preserves filtered colimits along ind-closed immersions. §.§ !-pullback and t-structures If a morphism f: X → Y ind-geometric stacks is ind-proper (hence of ind-finite cohomological dimension by Proposition <ref>), we write f^!: (Y) →(X) for the right adjoint of f_*. As with other functors, we write f_QC^! and f_IC^! when the meaning of f^! is not otherwise clear from context. For ind-geometric X we define a standard t-structure on (X) in terms of these adjoints, following <cit.>. If X is an ind-geometric stack, (X) has a t-structure defined by (X)^≥ 0:= ∈(X) such that i^!() ∈(X')^≥ 0 for any truncated geometric substack i: X' → X. This t-structure is accessible, compatible with filtered colimits, right complete, and left anticomplete. If X ≅_ X_ is an ind-geometric presentation, the functors i_* are t-exact and induce equivalences (X)^≤ 0≅_(X_)^≤ 0, (X)^≥ 0≅_(X_)^≥ 0 in . We will use the following standard result, see e.g. <cit.>. Let ≅_ be the colimit of a diagram A →, with F_: _→ the canonical functors and G_: →_ their right adjoints. Then for any X ∈, the objects X ≅ F_ G_ (X) assemble into a diagram whose colimit is X. Let ⊂ denote the subcategory which only includes t-exact functors. Then admits filtered colimits, and these are preserved by the functors to given by ↦, ↦^≤ 0, and ↦^≥ 0. Moreover, the subcategory ⊂ which only includes left anticomplete t-structures is closed under filtered colimits. Let denote the category of Grothendieck prestable ∞-categories and left-exact functors. By <cit.> ↦() and ↦^≤ 0 induce inverse equivalences of and . The claims about ↦ and ↦^≤ 0 now follow since is closed under filtered colimits in <cit.>, and since ↦() preserves small colimits in <cit.>. Let ≅_ be a filtered colimit in . Since the structure functors F_: _→_, F_: _→ are t-exact their right adjoints are left t-exact. Then since ≅lim_ in and since the inclusions _^≥ 0⊂_ are morphisms in , they identify the limit of the _^≥ 0 in as the full subcategory of X ∈ such that F_^R(X) ∈_^≥ 0 for all . This is equivalent to _(F_(Y_), X) ≅ 0 for all and all Y_∈_^<0. But this is equivalent to _(Y, X) ≅ 0 for all Y ∈^<0, since by Lemma <ref> and the previous paragraph ^<0 is generated under small colimits by objects of the form F_(Y_) with Y_∈_^<0. It follows that ^≥ 0≅_^≥ 0 in . Finally, it follows from <cit.> and the discussion above that is closed under all colimits that exist in . Let ⊂ denote the subcategory which only includes ind-closed immersions, similarly for ⊂. By construction the restriction of (<ref>) to factors through , and we write ^t: → for its left Kan extension. Adapting again the proof of Proposition <ref>, we find that ^t(X) ≅^t(X_) in . But by Proposition <ref> (X) is the underlying category of ^t(X). The claims now follow from Lemma <ref> and Proposition <ref> (the Lemma ensures each i_^! is left t-exact, the Proposition ensures i^! is left t-exact for all i: X' → X). Since (X) is left anticomplete, it can be recovered functorially from (X)^+. By contrast, let '(X) denote the colimit of the categories (X_) in . Left completeness is stable under limits rather than colimits in , so '(X) will in general be neither left complete nor left anticomplete. In particular, '(X) is a wilder category in that it cannot be recovered functorially from its bounded below objects. 
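As a concrete illustration of the colimit description above, consider the following minimal sketch; the base field k, the notation for the formal disk, and the explicit name IndCoh(-) for the category of ind-coherent sheaves are ours and are not fixed elsewhere in the text. The formal completion of the origin in the affine line is presented by its finite-order neighborhoods, and the colimit description then reads
\[
\widehat{\mathbb{A}}{}^1_0 \;\cong\; \operatorname*{colim}_n \operatorname{Spec} k[x]/(x^n),
\qquad
\operatorname{IndCoh}\bigl(\widehat{\mathbb{A}}{}^1_0\bigr) \;\cong\; \operatorname*{colim}_n \operatorname{IndCoh}\bigl(\operatorname{Spec} k[x]/(x^n)\bigr),
\]
where the colimit of categories is taken along the pushforward functors induced by the closed immersions \operatorname{Spec} k[x]/(x^n) → \operatorname{Spec} k[x]/(x^{n+1}). Each transition map is a finitely presented closed immersion of Noetherian affine schemes, so this presentation is reasonable (indeed coherent), and the resulting description is the one expected from the ind-scheme literature cited above.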
Let X be a classical ind-geometric stack and X ≅ X_ an ind-geometric presentation by classical geometric stacks. By construction we have (X_)^≅(X_)^ for all , and by Proposition <ref> we have (X)^≅_(X_)^ in . It follows from <cit.> that (X) is the dg nerve of the category of injective complexes in (X)^, a construction first studied in <cit.>. The category (X)^ is, in the case of ind-schemes, the category of ^!-modules considered in <cit.>.

Using t-structures we can address the potential ambiguity in the definition of (X) when X is a non-truncated geometric stack.

Given a non-truncated geometric stack X, the categories (X) defined by Definitions <ref> and <ref> are canonically equivalent, and this equivalence identifies the t-structure of the former with that of Proposition <ref>.

Temporarily denote the two categories by _geom(X) and _ind(X). For each n the morphism i_n: τ_≤ n X → X is affine, hence yields a t-exact functor i_n*: (τ_≤ n X) →_geom(X). By Propositions <ref> and <ref> it suffices to show the induced t-exact functor _ind(X) ≅(τ_≤ n X) →_geom(X) is an equivalence.

Now for all a ≤ b (<ref>) restricts to an equivalence _ind(X)^[a,b]≅_geom(X)^[a,b], since for all n ≥ b - a we have (τ_≤ n X)^[a,b]≅(τ_≤ n X)^[a,b]≅( X)^[a,b]≅(X)_geom^[a,b]. By right completeness of the two t-structures it follows that (<ref>) restricts to an equivalence _ind(X)^≥ a≅_geom(X)^≥ a for any a. But then (<ref>) induces an equivalence of left completions, hence by Proposition <ref> is itself an equivalence since its source and target are left anticomplete.

§.§ Relation to coherent sheaves

We now turn to the relationship between (X) and (X) when X is reasonable.

For any reasonable ind-geometric stack X there is a fully faithful functor (X) (X). It is induced from a canonical natural transformation between the functors ()_prop;ftd→ obtained from (<ref>) and (<ref>). In particular, we have a commutative square [whose rows are the inclusions (X') ⊂(X') and (X) ⊂(X), and whose columns are the pushforwards i_*] for any reasonable geometric substack i: X' → X.

By construction the inclusion (X) ⊂(X) for geometric X enhances to a natural transformation of functors ()_prop;ftd→. The variant of (<ref>) appearing in the statement is the left Kan extension of its restriction to ()_prop;ftd, since following the proof of Proposition <ref> the restrictions of both to are left Kan extended from . The desired natural transformation and the pictured square then follow from the characteristic adjunction of left Kan extensions <cit.>.

Given ∈(X) we can write ≅ i_*(_) for some and _∈(X_), increasing if needed. Identifying , with their images in (X) we then have _IC(X)(, ) ≅_IC(X_)(_, i_^! i_*(_)) ≅_IC(X_)(_, _≥ i_^! i_*(_)) ≅_≥_IC(X_)(_, i_^! i_*(_)) ≅_≥_IC(X_)(i_*(_), i_*(_)). Here the second isomorphism follows from Proposition <ref> and <cit.>, and the third follows since _ and _ are coherent and each i_^! i_* is left t-exact. But since (X_) is a full subcategory of (X_) for all ≥, the last expression is equivalent to _(X)(, ) <cit.>.

Recall from <cit.> that when X is geometric, (X) can be characterized as the full subcategory of bounded, almost compact objects in (X) (i.e. of ∈(X)^+ such that τ^≥ n is compact in (X)^≥ n for all n). The equivalence (X)^+ ≅(X)^+ thus also identifies (X) with the full subcategory of bounded, almost compact objects in (X).
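To make the almost-compactness characterization concrete, here is a hedged special case; the Noetherian hypothesis is an assumption we impose for illustration, and we write Coh and Mod_A explicitly. For an ordinary Noetherian ring A and X = \operatorname{Spec} A, the bounded almost compact objects are exactly the bounded complexes with finitely generated cohomology,
\[
\operatorname{Coh}(\operatorname{Spec} A) \;\simeq\; \bigl\{\, M \in \operatorname{Mod}_A \;:\; H^n(M) \text{ finitely generated for all } n, \ H^n(M) = 0 \text{ for } |n| \gg 0 \,\bigr\},
\]
that is, the classical bounded derived category of finitely generated A-modules.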
We will see in Proposition <ref> that, when X is reasonable, the image of (X) in (X) still consists of bounded, almost compact objects. When X is coherent, the two categories in fact determine one another. If X is a coherent ind-geometric stack then the canonical functor ((X)) →(X) is an equivalence, and induces equivalences ((X)^≤ 0) ≅(X)^≤ 0, ((X)^≥ 0) ≅(X)^≥ 0. For a coherent geometric stack this follows from <cit.>. The ind-geometric case then follows from Propositions <ref>, <ref>, and <ref>, since ind-completion of idempotent-complete categories admitting finite colimits commutes with filtered colimits <cit.>. In light of Proposition <ref>, the notion of coherent ind-geometric stack is in a sense formally dual to the notion of perfect stack considered in <cit.>. §.§ Adjunction of pushforward and pullback Suppose f: X → Y is both of finite Tor-dimension and ind-finite cohomological dimension. Then we have separately defined functors f_IC* and f^*_IC, but the definition does not explicitly entail any direct relationship between them. Nonetheless, the two functors are adjoint in the expected way. Let X and Y be ind-geometric stacks and f: X → Y a morphism which is both of finite Tor-dimension and of ind-finite cohomological dimension. Then f_IC* is right adjoint to f^*_IC. Let , be the left completions of , ∈. Let F: → and G: → be bounded colimit-preserving functors, and let F: → and G: → be their images under the equivalences ^b(, ) ≅^b(, ) and ^b(, ) ≅^b(, ). Then G is right adjoint to F if and only if G is right adjoint to F. If this is the case, the equivalences ^b(, ) ≅^b(, ) and ^b(, ) ≅^b(, ) together identify pairs of a compatible unit and counit for the two adjunctions. We consider the if direction, the other being symmetric. Recall that G being right adjoint to F is equivalent to the existence of unit and counit transformations u: 𝕀_→GF, : FG→𝕀_ and diagrams [baseline=(current bounding box.center),thick,>=] ; ; [matrix] at (0,0) (ba) at (0,0) F; (ab) at (,) FGF; (bc) at (+,0) F; [->] (ba) to node[above left] 𝕀_F·u (ab); [->] (ab) to node[above right] ·𝕀_F (bc); [->] (ba) to node[below] 𝕀_F (bc); ; [matrix] at (6,0) (ba) at (0,0) G; (ab) at (,) GFG; (bc) at (+,0) G.; [->] (ba) to node[above left] u·𝕀_G (ab); [->] (ab) to node[above right] 𝕀_G· (bc); [->] (ba) to node[below] 𝕀_G (bc); ; This is a reformulation of <cit.>, which is equivalent to <cit.> by <cit.>. By hypotheses u and are morphisms in ^b(,) and ^b(,), and we write u and for the corresponding morphisms in ^b(,) and ^b(,). The above diagrams are respectively in ^b(,) and ^b(,), and we claim the corresponding diagrams in ^b(,) and ^b(,) witness u and as the unit and counit of an adjunction between F and G. In other words, we claim the equivalence ^b(,) ≅^b(,) takes FGF, 𝕀_F·u, and ·𝕀_F respectively to FGF, 𝕀_F·u, and ·𝕀_F, similarly for the right diagram. Write Ψ_: → and Ψ_: → for the canonical functors. Then F is characterized in ^b(, ) by the condition Ψ_F≅FΨ_, similarly for G. It follows that Ψ_FGF≅FGFΨ_, which likewise characterizes FGF as the functor corresponding to FGF. Now consider the following diagram, in which all horizontal arrows equivalences. 
exhibiting the triangle identities: the composite F → FGF → F of 𝕀_F·u followed by ·𝕀_F is 𝕀_F, and the composite G → GFG → G of u·𝕀_G followed by 𝕀_G· is 𝕀_G.

This is a reformulation of <cit.>, which is equivalent to <cit.> by <cit.>. By hypotheses u and are morphisms in ^b(,) and ^b(,), and we write u and for the corresponding morphisms in ^b(,) and ^b(,). The above diagrams are respectively in ^b(,) and ^b(,), and we claim the corresponding diagrams in ^b(,) and ^b(,) witness u and as the unit and counit of an adjunction between F and G. In other words, we claim the equivalence ^b(,) ≅^b(,) takes FGF, 𝕀_F·u, and ·𝕀_F respectively to FGF, 𝕀_F·u, and ·𝕀_F, similarly for the right diagram.

Write Ψ_: → and Ψ_: → for the canonical functors. Then F is characterized in ^b(, ) by the condition Ψ_F≅FΨ_, similarly for G. It follows that Ψ_FGF≅FGFΨ_, which likewise characterizes FGF as the functor corresponding to FGF. Now consider the following diagram, in which all horizontal arrows are equivalences.
If h is the immersion of an ind-geometric substack the claim was established during the proof of Proposition <ref>. We pass to right adjoints and identify the isomorphisms f^*R h_*^R ≅ h'^R_* f'^*R and f_*^R h'^R_* ≅ h^R_* f'^R_* respectively with a morphism f^*R→ f'^*R in (Δ^1, ) and a morphism f_*^R→ f'^R_* in ((Δ^1)^, ), performing such identifications without comment in the rest of the proof. In the notation of <cit.>, we want to show these belong to ^(Δ^1, ) and ^((Δ^1)^, ), respectively, and correspond to each other under the equivalence ^(Δ^1, ) ≅^((Δ^1)^, ) of <cit.>. First suppose X and Y are truncated and geometric, let Y' ≅ Y'_ be an ind-geometric presentation, and write f'_: X'_→ Y'_ for the base change of f'. As in the proof of Proposition <ref>, we have f'^*R≅lim f'^*R_ in both ^(Δ^1, ) and (Δ^1, ), hence f^*R→ f'^*R is in ^(Δ^1, ) since each f^*R→ f'^*R_ is. Similarly, f_*^R→ f'^R_*≅lim f'^R_* is in ^((Δ^1)^, ) since each f_*^R→ f'^R_* is, and it corresponds to f^*R→ f'^*R under <cit.> since each f_*^R→ f'^R_* corresponds to f^*R→ f'^*R_. Now let Y ≅ Y_ be an ind-geometric presentation. For any we have a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; ; ; ; (ab) at (,0) X'_; (ad) at (++,0) Y'_; (ba) at (0,) X'; (bc) at (+,) Y'; (cb) at (,+) X_; (cd) at (++,+) Y_; (da) at (0,++) X; (dc) at (+,++) Y; [->] (ab) to node[above] f'_ (ad); [->] (ab) to node[above left, pos=.25] j'_ (ba); [->] (ab) to node[right,pos=.2] h'_ (cb); [->] (ad) to node[below right] j_ (bc); [->] (ad) to node[right] h_ (cd); [->] (ba) to node[left] (da); [->] (cb) to node[above,pos=.25] f_ (cd); [->] (cb) to node[above left, pos=.25] i'_ (da); [->] (cd) to node[below right] i_ (dc); [->] (da) to node[above,pos=.75] (dc); [-,line width=6pt,draw=white] (ba) to (bc); [->] (ba) to node[above,pos=.75] (bc); [-,line width=6pt,draw=white] (bc) to (dc); [->] (bc) to node[right,pos=.2] (dc); with Cartesian faces. We have already shown that the compositions f^*R→ f^*R_→ f'^*R_ and f^R_* → f^R_*→ f'^R_* belong to ^(Δ^1, ) and ^((Δ^1)^, ), and correspond under <cit.>. By closure of these subcategories under limits, and by Proposition <ref>, it suffices to show f'^*R_→ f'^*R_ and f'^R_*→ f'^R_* belong to ^(Δ^1, ) and ^((Δ^1)^, ) and correspond under <cit.>. Consider the following diagram in ((X_), (Y'_)). [baseline=(current bounding box.center),thick,>=] ; ; ; (aa) at (0,0) j^R_ * h^R_* f^*R_; (ab) at (,0) j^R_ * f'^*R_ h'^R_*; (ac) at (+,0) f'^*R_ j'^R_ * h'^R_*; (ba) at (0,) h_*^R i^R_ * f^*R_; (bb) at (,) h_*^R f^*R_ i'^R_ *; (bc) at (+,) f'^*R_ h'^R_* i'^R_ *; [->] (aa) to node[above] ∼ (ab); [->] (ab) to node[above] ∼ (ac); [->] (ba) to node[above] ∼ (bb); [->] (bb) to node[above] ∼ (bc); [->] (aa) to node[below,rotate=90] ∼ (ba); [->] (ac) to node[below,rotate=90] ∼ (bc); We have already shown f'^*R_ and f'^R_*, etc., are adjoint, and the claim at hand is equivalent to the top right arrow being the Beck-Chevalley transformation associated to the isomorphism j'^R_ * f'^R_*≅ f'^R_* j^R_ *. But this follows since we have shown the corresponding claim for the top left and bottom arrows, and since all arrows in the diagram are isomorphisms. § IND-PROPER !-PULLBACK In this section we further study the functor f^!: (Y) →(X) associated to an ind-proper morphism f: X → Y of ind-geometric stacks. We first establish the almost continuity of f^! in the almost ind-finitely presented case (Proposition <ref>). 
This is then used to extend the compatibility between pushforward and !-pullback to the ind-geometric setting (Proposition <ref>). §.§ Almost continuity We begin with the geometric case, where we have the following generalizations of <cit.>. Recall that f^! being almost continuous means its restriction to (Y)^≥ 0 is continuous for all n. If f: X → Y is a proper, almost finitely presented morphism of geometric stacks, then f^!: (Y) →(X) is almost continuous. Let the following be a Cartesian diagram of geometric stacks. [baseline=(current bounding box.center),thick,>=] (a) at (0,0) X'; (b) at (3,0) Y'; (c) at (0,-1.5) X; (d) at (3,-1.5) Y; [->] (a) to node[above] f' (b); [->] (b) to node[right] h (d); [->] (a) to node[left] h'(c); [->] (c) to node[above] f (d); If h is of finite Tor-dimension and f is proper and almost finitely presented, then the Beck-Chevalley map h'^* f^!() → f'^! h^*() in (X') is an isomorphism for all ∈(Y)^+. Proposition <ref> is true when Y' is affine and h is faithfully flat. Let Y_ denote the Cech nerve of h (so Y_0 = Y') and f_k: X_k → Y_k the base change of f. Given a morphism p: i → j in Δ_s, let h_p: Y_j → Y_i denote the associated map and h'_p: X_j → X_i its base change. By Proposition <ref> we can choose n so that f_* takes (X)^≤ 0 to (Y)^≤ n, hence so does each f_k since Y_k is affine over Y. Then τ^≥ 0∘ f_* and f^! restrict to an adjunction between (X^≥ -n) and (Y)^≥ 0, likewise for each f_k. The categories (X_k)^≥ -n and (Y_k)^≥ 0, together with the functors h^*_p, h'^*_p, and τ^≥ 0∘ f_k*, form a diagram Δ^1 ×Δ_s →. By <cit.> the Beck-Chevalley transformation h'^*_p f^!_i → f'^!_j h^*_p restricts to an isomorphism of functors (Y_i)^≥ 0→(X_j)^≥ -n for any p. Since h is faithfully flat we have (X)^≥ -n≅lim_Δ_s(X_i)^≥ -n and (Y)^≥ 0≅lim_Δ_s(Y_i)^≥ 0, and the claim follows from <cit.>. Let ≅_ be a filtered colimit in (Y)^≥ 0, let h: X' ≅ A → Y be a flat cover, and define f': X' → Y', h': X' → X by base change. Since h'^* is continuous and conservative it suffices to show h'^* f^!(_) → h'^* f^!() is an isomorphism. By Lemma <ref> and left boundedness of f^! this is equivalent to f'^! h^*(_) → f'^! h^*() being an isomorphism. Since h^* is t-exact this follows from Lemma <ref>. Let ϕ: U ≅ A → Y and θ: U' ≅ A' → U ×_Y Y' be flat covers. We obtain a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; ; ; ; (ab) at (,0) Z'; (ad) at (++,0) U'; (ba) at (0,) X'; (bc) at (+,) Y'; (cb) at (,+) Z; (cd) at (++,+) U; (da) at (0,++) X; (dc) at (+,++) Y; [->] (ab) to node[above] g' (ad); [->] (ab) to node[above left, pos=.25] ψ' (ba); [->] (ab) to node[right,pos=.2] ξ' (cb); [->] (ad) to node[below right] ψ (bc); [->] (ad) to node[right] ξ (cd); [->] (ba) to node[left] (da); [->] (cb) to node[above,pos=.25] g (cd); [->] (cb) to node[above left, pos=.25] ϕ' (da); [->] (cd) to node[below right] ϕ (dc); [->] (da) to node[above,pos=.75] (dc); [-,line width=6pt,draw=white] (ba) to (bc); [->] (ba) to node[above,pos=.75] (bc); [-,line width=6pt,draw=white] (bc) to (dc); [->] (bc) to node[right,pos=.2] (dc); in which all but the left and right faces are Cartesian. Note that ψ is faithfully flat and ξ is of finite Tor-dimension, since they are the compositions of θ with the base changes of ϕ and h, respectively. Since ψ'^* is conservative, it suffices to show the top left arrow in [baseline=(current bounding box.center),thick,>=] ; ; ; (aa) at (0,0) ψ'^* h'^* f^!(); (ab) at (,0) ψ'^* f'^! h^*(); (ac) at (+,0) g'^!ψ^* h^*(); (ba) at (0,) ξ'^* ϕ'^*f^!(); (bb) at (,) ξ'^* g^! 
ϕ^*(); (bc) at (+,) g'^! ξ^* ϕ^*(); [->] (aa) to node[above] (ab); [->] (ab) to node[above] (ac); [->] (ba) to node[above] (bb); [->] (bb) to node[above] (bc); [->] (aa) to node[below,rotate=90] ∼ (ba); [->] (ac) to node[below,rotate=90] ∼ (bc); is an isomorphism. This follows since the bottom left and top right arrows are isomorphisms by Lemma <ref>, and the bottom right is by <cit.>. If f: X → Y is a proper morphism of geometric stacks, f_QC^! and f_IC^! are left bounded since f_QC* and f_IC* are bounded. Given the equivalences (-)^+ ≅(-)^+ and the isomorphism Ψ_Y f_IC*≅ f_QC*Ψ_X, Propositions <ref> and <ref> imply the following. If f: X → Y is a proper, almost finitely presented morphism of geometric stacks, then f^!: (Y) →(X) is almost continuous. Under the hypotheses of Proposition <ref>, the Beck-Chevalley map h'^* f^!() → f'^! h^*() in (X') is an isomorphism for all ∈(Y)^+. We then have the following ind-geometric extension of Proposition <ref>. Let X and Y be reasonable ind-geometric stacks and f: X → Y an ind-proper, almost ind-finitely presented morphism. If f is of finite cohomological dimension, then f^!: (Y) →(X) is almost continuous. If f is of ind-finite cohomological dimension and X and Y are coherent, then f^! is continuous. The second claim follows since f_* preserves coherence (Proposition <ref>) and (X) and (Y) are compactly generated by coherent sheaves in this case (Proposition <ref>). For the first claim, let X ≅ X_ be a reasonable presentation. By the proof of <cit.>, categories admitting filtered colimits and continuous functors among them form a subcategory of which is closed under limits. Since the restriction i_^!: (X_)^≥ 0→(X_)^≥ 0 is continuous for all ≥ (Corollary <ref>), it follows that i_^! is almost continuous for any . Now let ≅_ be a filtered colimit in (Y)^≥ 0. For any we can refactor f ∘ i_ as i'_∘ f_ for some reasonable geometric substack i'_: Y_→ Y and some proper, almost finitely presented map f_: X_→ Y_. Note that i'^!_ is almost continuous since we may extend i'_ to a reasonable presentation of Y. Since (X) ≅lim(X_) in , it suffices to show the second factor in _ i^!_ f^! (_) → i^!__ f^! (_) → i^!_ f^! (__) is an isomorphism for all . The first factor is an isomorphism since f is of finite cohomological dimension, hence f^! is left bounded, and since i_^! is almost continuous. But i_^! f^! ≅ f^!_ i'^!_, so the composition is an isomorphism by the left t-exactness of i'^!_ and the almost continuity of f^!_ and i'^!_. We can now establish the following claim from Section <ref>. If X is a reasonable ind-geometric stack, the full subcategory (X) ⊂(X) consists of bounded, almost compact objects. Fix a reasonable presentation X ≅_ X_. Given ∈(X) we can write ≅ i_*(_) for some and some _∈(X_). The t-exactness of i_* implies is bounded in (X), while the left boundedness and almost continuity of i_^! (Proposition <ref>) imply is almost compact. §.§ Pushforward and !-pullback We turn next to the commutation of pushforward and ind-proper !-pullback, generalizing the result of <cit.> for ind-schemes of ind-finite type. To simplify the proof we assume the stacks involved are coherent, then discuss how the result may be generalized. We do caution that in the following statement the coherence of X' is an additional hypothesis beyond the coherence of X, Y, and Y'. Let the following be a Cartesian diagram of ind-geometric stacks. 
[baseline=(current bounding box.center),thick,>=] (a) at (0,0) X'; (b) at (3,0) Y'; (c) at (0,-1.5) X; (d) at (3,-1.5) Y; [->] (a) to node[above] f' (b); [->] (b) to node[right] h (d); [->] (a) to node[left] h'(c); [->] (c) to node[above] f (d); Suppose that all stacks in the diagram are coherent, that h is of ind-finite cohomological dimension, and that f is ind-proper and almost ind-finitely presented. Then for any ∈(Y') the Beck-Chevalley map h'_* f'^!() → f^! h_*() is an isomorphism. By Proposition <ref>, all functors in the statement are continuous, hence by coherence of Y it suffices to show the claim for ∈(Y). When X, Y, and Y' are geometric, the functors in the statement are left bounded and compatible with the equivalences (-)^+ ≅(-)^+, hence it suffices to show the map h'_QC* f'^!_QC() → f^!_QC h_QC*() is an isomorphism. But this follows since it is obtained from the isomorphism f'_QC* h'^*_QC h_QC^* f_QC* in ((X),(Y')) by taking right adjoints. Next suppose that f is geometric and that h is the inclusion of a reasonable geometric substack, which we may assume is a term in a reasonable presentation Y ≅ Y_. Writing f_: X_→ Y_ and i'_: X_→ X_ for the base changes of f and i_, the functors f^!_, i^!_, and i'^!_ form a diagram (A ×Δ^1)^→, where A is our index category. Each X_ is coherent by Proposition <ref> and is geometric since f is geometric, hence by the first paragraph i'_* f^!_→ f^!_ i_* is an isomorphism for all ≤. Since (X) ≅(X_) and (Y) ≅(Y_) in (Proposition <ref>), the claim follows by <cit.>. Suppose now that X and Y are geometric (hence so are f and f'), and let Y' ≅ Y'_ be a reasonable presentation. Since ∈(Y') we have ≅ i_*(_) for some  and some _∈(Y'_). We want to show the second factor of h'_* i'_* f^!_(_) → h'_* f'^! i_*(_) → f^! h_* i_*(_) is an isomorphism, where f_: X'_→ Y'_ is the base change of f. Given that X'_ is coherent by Proposition <ref>, this follows since the first factor is an isomorphism by the previous paragraph and the composition is by the first paragraph. Next suppose that that f is the inclusion of a reasonable geometric substack, which again we may assume is a term in a reasonable presentation Y ≅ Y_. Writing h_: Y'_→ Y_ and i'_: Y'_→ Y'_ for the base changes of h and i_, the functors h_*, i_*, and i'_* form a diagram A ×Δ^1 →, where A is our index category. Each Y'_ is coherent by Proposition <ref>, hence by the previous paragraph h_* i'^!_→ h_* i^!_ is an isomorphism for all ≤. Since (Y') ≅(Y'_) and (Y) ≅(Y_) in (Proposition <ref>), the claim follows by <cit.>. In the general case, let X ≅ X_ be a reasonable presentation. For any we can factor f ∘ i_ through some reasonable geometric substack j_: Y_→ Y, and have a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; ; ; ; (ab) at (,0) X'_; (ad) at (++,0) Y'_; (ba) at (0,) X'; (bc) at (+,) Y'; (cb) at (,+) X_; (cd) at (++,+) Y_; (da) at (0,++) X; (dc) at (+,++) Y; [->] (ab) to node[above] f'_ (ad); [->] (ab) to node[above left, pos=.25] i'_ (ba); [->] (ab) to node[right,pos=.2] h'_ (cb); [->] (ad) to node[below right] j'_ (bc); [->] (ad) to node[right] h_ (cd); [->] (ba) to node[left] h' (da); [->] (cb) to node[above,pos=.25] f_ (cd); [->] (cb) to node[above left, pos=.25] i_ (da); [->] (cd) to node[below right] j_ (dc); [->] (da) to node[above,pos=.75] f (dc); [-,line width=6pt,draw=white] (ba) to (bc); [->] (ba) to node[above,pos=.75] (bc); [-,line width=6pt,draw=white] (bc) to (dc); [->] (bc) to node[right,pos=.2] h (dc); with all faces but the top and bottom Cartesian. 
We then have a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; (aa) at (0,0) h'_* i'^!_ f'^!(); (ab) at (,0) i^!_ h'_* f'^!(); (ac) at (+,0) i^!_ f^! h_*(); (ba) at (0,) h'_* f'^!_ j'^!_(); (bb) at (,) f^!_ h_* j'^!_(); (bc) at (+,) f^!_ j^!_ h_* (); [->] (aa) to node[above] (ab); [->] (ab) to node[above] (ac); [->] (ba) to node[above] (bb); [->] (bb) to node[above] (bc); [->] (aa) to node[below,rotate=90] ∼ (ba); [->] (ac) to node[below,rotate=90] ∼ (bc); in (Y_). Since the functors i^!_ determine an isomorphism (Y) ≅lim(Y_) in , it suffices to show the top right arrow is an isomorphism for all . But, given that X'_ and Y'_ are coherent by Proposition <ref>, the top left and bottom right arrows are isomorphisms by the previous paragraph, and the bottom left is by the third paragraph. As mentioned above, the coherence of X, Y, and Y' does not imply that of X', though in motivating applications X, Y, and Y' satisfy stronger hypotheses that do imply this. A basic case is when Y' is locally Noetherian, in which case so is X' by the hypotheses on f. More generally, we will see in <cit.> that if Y' is ind-tamely presented and h is affine, then coherence of X' follows from that of X. One can remove the coherence hypotheses in Proposition <ref> at the cost of adding other hypotheses, and in particular adding boundedness conditions on . The most obvious complication that arises is that f^! need not take (Y)^+ to (X)^+ unless f is of finite (rather than merely ind-finite) cohomological dimension. Beyond imposing this condition on f, we can also note that f^! does in general take (Y)^+_lim to (X)^+_lim, where (Y)^+_lim⊂(Y) is the full subcategory of such that i^!() ∈(Y”)^+ for every truncated geometric substack i: Y”→ Y. Though h_* does not in general take (Y')^+_lim to (Y)^+_lim, it does if we further assume h is formally geometric. Here we say an ind-geometric stack Z is formally geometric if its underlying classical stack τ_≤ 0Z is geometric, and we define formally geometric morphisms by base change to affine schemes. For example, the inclusion of any truncated geometric substack is a formally geometric morphism. With significant additional care, the proof of Proposition <ref> can then be extended to show the following. Let the following be a Cartesian diagram of ind-geometric stacks. [baseline=(current bounding box.center),thick,>=] (a) at (0,0) X'; (b) at (3,0) Y'; (c) at (0,-1.5) X; (d) at (3,-1.5) Y; [->] (a) to node[above] f' (b); [->] (b) to node[right] h (d); [->] (a) to node[left] h'(c); [->] (c) to node[above] f (d); Suppose that X and Y are reasonable, that h is of ind-finite cohomological dimension, and that f is ind-proper and almost ind-finitely presented. Suppose also that f is of finite cohomological dimension (resp. that h is formally geometric). Then for any ∈(Y')^+ (resp. ∈(Y')^+_lim) the Beck-Chevalley map h'_* f'^!() → f^! h_*() is an isomorphism. § EXTERNAL PRODUCTS AND SHEAF HOM Given a geometric stack Y, sheaf Hom from ∈(Y) is defined by the adjunction - ⊗ : (Y) ⇆(Y) : (, -). If Y is a reasonable ind-geometric stack and ∈(Y), there is still a natural functor (, -): (Y) →(Y), despite the absence of a tensor product of ind-coherent sheaves in general. This is because we do still have an external tensor product, and for any ind-geometric X we can consider the adjunction - ⊠ : (X) ⇆(X × Y) : (- ⊠)^R. To make explicit their dependence on X, we will denote these functors by e_,X and e_,X^R. 
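For orientation, here is a minimal affine sketch of this last adjunction; the notation is illustrative only, writing $\mathcal{F}$ for the fixed sheaf on $Y$, $k$ for the base ring, and with all operations derived. If $X = \operatorname{Spec} A$ and $Y = \operatorname{Spec} B$, and $\mathcal{F}$ corresponds to a $B$-module $N$, then on module categories the external product and its right adjoint are computed by
\[
e_{\mathcal{F},X}(M) \;\simeq\; M \otimes_{k} N \ \text{ as an } A \otimes_{k} B\text{-module},
\qquad
e_{\mathcal{F},X}^{R}(P) \;\simeq\; \operatorname{Hom}_{B}(N, P),
\]
the latter carrying the $A$-module structure inherited from $P$. This is the two-variable tensor-Hom adjunction
\[
\operatorname{Hom}_{A \otimes_{k} B}(M \otimes_{k} N, P) \;\simeq\; \operatorname{Hom}_{A}(M, \operatorname{Hom}_{B}(N, P)),
\]
which is the affine case of the formulas recorded below for geometric stacks.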
When X and Y are geometric, we have an isomorphism (, -) ≅ e_,Y^R Δ_Y*, and this formula provides a useful definition of sheaf Hom in the ind-geometric setting. In this section we show that ind-coherent sheaf Hom, and more fundamentally the functor e_,X^R, retains many basic properties from the quasi-coherent setting. In particular, it is almost continuous (Proposition <ref>) and compatible with pushforward (Propositions <ref> and <ref>). We also show the ind-coherent external product is itself compatible with ind-proper !-pullback (Proposition <ref>). A basic technical theme is the close analogy between e_,X^R and ind-proper !-pullback, with various key proofs in this section being parallel to corresponding proofs in Section <ref>. §.§ Quasi-coherent sheaf Hom We begin by establishing the basic facts we will need about quasi-coherent sheaf Hom on geometric stacks. As in Section <ref>, many of these are simple extensions to the geometric setting of existing results about Artin stacks. If Y is a geometric stack and ∈(Y) is almost perfect, then (,-): (Y) →(Y) is left bounded and almost continuous. Let X and Y be geometric stacks, h: X → Y a morphism of finite Tor-dimension, and ∈(Y) an almost perfect sheaf. Then the Beck-Chevalley map h^* (, ) →(h^*(), h^*()) is an isomorphism for all ∈(Y)^+. Proposition <ref> is true when Y is affine. In this case (Y) is compactly generated by perfect sheaves. If ∈(Y) is perfect then ⊗ is almost perfect, hence almost continuity follows by applying <cit.> to τ^≥ n ( -): (Y) →(Y)^≥ n. Left boundedness follows since and hence - are right bounded. Proposition <ref> is true when X and Y are affine. Let Y ≅ A and X ≅ B. Since h is affine h_* conservative, hence it suffices to show h_* h^* (, ) → h_* (h^*(), h^*()) is an isomorphism. Rewriting the second term using the projection formula, one sees this is the specialization of the Beck-Chevalley map θ_M: (, ) M →(, M) in the case M = B. Write for the full subcategory of M ∈_A such that θ_M is an isomorphism. The assignment M ↦θ_M extends to a functor _A →_A^Δ^1, which is exact since the source and target of θ_M are exact in M. It follows that is a stable subcategory closed under retracts, as isomorphisms form such a subcategory of _A^Δ^1. Clearly A ∈, hence contains all perfect A-modules. If M is of Tor-dimension ≤ n, then we can write it as a filtered colimit M ≅_ M_ of perfect A-modules of Tor-dimension ≤ n <cit.>. The claim now follows since tensoring is continuous, since the M_ are uniformly bounded below, and since (,-) is almost continuous by Lemma <ref>. Proposition <ref> is true when X is affine and h is faithfully flat. Let X_ denote the Cech nerve of h (so X_0 = X), and let h_k: X_k → Y denote the natural map. Given a morphism p: i → j in Δ_s, let h_p: X_j → X_i denote the associated map. Choose n so that ∈(Y)^≤ n. Then τ^≥ n( -) and (,-) restrict to an adjunction between (Y)^≥ 0 and (Y)^≥ n, similarly for _k = h_k^*() ∈(X_k)^≤ n. The categories (X_k)^≥ 0 and (X_k)^≥ n, together with the functors h^*_p and τ^≥ n(_k -), form a diagram Δ^1 ×Δ_s →. By Lemma <ref> the Beck-Chevalley transformation h_p^* (_i,-) →(_j, h^*_p(-)) restricts to an isomorphism of functors (X_i)^≥ n→(X_j)^≥ 0 for any p. Since h is faithfully flat we have (X)^≥ m≅lim_Δ_s(X_i)^≥ m for any m, and the claim follows from <cit.>. Again left boundedness follows since is right bounded. Let ≅_ be a filtered colimit in (Y)^≥ 0 and let h: X ≅ A → Y be a flat cover. 
Since h^* is continuous and conservative it suffices to show h^* (,_) → h^* (,) is an isomorphism. By Lemma <ref> this is equivalent to (h^*(),h^*(_)) →(h^*(),h^*()) being an isomorphism. Since h^* is t-exact this follows from Lemma <ref>. Let ϕ: V ≅ A → Y be a flat cover, and consider the diagram [baseline=(current bounding box.center),thick,>=] (a) at (0,0) U; (b) at (3,0) X; (c) at (0,-1.5) V; (d) at (3,-1.5) Y; [->] (a) to node[above] ψ (b); [->] (b) to node[right] h (d); [->] (a) to node[left] g(c); [->] (c) to node[above] ϕ (d); induced by a flat cover ξ: U ≅ B → V ×_Y X. Note that ψ is faithfully flat and g is of finite Tor-dimension, since they are the compositions of ξ with the base changes of ϕ and f, respectively. Since ψ^* is conservative it suffices to show the top left arrow in [baseline=(current bounding box.center),thick,>=] ; ; ; (aa) at (0,0) ψ^* h^* (,); (ab) at (,0) ψ^* (h^*(),h^*()); (ac) at (+,0) (ψ^*h^*(),ψ^*h^*()); (ba) at (0,) g^* ϕ^* (,); (bb) at (,) g^* (ϕ^*(),ϕ^*()); (bc) at (+,) (g^*ϕ^*(),g^*ϕ^*()); [->] (aa) to node[above] (ab); [->] (ab) to node[above] (ac); [->] (ba) to node[above] (bb); [->] (bb) to node[above] (bc); [->] (aa) to node[below,rotate=90] ∼ (ba); [->] (ac) to node[below,rotate=90] ∼ (bc); is an isomorphism. This follows since the bottom left and top right arrows are isomorphisms by Lemma <ref>, and the bottom right is by Lemma <ref>. We note that a different proof of Proposition <ref> is given in <cit.> when h is flat. We also note the more evident compatibility that for any morphism f: X → Y in and any ∈(Y), the isomorphism f^*(- ⊗) ≅ f^*(-) ⊗ f^*() gives rise to an isomorphism of right adjoints (, f_*(-)) ≅ f_* (f^*(), -). §.§ Quasi-coherent external products Let X and Z be geometric stacks, p_X: X × Z → X and p_Z: X × Z → Z the projections, and ∈(Z). By definition the external product e_,X: (X) →(X × Z) and its right adjoint are given by e_,X := p_X^*(-) p_Z^*(), e_,X^R := p_X*(p_Z^*(), - ). Propositions <ref>, <ref>, <ref>, and <ref> immediately imply the following results. If Y and Z are geometric stacks and ∈(Z) is almost perfect, then e_,X^R: (X × Z) →(X) is left bounded and almost continuous. Let X, Y, and Z be geometric stacks, h: X → Y a morphism of finite Tor-dimension, and ∈(Z) an almost perfect sheaf. Then the Beck-Chevalley map h^* e_,Y^R() → e_,X^R (h ×𝕀_Z)^*() is an isomorphism for all ∈(Y × Z)^+. We will be most interested in external products when is an ordinary (Noetherian) ring of finite global dimension, in which case we have the following boundedness condition. Suppose is an ordinary ring of finite global dimension. Then e_,X is left bounded for any X, Z ∈ and ∈(X)^+. In particular, if X and Z are truncated and ∈(X), ∈(Z), then X× Z is truncated and ⊠∈(X × Z). If ∈(Z)^≥ n and is of global dimension d, then we claim e_,X takes (X)^≥ 0 to (X × Z)^≥ n-d. When X and Z are affine, this follows from the fact that (⊠)_≅ ()__ ()_, where (-)_ denotes restriction of scalars to _. If g: U ≅ A → X and h: V ≅ B → Z are flat covers, the general case follows since (g × h)^*e_,X() ≅ e_h^*(), U g^*(). The last claim follows since almost perfect sheaves are closed under pullbacks and tensor products. Suppose is an ordinary ring of finite global dimension. If X and Y are reasonable ind-geometric stacks, then so is X × Y. Proposition <ref> also ensures that external products commute with the various almost continuous functorialities we have considered, as described by the following three results. 
Suppose is an ordinary ring of finite global dimension. Let X, Y, and Z be geometric stacks, f: X → Y a morphism, and ∈(Z)^+. Then the Beck-Chevalley map e_,Y f_*() → (f ×𝕀_Z)_* e_,X() is an isomorphism for all ∈(X)^+. First suppose Y and Z are affine. Then p_Y: Y × Z → Y is affine, hence p_Y* is conservative and it suffices to show p_Y* e_,Y f_*() → p_Y* (f ×𝕀_Z)_* e_,X() f_* p_X* e_,X() is an isomorphism. Using the projection formula for the affine morphisms p_X and p_Y <cit.>, we can identify this with the Beck-Chevalley map f_*() p_Y* p^*_Z() → f_*( f^* p_Y* p^*_Z()). But p_Y* p^*_Z() ≅ q^*_Yq_Z*(), where q_Y, q_Z are the structure maps to , hence p_Y* p^*_Z() is of finite Tor-dimension by our hypotheses on and left boundedness of . The claim now follows from the proof of Lemma <ref>. In the general case, let g: U ≅ A → Y and h: V ≅ B → Z be flat covers. We obtain a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; ; ; ; (ab) at (,0) W × V; (ad) at (++,0) U × V; (ba) at (0,) X × Z; (bc) at (+,) Y × Z; (cb) at (,+) W; (cd) at (++,+) U; (da) at (0,++) X; (dc) at (+,++) Y; [->] (ab) to node[above] (ad); [->] (ab) to node[above left, pos=.25] g' × h (ba); [->] (ab) to node[right,pos=.2] (cb); [->] (ad) to node[above left, pos=.75] (bc); [->] (ad) to node[right] (cd); [->] (ba) to node[left] (da); [->] (cb) to node[above,pos=.25] (cd); [->] (cb) to node[above left, pos=.25] g' (da); [->] (cd) to node[below right] g (dc); [->] (da) to node[above,pos=.75] (dc); [-,line width=6pt,draw=white] (ba) to (bc); [->] (ba) to node[above,pos=.75] (bc); [-,line width=6pt,draw=white] (bc) to (dc); [->] (bc) to node[right,pos=.2] (dc); in which all but the left and right faces are Cartesian. Since (g × h)^* is conservative, it suffices to show the top left arrow in [baseline=(current bounding box.center),thick,>=] ; ; ; (aa) at (0,0) (g × h)^* e_,Y f_*(); (ab) at (,0) (g × h)^* (f ×𝕀_Z)_* e_,X (); (ac) at (+,0) (f' ×𝕀_V)_* (g' × h)^* e_,X (); (ba) at (0,) e_h^*(),U g^* f_*(); (bb) at (,) e_h^*(),U f'_* g'^*(); (bc) at (+,) (f' ×𝕀_V)_* e_h^*(),W g'^*(); [->] (aa) to node[above] (ab); [->] (ab) to node[above] (ac); [->] (ba) to node[above] (bb); [->] (bb) to node[above] (bc); [->] (aa) to node[below,rotate=90] ∼ (ba); [->] (ac) to node[below,rotate=90] ∼ (bc); is an isomorphism. This follows since the bottom left arrow is an isomorphism by Proposition <ref>, the top right is by this and Proposition <ref>, and the bottom right is by the previous paragraph. Suppose is an ordinary ring of finite global dimension. Let X, Y, and Z be geometric stacks, f: X → Y a proper, almost finitely presented morphism, and ∈(Z)^+. Then the Beck-Chevalley map e_,Xf^!() → (f ×𝕀_Z)^!e_, Y () is an isomorphism for all ∈(Y)^+. First suppose Y and Z are affine. Then p_X: X × Z → X is affine, hence p_X* is conservative and it suffices to show the first factor of p_X* e_,X f^!() → p_X* (f ×𝕀_Z)^! e_, Y() → f^! p_Y* e_, Y() is an isomorphism. The second factor is an isomorphism by the first paragraph of the proof of Proposition <ref>, so it suffices to show the composition is. Using the projection formula for the affine morphisms p_X and p_Y <cit.>, we can identify this with the Beck-Chevalley map f^!() f^*p_Y* p^*_Z() → f^!( p_Y* p^*_Z()). But p_Y* p^*_Z() ≅ q^*_Yq_Z*(), where q_Y, q_Z are the structure maps to , hence p_Y* p^*_Z() is of finite Tor-dimension by our hypotheses on and left boundedness of . The claim now follows from <cit.>. In the general case, let g: U ≅ A → Y and h: V ≅ B → Z be flat covers. 
We obtain a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; ; ; ; (ab) at (,0) W; (ad) at (++,0) W × V; (ba) at (0,) X; (bc) at (+,) X × Z; (cb) at (,+) U; (cd) at (++,+) U × V; (da) at (0,++) Y; (dc) at (+,++) Y × Z; [<-] (ab) to node[above] (ad); [->] (ab) to node[above left, pos=.25] g' (ba); [->] (ab) to node[right,pos=.2] f' (cb); [->] (ad) to node[below right] (bc); [->] (ad) to node[right] (cd); [->] (ba) to node[left] f (da); [<-] (cb) to node[above,pos=.25] (cd); [->] (cb) to node[above left, pos=.25] g (da); [->] (cd) to node[below right] g × h (dc); [<-] (da) to node[above,pos=.75] (dc); [-,line width=6pt,draw=white] (ba) to (bc); [<-] (ba) to node[above,pos=.75] (bc); [-,line width=6pt,draw=white] (bc) to (dc); [->] (bc) to node[right,pos=.2] (dc); in which all but the top and bottom faces are Cartesian. Since (g' × h)^* is conservative, it suffices to show the top left arrow in [baseline=(current bounding box.center),thick,>=] ; ; ; (aa) at (0,0) (g' × h)^* e_,X f^!(); (ab) at (,0) (g' × h)^* (f ×𝕀_Z)^! e_,Y (); (ac) at (+,0) (f' ×𝕀_V)^! (g × h)^* e_,Y (); (ba) at (0,) e_h^*(),W g'^* f^!(); (bb) at (,) e_h^*(),W f'^! g^*(); (bc) at (+,) (f' ×𝕀_V)^! e_h^*(),U g^*(); [->] (aa) to node[above] (ab); [->] (ab) to node[above] (ac); [->] (ba) to node[above] (bb); [->] (bb) to node[above] (bc); [->] (aa) to node[below,rotate=90] ∼ (ba); [->] (ac) to node[below,rotate=90] ∼ (bc); is an isomorphism. This follows since the bottom left arrow is an isomorphism by Proposition <ref>, the top right is by this and Proposition <ref>, and the bottom right is by the first paragraph. To express the compatibility of external products and sheaf Hom, we write _,X: (X) →(Z × X) for ⊠ -, the external product with on the left. Given ' ∈(Z'), the associativity isomorphism ' ⊠ (- ⊠) ≅ (' ⊠ -) ⊠, is written as _',X × Ze_,X≅ e_, Z' × X_',X in this notation. We obtain a Beck-Chevalley transformation, which in standard notation would be written as ' ⊠ p_X*(p_Z^*(),-) → p_Z' × X *(p_Z^*(), ' ⊠ -). Suppose is an ordinary ring of finite global dimension. Let X Z be geometric stacks, let ∈(X) be almost perfect, and let ' ∈(Z)^+ be arbitrary. Then the Beck-Chevalley map _',X(,) →(p_X^*(), _',X()) is an isomorphism for all ∈(X)^+. First suppose X and Z are affine. Then p_X: X × Z → X is affine, hence p_X* is conservative and it suffices to show show p_X*_', X(,) → p_X*(p_X^*(), _', X()) (, p_X*_',X()) is an isomorphism. Using the projection formula for the affine morphisms p_X and p_Y <cit.>, we can identify this with the Beck-Chevalley map p_X* p^*_Z(') ⊗(,) →(, p_X* p^*_Z(') ⊗). But p_X* p^*_Z() ≅ q^*_Xq_Z*(), where q_X, q_Z are the structure maps to , hence p_X* p^*_Z() is of finite Tor-dimension by our hypotheses on and left boundedness of . The claim now follows from <cit.>. In the general case, let g: U ≅ A → X and h: V ≅ B → Z be flat covers. Since (h × g)^* is conservative, it suffices to show the top left arrow in [baseline=(current bounding box.center),thick,>=] ; ; ; (aa) at (0,0) (h × g)^* _',X(,); (ab) at (,1.0) (h × g)^* (p^*_X(), _',X()); (ac) at (+,0) ( (h × g)^* p^*_X(), (h × g)^* _',X()); (ba) at (0,) _h^*('),U g^* (,); (bb) at (,) _h^*('),U(g^*(),g^*()); (bc) at (+,) (p^*_U g^*(), _h^*('),Ug^*()); [->] (aa) to node[above] (ab); [->] (ab) to node[above] (ac); [->] (ba) to node[above] (bb); [->] (bb) to node[above] (bc); [->] (aa) to node[below,rotate=90] ∼ (ba); [->] (ac) to node[below,rotate=90] ∼ (bc); is an isomorphism. 
This follows since the bottom left arrow is an isomorphism by Proposition <ref>, the top right is by this and Proposition <ref>, and the bottom right is by the previous paragraph. Suppose is an ordinary ring of finite global dimension. Let X, Z, and Z' be geometric stacks, let ∈(Z) be almost perfect, and let ' ∈(Z')^+ be arbitrary. Then the Beck-Chevalley map _',X e_,X^R() → e_, Z' × X^R _',X × Z() is an isomorphism for all ∈(X × Z)^+. By definition the given map factors as _',X p_X*(p_Z^*(),) → p_Z' × X*_',X × Z(p_Z^*(),) → p_Z' × X *(p_Z^*(), _',X × Z()), hence is an isomorphism by Propositions <ref> and <ref>. §.§ Ind-geometric external products: the coherent case We now consider external products of coherent sheaves on reasonable ind-geometric stacks. The constructions in the remaining sections will make crucial use of Proposition <ref>, hence we assume that is an ordinary ring of finite global dimension for the rest of the paper. Let X ≅ X_ and Z ≅ Z_ be reasonable presentations. We will define - ⊠ -: (X) ×(Z) →(X × Z) so that it fits into a diagram [baseline=(current bounding box.center),thick,>=] (a) at (0,0) (X_) ×(Z_); (b) at (5.0,0) (X_× Z_); (c) at (0,-1.5) (X) ×(Z); (d) at (5.0,-1.5) (X × Z); [->] (a) to node[above] - ⊠ - (b); [->] (b) to node[right] (i_× i_)_* (d); [->] (a) to node[left] i_*× i_*(c); [->] (c) to node[above] - ⊠ - (d); for all , . The behavior of the ind-geometric external product on objects is completely determined by these diagrams, given the behavior of the geometric external product. Following <cit.>, external products are most fully encoded as lax symmetric monoidal structures on sheaf theories. Recall that the restriction : ^→ enhances to a symmetric monoidal functor : ^→, where ^≅ and where denotes the category of presentable, stable _-module categories <cit.>. It follows that : ^→ is itself lax symmetric monoidal <cit.>. By Proposition <ref> we obtain a lax symmetric monoidal structure on the induced functor : ^→, where ⊂ is the 1-full subcategory which only includes morphisms of finite Tor-dimension. Now recall from Definition <ref> that the basic functorialities of coherent sheaves on reasonable ind-geometric stacks were packaged as a functor : ()_prop,ftd→ extending (<ref>). The functor (<ref>) has a canonical lax symmetric monoidal structure extending that of (<ref>). The proof will use the following standard result, see <cit.> or, for a close variant, <cit.>. Here if is a symmetric monoidal category, then ^⊗→_* denotes the associated coCartesian fibration. Let , be symmetric monoidal categories, ' ⊂ a full symmetric monoidal subcategory, and Φ': ' → a lax symmetric monoidal functor. Suppose that admits small limits and that '_/X×'_/Y→'_/X ⊗ Y is right cofinal for all X, Y ∈. Then the right Kan extension Φ: → of Φ' admits a canonical lax symmetric monoidal structure extending that of Φ'. Explicitly, it is the given by the right Kan extension Φ^⊗: ^⊗→^⊗ of Φ'^⊗: '^⊗→^⊗ relative to _*. If instead admits small colimits and '_X/×'_Y/→'_X ⊗ Y/ is left cofinal for all X, Y ∈, then the corresponding claim holds for left Kan extensions. Let ⊂ denote the full subcategory of (geometric, spectral) algebraic spaces. By Proposition <ref> the symmetric monoidal structure on : ^→ extends to a lax symmetric monoidal structure on its right Kan extension : ^→. By <cit.> this is in fact symmetric monoidal. 
By <cit.>, <cit.> this extends to a symmetric monoidal structure functor on the functor : () →, hence to a lax symmetric monoidal structure on the induced functor : () →. Propositions <ref> and <ref> extend this to a lax symmetric monoidal structure on the right Kan extension : ()_alg,all→, where alg is the class of relative algebraic spaces. Using Proposition <ref> this induces a lax symmetric monoidal structure on the restriction : ()_prop,ftd→. The functor (<ref>) is the left Kan extension of this, and again inherits a lax symmetric monoidal structure by Propositions <ref> and <ref>. In particular, the data of the lax symmetric monoidal structure of Proposition <ref> includes the data of a functor (<ref>) for any X and Z, together with the data of the diagrams (<ref>). As a corollary, we obtain an external product of ind-coherent sheaves on coherent ind-geometric stacks. Given Propostion <ref>, and inspecting the proof of Proposition <ref>, we first note that (<ref>) lifts to a lax symmetric monoidal functor to the category of idempotent-complete categories with finite colimits. If X and Z are coherent, it then follows from <cit.> that there is a unique extension of (<ref>) to a continuous functor - ⊠ -: (X) ⊗(Z) →(X × Z), where the left-hand term refers to the tensor product in . To obtain a global statement, note that the full subcategory ()_prop,ftd⊂()_prop,ftd of locally Noetherian ind-geometric stacks is a symmetric monoidal subcategory. We restrict to it the functor : ()_fcd;ftd→ of Definition <ref>. Together with <cit.>, Propositions <ref>, <ref>, and <ref> then immediately imply the following. The restriction of (<ref>) to ()_prop,ftd has a canonical lax symmetric monoidal structure which extends the restriction of the lax symmetric monoidal structure on (<ref>) defined by Proposition <ref>. Note that the obstruction to extending this result to general coherent ind-geometric stacks is that these are not closed under products. However, in <cit.> we will extend it to a certain class of well-behaved coherent ind-geometric stacks, and this will encompass all motivating examples we have in mind (i.e. those appearing in <cit.>). §.§ Ind-geometric external products: the general case If X and Z are coherent ind-geometric stacks and ∈(X), ∈(Z), we defined the external product ⊠∈(X × Z) in the previous section. We now extend this definition to include the case where X and Z are not necessarily coherent, which will in turn require us to assume either or is bounded. As a result, we will not attempt to generalize the full data of a lax symmetric monoidal structure on (-) as in Proposition <ref>, as boundedness hypotheses make even formulating a precise generalization cumbersome. Moreover, the material in this section is not needed for our intended applications, which only concern the coherent setting. But as in previous sections, the poor formal properties of coherent stacks mean that even if one only wants to prove a given result in the coherent setting, it is convenient if the structures involved in the proof are defined in the general ind-geometric setting, where one can make constructions more freely. To start, let X and Z be geometric stacks. The assignment ↦ -⊠ extends to a functor (Z) →((X), (X × Z)). By Proposition <ref> this restricts to a functor (Z)^b →^b((X), (X × Z)), where (Z)^b ⊂(Z) is the full subcategory of bounded sheaves. Since (Z)^b ≅(Z)^b, it follows from the universal property of (-) that we have a canonical diagram of the following form. 
[baseline=(current bounding box.center),thick,>=] (a) at (0,0) (X) ×(Z)^b; (b) at (5.8,0) (X × Z); (c) at (0,-1.5) (X) ×(Z)^b; (d) at (5.8,-1.5) (X × Z); [->] (a) to node[above] - ⊠ - (b); [->] (b) to node[right] Ψ_X × Z (d); [->] (a) to node[left] Ψ_X ×Ψ_Z(c); [->] (c) to node[above] - ⊠ - (d); We want to generalize the top arrow of this diagram to the case where X and Z are ind-geometric. As in the case of coherent sheaves, the behavior of this extension on objects will be determined by its compatibility with pushforward from truncated geometric substacks. To formalize this we will use the fact that the above diagram is functorial in the following sense. Here ()_alg;ftd denotes the 1-full subcategory of () which only includes correspondences X Y Z such that h is of finite Tor-dimension and f is a relative algebraic space. There exists a diagram [baseline=(current bounding box.center),thick,>=] (a) at (0,0) (-) ×(-)^b; (b) at (5.8,0) (- × -); (c) at (0,-1.5) (-) ×(-)^b; (d) at (5.8,-1.5) (- × -); [->] (a) to node[above] - ⊠ - (b); [->] (b) to node[right] Ψ_(- × -) (d); [->] (a) to node[left] Ψ_(-)×Ψ_(-)(c); [->] (c) to node[above] - ⊠ - (d); of functors ()_alg;ftd^× 2→ which specializes to the diagram (<ref>) when evaluated on any X, Z ∈. We postpone the proof of Proposition <ref> while we use it define external products in the desired generality. To simplify the needed constructions we restrict our attention to the case where Z is reasonable and is coherent. Note first that the top arrow of (<ref>) can be encoded as a functor ()_alg;ftd^× 2→Δ^1, where Δ^1 := (Δ^1, ). Restricting its domain and values we obtain a functor ()_alg;ftd×()_prop;ftd→Δ^1 of the form (-) ×(-) →(- × -). For any X, Z ∈ the specialization of this expression preserves small colimits in (X), hence there exists a unique extension to a functor ()_alg;ftd×()_prop;ftd→ ()^Δ^1 of the form (-) ((-)) →(- × -). Next define a functor ()_alg;ftd×()_prop;ftd→ ()^Δ^1 by left Kan extending (<ref>). Here ()_alg;ftd refers to correspondences whose forward morphism is a relative ind-algebraic space in the obvious sense. This extension exists, and moreover is of the same form as (<ref>), since ()^Δ^1 admits small colimits and ()^Δ^1→ ()^× 2 preserves them <cit.>, since the tensor product in preserves small colimits in each variable <cit.>, and since ind-completion of idempotent-complete categories admitting finite colimits commutes with filtered colimits <cit.>. We define a functor ()_alg;ftd×()_prop;ftd→Δ^1 of the form - ⊠ -: (-) ×(-) →(- × -) by taking the functor ()_alg;ftd×()_prop;ftd→ ()^Δ^1 defined above, passing to its underlying Δ^1-valued functor, and then composing with the canonical natural transformation (-) ×(-) →(-) ((-)). When X and Z are reasonable, it follows by construction that the functor - ⊠ -: (X) ×(Z) →(X × Z) is compatible with the coherent external product (<ref>) in the obvious sense. We now return to the proof of Proposition <ref>. First recall from the proof of Proposition <ref> that the functor : ()_alg;ftd→ has a lax symmetric monoidal structure. Part of the data of this is the bottom arrow of (<ref>), which we regard as a functor ()_fcd;ftd^× 2→Δ^1 taking (X, Y) to (X) ×(Y)^b →(X × Y). By construction this functor ()_fcd;ftd^× 2→Δ^1 factors through the category _cc^b,Δ^1 defined as follows. We set _cc := ××, regarding it as a category over × 2 via (, , ) ↦ (×, ). We then write _cc^Δ^1 := Δ^1×_2_cc for the category of tuples (, , , F), where , ∈, ∈, and F: ×→. 
Finally, we write _cc^b,Δ^1 for the full subcategory of such tuples whose associated functor →(, ) takes values in ^b(, ). We will prove the claim by constructing a functor _cc^b,Δ^1→Δ^1 ×Δ^1 which takes the bottom arrow in (<ref>), evaluated on any (X,Y), to the entire diagram. Let us set _aa := ×× and _ac := ××, defining _aa^Δ^1, etc., as above. The main step will be to first construct a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; (aa) at (0,0) _aa^Δ^1; (ab) at (,0) _ac^Δ^1; (ac) at (+,0) _cc^Δ^1; (ba) at (0,) _aa^b,Δ^1; (bb) at (,) _ac^b,Δ^1; (bc) at (+,) _cc^b,Δ^1; [->] (aa) to node[above] (ab); [<-] (ab) to node[above] (ac); [->] (ba) to node[above] ∼ (bb); [<-] (bb) to node[above] ∼ (bc); [right hook->] (ba) to node[left] (aa); [right hook->] (bb) to node[right] (ab); [right hook->] (bc) to node[right] (ac); in which the bottom functors are equivalences, and such that under these equivalences the bottom arrow in (<ref>) (as a _cc^b,Δ^1-valued functor) corresponds to the top arrow and overall composition of (<ref>) (respectively as a _aa^b,Δ^1-valued and a _ac^b,Δ^1-valued functor). Let us explicitly construct the top left functor in (<ref>) and show that it restricts to the equivalence on the bottom left; the construction of the right square is parallel. To do this we introduce the following pair of diagrams. [baseline=(current bounding box.center),thick,>=] ; ; ; ; [matrix] at (0,0) (aa) at (0,0) Δ^2; (ab) at (,0) Λ^2_2; (ac) at (+,0) Δ^1_02; (ba) at (0,) Λ^2_1; (bb) at (,) Δ^0_0 ∪Δ^1_12; (bc) at (+,) Δ^0_0 ∪Δ^0_2; (ca) at (0,+) Δ^1_01; (cb) at (,+) Δ^0_0∪Δ^0_1; (cc) at (+,+) _aa; [->] (aa) to node[above] (ab); [->] (ab) to node[above] (ac); [->] (ba) to node[above] (bb); [->] (bb) to node[above] (bc); [->] (ca) to node[above] (cb); [<-] (cb) to node[above] (cc); [->] (aa) to node[left] (ba); [->] (ba) to node[left] (ca); [->] (ab) to node[right] (bb); [->] (bb) to node[right] (cb); [->] (ac) to node[right] (bc); [<-] (bc) to node[right] (cc); [->] (cc) to node[right] (bb); ; [matrix] at (7.5,0) (aa) at (0,0) ^Δ^2_aac; (ab) at (,0) ^Λ^2_2_aac; (ac) at (+,0) ^Δ^1_02_ac; (ba) at (0,) ^Λ^2_1_aac; (ca) at (0,+) ^Δ^1_01_aa; (cc) at (+,+) _aa; [->] (aa) to node[above] (ab); [->] (ab) to node[above] (ac); [->] (ca) to node[above] (cc); [->] (aa) to node[left] (ba); [->] (ba) to node[left] (ca); [->] (ac) to node[right] (cc); ; Here the subscripts in e.g. Δ^1_02 indicate a particular 1-simplex of Δ^2, and the arrows in the left diagram not involving _aa are induced by restriction. The unit of the localization ↦ on induces a functor →Δ^1 taking to →, and the diagonal arrow out of _aa is the induced functor (, , ) ↦ (×, →). The horizontal and vertical arrows out of _aa thus take (, , ) to (×, ) and (×, ), respectively. In the right diagram, ^Δ^1_01_aa and ^Δ^1_02_ac are respectively the fiber products of the bottom row and right column of the left diagram (which is consistent with our existing notation after forgetting subscripts). The remaining three categories are the fiber products of their counterparts on the left with _aa over Δ^0_0 ∪Δ^1_12. Note that their natural maps to _aa indeed factor through those of ^Δ^1_01_aa and ^Δ^1_02_ac as indicated. We claim the leftmost vertical functors in the right diagram are equivalences. For the top, this follows since it is base changed from its counterpart on the left, which is an equivalence by <cit.>. For the bottom, this follows from the bottom left square of the left diagram being Cartesian. 
Composing the inverse equivalences with the top arrows we obtain a functor ^Δ^1_01_aa→^Δ^1_01_ac as desired. The fiber of this functor over a particular (,, ) ∈_aa is the map (×, )^≅→(×, )^≅ given by composition with → (the superscripts indicate that non-invertible natural transformations are excluded). Since the corresponding map ^b(,) →^b(,) is an equivalence by definition, it follows that ^Δ^1_01_aa→^Δ^1_01_ac restricts to a functor ^b,Δ^1_01_aa→^b,Δ^1_01_ac which in turns restricts to an isomorphism of fibers over _aa. We recall that Δ^1 is a bifibration over 2 <cit.>. It follows from the definitions that bifbrations are stable under pullback along products of maps and under restriction to full subcategories. In particular, ^b,Δ^1_01_aa and ^b,Δ^1_01_ac are bifibrations over _aa, factored as the product of × and . It now follows from <cit.> and the previous paragraph that ^b,Δ^1_01_aa→^b,Δ^1_01_ac is an equivalence. To complete the proof, note that by construction the bottom row of (<ref>) factors as _aa^b,Δ^1_01_aac^b,Δ^2_ac^b,Δ^1_02_acc^b,Δ^2_cc^b,Δ^1_12. Here we again use subscripts to indicate edges in Δ^2, _acc^Δ^2 is the evident counterpart of _aac^Δ^2, and _acc^Δ^2, _aac^Δ^2 are the full subcategories corresponding to _aa^b,Δ^1. The middle terms in this factorization map to Δ^2, Δ^1, and Δ^2 compatibly with the relevant maps, hence we obtain a functor _cc^b,Δ^1→Δ^2×_Δ^1_02Δ^2≅Δ^1 ×Δ^1. §.§ Ind-geometric external products: properties Let X and Z be ind-geometric stacks with Z reasonable, and suppose ∈(Z). We again write e_, X for the induced functor - ⊠: (X) →(X × Z). We now extend a few basic results about e_,X from the geometric setting, in particular its compatibility with proper !-pullback. First we record more explicitly the naturality of e_,X implied by Definition <ref>. If f: X → Y is a relative ind-algebraic space (for example, an ind-affine morphism such as the inclusion of a truncated geometric substack), g: Z' → Z is ind-proper and almost ind-finitely presented, and ' ∈(Z'), then we have an isomorphism e_g_*('),Y f_* ≅ (f × g)_* e_',X. Likewise if h: X → Y and g: Z' → Z are of finite Tor-dimension and ∈(Z), then we have an isomorphism e_g^*(),X h^* ≅ (h × g)^* e_,Y. Let X and Z be ind-geometric stacks such that Z is reasonable, and suppose ∈(Z). Then e_,X is bounded. Fix an ind-geometric presentation X ≅ X_ and write ≅ i_*('), where i: Z' → Z is a reasonable geometric substack and ' ∈(Z') ∩(Z')^[m,n]. If _∈(X_)^≥ 0 for some , then by t-exactness of (i_× i)_* and the proof of Proposition <ref> we have e_,Xi_*(_) ≅ (i_× i)_* e_',X_(_) ∈(X)^≥ m', where m' is m minus the global dimension of . Similarly, if _∈(X_)^≤ 0 then e_,Xi_*(_) ∈(X)^≤ n. Given that ≅ i_* i_^!() for any ∈(X) (Proposition <ref> and Lemma <ref>), it follows that e_,X takes (X)^≥ 0 to (X × Z)^≥ m' since i_* i_^! is left t-exact and (X × Z)^≥ m' is closed under filtered colimits. If ∈(X)^≤ 0, then we additionally have ≅ i_*τ^≤ 0 i_^!(). This again follows from Proposition <ref> and Lemma <ref>, given that τ^≤ 0 i_^! is right adjoint to the restriction i_*: (X_)^≤ 0→(X)^≤ 0. It now follows that e_,X takes (X)^≤ 0 to (X × Z)^≤ n. We now extend Proposition <ref> to the ind-geometric setting. For simplicity we give a proof assuming coherence hypotheses, then indicate how the statement may be generalized. Let X, Y, and Z be ind-geometric stacks with X, Y, X × Z, and Y × Z coherent, and let f: X → Y be an ind-proper, almost ind-finitely presented morphism. 
Then for all ∈(Z) and ∈(Y) the Beck-Chevalley map e_,Xf^!() → (f ×𝕀_Z)^!e_, Y () is an isomorphism. First suppose X, Y, and Z are truncated and geometric. Then e_,Xf^!() and (f ×𝕀_Z)^!e_, Y are continuous and (Y) is compactly generated, so it suffices to consider ∈(Y). But the restrictions of all functors involved to left bounded subcategories commute with the equivalences (-)^+ ≅(-)^+, so the claim follows from Proposition <ref>. Now suppose that Z is truncated and geometric, and that f is the inclusion of a reasonable geometric substack, which we may assume is a term in a reasonable presentation Y ≅ Y_. By construction the functors e_,Y_, i_*, and (i_×𝕀_Z)_* form a diagram A ×Δ^1 →, where A is our index category. Each Y_× Z is coherent by Proposition <ref>, hence by the previous paragraph e_,Y_i_^! → (i_×𝕀_Z)^!e_, Y_ is an isomorphism for all ,. Since (Y) ≅(Y_) and (Y × Z) ≅(Y_× Z) in (Proposition <ref>), the claim follows by <cit.>. Still assuming Z is truncated and geometric, let X ≅ X_ be a reasonable presentation. For any we can find a reasonable geometric substack j_: Y_→ Y fitting into a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; ; ; ; (ab) at (,0) X_; (ad) at (++,0) X_× Z; (ba) at (0,) X; (bc) at (+,) X × Z; (cb) at (,+) Y_; (cd) at (++,+) Y_× Z; (da) at (0,++) Y; (dc) at (+,++) Y × Z; [<-] (ab) to node[above] (ad); [->] (ab) to node[above left, pos=.25] i_ (ba); [->] (ab) to node[left,pos=.8] f_ (cb); [->] (ad) to node[below right] (bc); [->] (ad) to node[right] (cd); [->] (ba) to node[left] f (da); [<-] (cb) to node[above,pos=.25] (cd); [->] (cb) to node[above left, pos=.25] j_ (da); [->] (cd) to node[below right] (dc); [<-] (da) to node[above,pos=.75] (dc); [-,line width=6pt,draw=white] (ba) to (bc); [<-] (ba) to node[above,pos=.75] (bc); [-,line width=6pt,draw=white] (bc) to (dc); [->] (bc) to node[right,pos=.2] (dc); in which all but the left and right faces are Cartesian. We then have a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; (aa) at (0,0) e_,X_ i^!_ f^!(); (ab) at (,0) (i_×𝕀_Z)^! e_,X f^!(); (ac) at (+,0) (i_×𝕀_Z)^! (f ×𝕀_Z)^! e_,Y(); (ba) at (0,) e_,X_ f^!_ j^!_(); (bb) at (,) (f_×𝕀_Z)^! e_,Y_ j^!_(); (bc) at (+,) (f_×𝕀_Z)^! (j_×𝕀_Z)^! e_,Y (); [->] (aa) to node[above] (ab); [->] (ab) to node[above] (ac); [->] (ba) to node[above] (bb); [->] (bb) to node[above] (bc); [->] (aa) to node[below,rotate=90] ∼ (ba); [->] (ac) to node[below,rotate=90] ∼ (bc); in (X_× Z). Since the functors (i_×𝕀_Z)^! determine an isomorphism (X × Z) ≅lim(X_× Z) in , it suffices to show the top right arrow is an isomorphism for all . But, given that X_× Z and Y_× Z are coherent by Proposition <ref>, the bottom right and top left arrows are isomorphisms by the previous paragraph, and the bottom left is by the first paragraph. Now let Z be ind-geometric, and write ≅ i_*(') for some reasonable geometric substack i: Z' → Z and some ' ∈(Z'). By (<ref>) we have isomorphisms e_,Y≅ (𝕀_Y × i)_* e_', Y and e_,X≅ (𝕀_X × i)_* e_', X. Thus we are trying to show the composition (𝕀_X × i)_* e_', Xf^! → (𝕀_X × i)_* (f ×𝕀_Z')^! e_', Y→ (f ×𝕀_Z)^! (𝕀_Y × i)_* e_', Y is an isomorphism. But, given that X × Z' and Y × Z' are coherent by Proposition <ref>, the first factor is an isomorphism by the previous paragraph, and the second factor is by Proposition <ref>. With more care, one can show the following weaker result in the general case (recall the definition of (-)^+_lim from Section <ref>). 
Let X, Y, and Z be ind-geometric stacks with Z reasonable, let f: X → Y be an ind-proper, almost ind-finitely presented morphism. Then e_, Y takes (Y)^+_lim to (Y × Z)^+_lim for all ∈(Z), and for all ∈(Y)^+_lim the Beck-Chevalley map e_,Xf^!() → (f ×𝕀_Z)^!e_, Y () is an isomorphism. We have the following companion to Proposition <ref>, which says that Definition <ref> behaves as expected on non-truncated geometric stacks. Let X and Z be geometric stacks such that Z is reasonable, and suppose ∈(Z). Then we have an isomorphism Ψ_X × Ze_,X≅ e_Ψ_Z(),XΨ_X of functors (X) →(X × Z). Let X ≅ X_ , Z ≅ Z_ be respectively an ind-geometric and a reasonable presentation, and write ≅ i_*(_) for some and _∈(Z_). By Proposition <ref> the functors e_i_*(_),X_ form a filtered system in (^L)^Δ^1 which lifts to a filtered system in (^L)^Δ^1_/e_Ψ_Z(),X (i.e. given termwise by taking e_i_*(_),X_ to the diagram realizing the isomorphism (i_× i_)_*,QCΨ_X_× Z_ e_i_*(_),X_≅ e_Ψ_Z(),X i_*,QCΨ_X_). By Proposition <ref> and <cit.> its colimit in (^L)^Δ^1_/e_Ψ_Z(),X is a diagram whose top and bottom arrows are e_,X and e_Ψ_Z(),X. But the vertical arrows in this diagram are t-exact and induce equivalences of left completions by Proposition <ref>, and by t-exactness of the Ψ_(-) functors and the pushforward functors in the filtered system, hence they are isomorphic to Ψ_X and Ψ_X × Z. §.§ Ind-coherent sheaf Hom Let X and Z be ind-geometric stacks such that Z is reasonable, and suppose ∈(Z). By construction e_, X has a right adjoint e_,X^R: (X × Z) →(X). When X = Z, we define ind-coherent sheaf Hom via the formula (, - ) := e_,X^R Δ_X*: (X) →(X). In this section we generalize various basic properties about quasi-coherent sheaf Hom on geometric stacks to this setting, in particular its compatibility with pushforward (Propositions <ref> and <ref>) and external products (Proposition <ref>). We begin with the following justification of definition (<ref>). Let X and Z be ind-geometric stacks such that Z is reasonable, and let ∈(Z). Then e_,X^R is left bounded. If X and Z are geometric, the Beck-Chevalley map Ψ_X e_,X^R() → e_Ψ_Z(),X^R Ψ_X × Z () is an isomorphism for all ∈(X × Z)^+, and the induced map Ψ_X (,) →(Ψ_X(),Ψ_X()) is an isomorphism for all ∈(X)^+. Since e_, X is bounded (Proposition <ref>), e_,X^R is left bounded and the two functors restrict to an adjunction between (X)^+ and (X × Z)^+. The analogous statement holds for e_Ψ_Z(), X and e_Ψ_Z(),X^R, and the second claim follows and since Ψ_(-) restricts to an equivalence (-)^+ (-)^+ and since Ψ_X × Z e_,X≅ e_Ψ_Z(),XΨ_X (Proposition <ref>). The third follows since Δ_X* is also compatible with the Ψ_(-) functors, and since we have an isomorphism - Ψ_Z() ≅Δ_X^* e_Ψ_Z(),X of functors (X) →(X). Propositions <ref>, <ref>, and <ref> immediately imply the following. If Y is a geometric stack and ∈(Y) is coherent, then (,-): (Y) →(Y) is left bounded and almost continuous. Let X and Y be geometric stacks, h: X → Y a morphism of finite Tor-dimension, and ∈(Y). Then the Beck-Chevalley map h^* (, ) →(h^*(), h^*()) is an isomorphism for all ∈(Y)^+. Next we observe the following naturality properies of e_,X^R. Suppose that f: X → Y and g: Z' → Z are ind-proper morphisms of ind-geometric stacks, that Z' and Z are reasonable, and that g is almost ind-finitely presented. Then if ∈(Z') and ≅ g_*('), the isomorphism (<ref>) yields an isomorphism of right adjoints e_',X^R (f × g)^! ≅ f^! e_,Y^R. 
Similarly, suppose that h:X → Y and g: Z' → Z are of finite Tor-dimension as well as of ind-finite cohomological dimension. Then if ∈(Z) and ' ≅ g^*(), the isomorphism (<ref>) yields (implicitly using Proposition <ref>) an isomorphism of right adjoints e_,Y^R (h × g)_* ≅ h_* e_',X^R. We now consider the generalizations of Propositions <ref> and <ref>. Let X and Z be reasonable ind-geometric stacks, and suppose ∈(Z). Then e_,X^R is almost continuous. If X and X × Z are coherent, then e_,X^R is continuous. The second claim follows since by construction e_,X preserves coherence. When X and Z are truncated geometric stacks the first claim holds by Propositions <ref> and <ref>. Still assuming Z is truncated and geometric, let X ≅ X_ be a reasonable presentation and ≅_ a filtered colimit in (X × Z)^≥ 0. Since (X) ≅lim(X_) in , it suffices to show the second factor in _ i_^! e_,X^R(_) → i_^! _ e_,X^R(_) → i_^! e_,X^R( __) is an isomorphism for all . The first factor is an isomorphism since e_,X^R is left bounded (Proposition <ref>) and i_^! is almost continuous (Proposition <ref>). But i_^! e_,X^R ≅ e_,X_^R (i_×𝕀_Z)^! by (<ref>), so the composition is an isomorphism by the left t-exactness of (i_×𝕀_Z)^! and the almost continuity of e_,X_^R and (i_×𝕀_Z)^!. Finally, suppose Z ≅ Z_ is a reasonable presentation, and write ≅ i_*(_) for some  and _∈(Z_). By (<ref>) we have e_,X^R ≅ e__,X^R i_^!, and the claim follows since i_^! is left t-exact and since e__,X^R and i_^! are almost continuous. Let X be a reasonable (resp. coherent) ind-geometric stack and ∈(X). Then (,-): (X) →(X) is almost continuous (resp. continuous). Follows from Proposition <ref> and continuity of Δ_X*. If X, Y, and Z are geometric stacks and ∈(Z), then for any f: X → Y the isomorphism (f×𝕀_Z)^* e_, Y≅ e_,X f^* of functors (Y) →(X × Z) yields an isomorphism f_* e_X,^R ≅ e_Y,^R (f×𝕀_Z)_* of right adjoints (X × Z) →(Y). This is an external counterpart of the isomorphism (<ref>). Now suppose that X, Y, and Z are ind-geometric and f: X → Y is of ind-finite cohomological dimension. In this setting f_*: (X) →(Y) typically does not have a left adjoint. We can still define an analogue of (<ref>), however, by considering the Beck-Chevalley transformation f_* e_,X^R → e_,Y^R (f×𝕀_Z)_* associated to the isomorphism (f×𝕀_Z)_* e_,X≅ e_,Yf_* in ((X) , (Y × Z)). In the geometric case one can check that if we restrict to bounded below subcategories, (<ref>) is identified with the isomorphisms (<ref>) under the equivalences (-)^+ ≅(-)^+. Let X, Y, and Z be coherent ind-geometric stacks such that X × Z and Y × Z are coherent, and let f: X → Y be a morphism of ind-finite cohomological dimension. Then for any ∈(Z) and ∈(X × Z), the Beck-Chevalley map f_* e_,X^R() → e_,Y^R (f ×𝕀_Z)_*() is an isomorphism. By Proposition <ref>, all functors in the statement are continuous, hence by coherence of X × Z it suffices to show the claim for ∈(X × Z). When X, Y, and Z are geometric and Z is truncated the claim then follows from (<ref>), since these functors are also left bounded (Proposition <ref>) and compatible with the equivalences (-)^+ ≅(-)^+. Next suppose f is the inclusion of a term in a reasonable presentation Y ≅ Y_, still assuming Z is truncated and geometric. By construction the functors e_,Y_^R, i^!_, and (i_×𝕀_Z)^! form a diagram (A ×Δ^1)^→, where A is our index category. Each Y_× Z is coherent by Proposition <ref>, hence by the first paragraph i_* e_,Y_^R → e_,Y_^R (i_×𝕀_Z)_* is an isomorphism for all ≤. 
Since (Y) ≅(Y_) and (Y × Z) ≅(Y_× Z) in (Proposition <ref>), the claim follows by <cit.>. Now let X ≅ X_ be a reasonable presentation, supposing again that Y and Z are geometric and Z is truncated. Since ∈(X × Z) we have ≅ (i_×𝕀_Z)_*(_) for some  and some _∈(X_× Z). We want to show the second factor of f_* i_* e_,X_^R(_) → f_* e_,X^R (i_×𝕀_Z)_*(_) → e_,Y^R (f ×𝕀_Z)_* (i_×𝕀_Z)_*(_) is an isomorphism. Given that X_× Z is coherent by Proposition <ref>, this follows since the first factor is by the previous paragraph and the composition is by the first paragraph. In the general case, let Y ≅ Y_ be a reasonable presentation, and write ≅ j_*(') for some reasonable geometric substack j: Z' → Z and some ' ∈(Z'). For any we have a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; ; ; ; (ab) at (,0) X_; (ad) at (++,0) X_× Z'; (ba) at (0,) X; (bc) at (+,) X × Z; (cb) at (,+) Y_; (cd) at (++,+) Y_× Z'; (da) at (0,++) Y; (dc) at (+,++) Y × Z; [<-] (ab) to node[above] (ad); [->] (ab) to node[above left, pos=.25] i'_ (ba); [->] (ab) to node[right,pos=.2] f_ (cb); [->] (ad) to node[below right] (bc); [->] (ad) to node[right] (cd); [->] (ba) to node[left] f (da); [<-] (cb) to node[above,pos=.25] (cd); [->] (cb) to node[above left, pos=.25] i_ (da); [->] (cd) to node[below right] (dc); [<-] (da) to node[above,pos=.75] (dc); [-,line width=6pt,draw=white] (ba) to (bc); [<-] (ba) to node[above,pos=.75] (bc); [-,line width=6pt,draw=white] (bc) to (dc); [->] (bc) to node[right,pos=.2] (dc); with all faces but the top and bottom Cartesian. We have a diagram [baseline=(current bounding box.center),thick,>=] ; ; ; (aa) at (0,0) f_* i'^!_ e_,X^R(); (ab) at (,0) i^!_ f_* e_,X^R(); (ac) at (+,0) i^!_ e_,Y^R (f ×𝕀_Z)_*(); (ba) at (0,) f_* e_',X_^R (i'_× j)^! (); (bb) at (,) e_',Y_^R (f_×𝕀_Z')_* (i'_× j)^!(); (bc) at (+,) e_',Y_^R (i_× j)^! (f ×𝕀_Z)_*(); [->] (aa) to node[above] (ab); [->] (ab) to node[above] (ac); [->] (ba) to node[above] (bb); [->] (bb) to node[above] (bc); [->] (aa) to node[below,rotate=90] ∼ (ba); [->] (ac) to node[below,rotate=90] ∼ (bc); in (Y_), where the vertical isomorphisms are given by (<ref>). Since the functors i^!_ determine an isomorphism (Y) ≅lim(Y_) in , it suffices to show the top right arrow is an isomorphism for all . Proposition <ref> implies that X_, X_× Z', and Y_× Z' (note that e.g. i_× j factors as (i_×𝕀_Z) ∘ (𝕀_Y_× j)). The claim then follows since the top left and bottom right arrows are isomorphisms by Proposition <ref> and the bottom left is by the previous paragraph. As with the analogous Proposition <ref>, the coherence hypotheses in Proposition <ref> simplify the proof considerably but are not entirely essential. With more work one can show the following extension. Let X, Y, and Z be ind-geometric stacks such that Y and Z are reasonable, and let f: X → Y be a morphism of ind-finite cohomological dimension. Then for any ∈(Z) and ∈(X × Z)^+ the Beck-Chevalley map f_* e_,X^R() → e_,Y^R (f ×𝕀_Z)_*() is an isomorphism. Next recall that if f: X → Y is a proper morphism of geometric stacks and ∈(X), the projection isomorphism f_*( f^*(-)) ≅ f_*() - yields an isomorphism f_* (, f^!(-)) ≅(f_*(), -) of right adjoints. Suppose instead that X and Y are reasonable ind-geometric stacks and that f is ind-proper and almost ind-finitely presented. Again f_*: (X) →(Y) will typically not have a left adjoint, but we can define a transformation f_* (, f^!(-)) →(f_*(), -) of functors (Y) →(Y) as the composition f_* e_,X^R Δ_X * f^! → e_,Y^R (f ×𝕀_X)_*Δ_X * f^! 
→ e_f_*(),Y^R Δ_Y* of Beck-Chevalley maps. Note that we implicitly use the isomorphism e_f_*(),Y^R ≅ e_,Y^R (𝕀_Y × f)^! of (<ref>). In the geometric case one can check that if we restrict to left bounded subcategories, (<ref>) is identified with (<ref>) under the equivalences (-)^+ ≅(-)^+. Let X and Y be coherent ind-geometric stacks such that X × X, X × Y, and Y × Y are coherent, and let f: X → Y be an ind-proper, almost ind-finitely presented morphism. Then for any ∈(X) and ∈(Y) the natural map f_*(, f^!()) →(f_*(), ) is an isomorphism. Follows from Propositions <ref> and <ref>. Similarly, conditioned on proofs of Propositions <ref> and <ref> one obtains the following. Let X and Y be reasonable ind-geometric stacks, and let f: X → Y be an ind-proper, almost ind-finitely presented morphism of finite cohomological dimension. Then for any ∈(X) and ∈(Y)^+ the natural map f_*(, f^!()) →(f_*(), ) is an isomorphism. At the level of objects, the extension of sheaf Hom from the geometric to the ind-geometric setting is uniquely determined by these results, since we can always write ∈(X) as i_*(') for some reasonable geometric substack i: X' → X and ' ∈(X') (note that i is of cohomological dimension zero). With Proposition <ref> in hand, we can also establish the ind-geometric extension of Proposition <ref>. Let X, Z, and Z' be reasonable ind-geometric stacks such that X, X × Z, Z' × X, and Z' × X × Z are coherent. Then for all ∈(Z), ' ∈(Z'), and ∈(X × Z), the Beck-Chevalley map _',X e_,X^R() → e_, Z' × X^R _',X × Z() is an isomorphism. First suppose X, Z, and Z' are truncated and geometric. Then _',X e_,X^R() and e_, Z' × X^R _',X × Z are continuous (Proposition <ref>) and (X × Z) is compactly generated, so it suffices to consider ∈(X × Z). But the restrictions of all functors involved to left bounded subcategories commute with the equivalences (-)^+ ≅(-)^+ (Proposition <ref>), so the claim follows from Proposition <ref>. Still assuming Z and Z' are truncated and geometric, let X ≅ X_ be a reasonable presentation. For any we have a diagram whose top row is _',X_ i^!_ e_,X^R → (𝕀_Z'× i_)^! _',X e_,X^R → (𝕀_Z'× i_)^! e_, Z' × X^R _',X × Z, whose bottom row is _',X_ e_,X_^R (i_×𝕀_Z)^! → e_,Z' × X_^R_',X_× Z(i_×𝕀_Z)^! → e_,Z' × X_^R(𝕀_Z'× i_×𝕀_Z)^! _', X × Z, and in which the vertical isomorphisms are given by (<ref>). Since the functors (𝕀_Z'× i_)^! determine an isomorphism (Z' × X) ≅lim(Z' × X_) in , it suffices to show the top right arrow is an isomorphism for all . But, given that X_× Z, Z' × X_, and Z' × X_× Z are coherent by Proposition <ref>, the bottom right and top left arrows are isomorphisms by Proposition <ref>, and the bottom left is by the first paragraph. Now let Z and Z' be ind-geometric, and write ≅ i_*(_), ' ≅ i_*('_) for some reasonable geometric substacks i_: Z_→ Z, i_: Z'_→ Z' and some _∈(Z_), '_∈(Z'_). Using (<ref>) and (<ref>) the map in the statement factors as (i_×𝕀_X)_* _'_, X e__,X^R (𝕀_X × i_)^! → (i_×𝕀_X)_* e__,Z'_× X^R _'_, X × Z_ (𝕀_X × i_)^! → (i_×𝕀_X)_* e__,Z'_× X^R (𝕀_Z'_× X× i_)^! _'_, X × Z → e__,Z' × X^R (i_×𝕀_X × Z_)_* (𝕀_Z'_× X× i_)^! _'_, X × Z → e__,Z' × X^R (𝕀_Z' × X× i_)^! (i_×𝕀_X × Z)_* _'_, X × Z.
But, given that X × Z' and Y × Z' are coherent by Proposition <ref>, the first factor is an isomorphism by the previous paragraph, the second is by Proposition <ref>, the third is by Proposition <ref>, and the fourth is by Proposition <ref>. As with other results in this section, one can prove a weaker claim in the general case. Let X, Z, and Z' be ind-geometric stacks with Z and Z' reasonable. Then for all ∈(Z), ' ∈(Z'), and ∈(X × Z)^+, the Beck-Chevalley map _',X e_,X^R() → e_, Z' × X^R _',X × Z() is an isomorphism.
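For legibility, the sheaf Hom comparison constructed above can be restated with its arguments written out. Writing ℋom for the sheaf Hom functor and F for its first argument (both symbols are placeholders introduced here for readability, not notation fixed by the text), the transformation for an ind-proper, almost ind-finitely presented f: X → Y is f_* ℋom(F, f^!(-)) → ℋom(f_*(F), -), obtained as the composition of Beck-Chevalley maps f_* e_F,X^R Δ_X* f^! → e_F,Y^R (f × 𝕀_X)_* Δ_X* f^! → e_f_*(F),Y^R Δ_Y*.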
http://arxiv.org/abs/2306.09871v1
20230616143428
Going public: the role of public participation approaches in commercial AI labs
[ "Lara Groves", "Aidan Peppin", "Andrew Strait", "Jenny Brennan" ]
cs.HC
[ "cs.HC", "cs.AI", "cs.CY" ]
§ ABSTRACT In recent years, discussions of responsible AI practices have seen growing support for ‘participatory AI’ approaches, intended to involve members of the public in the design and development of AI systems. Prior research has identified a lack of standardised methods or approaches for how to use participatory approaches in the AI development process. At present, there is a dearth of evidence on attitudes to and approaches for participation in the sites driving major AI developments: commercial AI labs. Through 12 semi-structured interviews with industry practitioners and subject-matter experts, this paper explores how commercial AI labs understand participatory AI approaches and the obstacles they have faced implementing these practices in the development of AI systems and research. We find that while interviewees view participation as a normative project that helps achieve ‘societally beneficial’ AI systems, practitioners face numerous barriers to embedding participatory approaches in their companies: participation is expensive and resource intensive, it is ‘atomised’ within companies, there is concern about exploitation, there is no incentive to be transparent about its adoption, and it is complicated by a lack of clear context. These barriers result in a piecemeal approach to participation that confers no decision-making power to participants and has little ongoing impact for AI labs. This paper’s contribution is to provide novel empirical research on the implementation of public participation in commercial AI labs, and shed light on the current challenges of using participatory approaches in this context. § INTRODUCTION Artificial intelligence research and technology continues to proliferate widely, presenting substantial opportunities but also considerable ethical risks for people and society. Against this backdrop, policymakers, researchers and practitioners are increasingly interested in public participation in AI: methods that enable members of the public to be involved and have their ideas, beliefs, and values integrated into the design and development process of AI systems <cit.>. There are two main reasons for this interest: the first is the perceived success of public participation and engagement methodologies in other fields: participatory approaches are used to address issues where there is impact on the public such as in international development <cit.>, environmental justice <cit.> and in democratic institutions <cit.>. Increased interest in public participation in AI reflects a broader recognition of AI’s implications in the wider world. The second is the, by now, well-documented potential for AI systems to cause harm, such as causing discriminatory impacts on different members of society <cit.>, especially those from marginalised or disadvantaged backgrounds <cit.>. Proponents of participation cite these methods as a way to create external scrutiny and accountability for these systems <cit.>, and argue ‘more or better’ participation in AI <cit.> may partly remedy potential harms <cit.> and produce more ‘socially good’ outcomes <cit.>.
Despite this growing interest, it is important to bear in mind that public participation is not a panacea for the harms that AI systems can raise, nor independently capable of deriving societal benefits of emerging technologies. Existing research around ‘participation washing’ highlights the potential pitfalls and extractive practices of these methods <cit.>. A review of the literature at the interface between ‘participation’ and ‘AI’ reveals that, to date, there is very limited research exploring the role of public participation in commercial AI labs. There is also lingering conceptual confusion about what ‘participation’ in AI means and what kinds of approaches should be adopted <cit.>, likely hindering wider adoption of these methods. Given that a significant proportion of AI development is undertaken in industry, there is a pressing need to understand how participation is, or could be, embedded in companies driving important developments in AI products and research. This need is all the more urgent in the context of the latest ‘AI spring’: the advent of novel general purpose and generative AI technologies, which may impact people at greater scale and in more unpredictable ways than traditional ‘narrow’ AI systems. Tech industry leaders have made calls for more ‘public input’ into systems like ChatGPT and GPT-4 to ensure these systems are aligned with societal needs <cit.>. There have also been calls from industry leaders to ‘democratise AI’, a term that can have different or even conflicting meanings, such as increasing access to these systems or sharing governance of these systems <cit.>. These developments have intensified the debate about what public participation in AI means.This paper explores which public participation approaches are being used or considered by tech companies, how they understand the value of these methods, what barriers they face in using these approaches, and what impact public participation has on the company and on participants. Using a literature review of public participation in AI and 12 semi-structured interviews – nine with practitioners working at major AI-focused tech firms, three with non-industry professionals with a stake in the ongoing direction of ‘participatory AI’ – conducted in the autumn of 2022, this paper seeks to answer three research questions: * How do commercial AI labs understand public participation in the development of their products and research? * What approaches to public participation do commercial AI labs adopt? * What obstacles/challenges do labs face when implementing these approaches? The contribution of this paper is twofold: novel empirical research reporting perspectives towards and past projects on public participation in commercial AI, and analysis on a current gap in the literature on ‘participatory AI’, finding that effective uses of participatory methods require a clear understanding of the context in which an AI system will be used. § METHODOLOGY Our findings emerge from two research verticals: a literature review and semi-structured expert interviews. §.§ Literature review We surveyed relevant literature on AI ethics and participation, the wider human-computer interaction (HCI), computer supported cooperative work (CSCW) and value-sensitive design (VSD) literature for scholarship on embedding participation in non-AI/ML technologies. We also drew on wider literature focused on the intersections of participation and democracy, for example, including deliberative democracy and sociology. 
We manually sourced literature from ACM and arXiv repositories, using a combination of keyword searches: ‘public participation in AI’, ‘participatory AI’, ‘participatory design in AI’ and ‘public engagement’, as well as terms and concepts likely to yield discussion of similar/adjacent theoretical grounding including ‘social choice’ ‘and ‘democratising AI’. We also used a ‘snowball method’ to identify additional papers from reference lists. §.§ Expert interviews We conducted 12 semi-structured interviews in this research. The interviews were led by the lead author, with support and contributions from the second and fourth authors. We interviewed nine practitioners working in large, medium and start-up commercial AI labs developing both products and research, who may be involved in planning or implementation of public engagement / participation projects or be expected to carry forward findings of public participation projects into research and or product development. For additional background, we also interviewed three subject-matter experts across participatory design, participatory AI and public engagement methods, and with knowledge of tech industry practice. One of these three experts is employed by a technology-focused non-profit, two are currently employed by academic institutions; one of these two had recent previous employment in a commercial lab. All three have authored papers pertaining to participation in AI. See Table <ref> for participant IDs. Our interview questions were split into four sections. We asked participants: * How they understand public participation; * What they think public participation in AI is for; * What methods or approaches they have used in their work, or seen in use across the sector, and; * Details of their role, their organisation's work culture, resources, and its propensity to fund or conduct participatory work tableParticipant organisation and ID Organisation Participant ID Start-up providing open source machine learning P1 Large company developing both products and research P2 Large company developing both products and research P3 Large company developing both products and research P4 Start-up developing research P5 Start-up providing open source machine learning P6 Company developing research P7 Tech-focused non-profit organisation P8 Academic institution P9 Academic institution P10 Start-up developing products (pre-market) P11 Company developing research P12 Participants were recruited either directly (selected based on previous demonstrable interest in ‘participatory AI’, ‘responsible AI’ or similar fields, and/or were part of the authors’ existing industry networks) and through snowball recruitment from recommendations from interviewees. Interviews lasted 60 minutes and took place virtually, using video conferencing software from September 2022 to January 2023, and were transcribed using a speech-to-text transcription software service. Three interviewees did not consent for their interview quotes to be used in this paper. Since all participants were in continuous employment at the time of participation, they were not offered additional payment for their time. §.§ Data analysis Interview data was analysed using a constructivist qualitative thematic analysis that draws heavily on a ‘theoretically flexible’ approach set out by Braun and Clarke (2006), that specialises in understanding and reporting repeated patterns, particularly in terms of institutional/organisational behaviours <cit.>. 
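As an illustration of the merge-and-tally step this kind of coding involves, the short sketch below shows one way coded transcript excerpts could be consolidated and counted; it is purely illustrative, since the coding itself was done with dedicated qualitative data analysis software, and the variable names, example codes and merge mapping are hypothetical rather than drawn from the study data.

from collections import Counter, defaultdict

# Hypothetical coded excerpts from a first coding pass: (participant_id, code) pairs.
coded_excerpts = [
    ("P2", "building rapport"),
    ("P3", "relationship building"),
    ("P3", "participation washing"),
    ("P7", "good intent, social good"),
]

# Substantively similar codes are merged under a single label,
# mirroring the re-coding step described in the text.
merge_map = {"building rapport": "relationship building"}

def normalise(code: str) -> str:
    """Return the merged label for a code, or the code itself if it was not merged."""
    return merge_map.get(code, code)

# Tally how many participants each merged code appears for; prevalence is
# recorded for reporting only, not used as a benchmark for inclusion.
participants_per_code = defaultdict(set)
for participant, code in coded_excerpts:
    participants_per_code[normalise(code)].add(participant)

prevalence = Counter({code: len(ids) for code, ids in participants_per_code.items()})
for code, count in prevalence.most_common():
    print(f"{code}: {count} participant(s)")

Grouping the merged codes under the themes reported below would remain an interpretive step rather than an automated one.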
Using a constructivist epistemology allowed us to approach the data with an understanding that meaning and experience are socially (re)produced <cit.>. Following this paradigm, we coded our data and constructed our themes according to a ‘latent classification’ approach <cit.> surfacing implied beliefs. The interviews were coded by the lead author using data analysis software. We chose not to set prescriptive benchmarks around prevalence of codes, or whether codes directly related to the RQs. After an initial batch of 71 codes was generated, a re-coding process resulted in 56: some codes were felt to be too broad; in other cases, two substantively similar codes were merged (e.g. ‘building rapport’ to ‘relationship building’), and antonyms such as ‘inclusion’ and ‘exclusion’ were felt to be usefully interpreted dialectically and coded as single entities. From these 56 codes, reproduced across Tables <ref> and <ref>, we identified six main themes that corresponded to different research questions:
* Internal factors
* Commercial factors
* Field-level factors
* Societal and moral factors
* Purpose of participation
* Participatory approaches
From the data, we surfaced many different operational considerations and personal values/beliefs that practitioners suggested are (or might be) impactful for the adoption of public participation. Factors were reported to emanate from the level of the firm (‘Internal’), or externally (‘Field-level’), and pertained to business mission (‘Commercial’) or relationship to people and society (‘Societal and moral’). These are categorised as ‘factors’ over the more directional e.g. ‘blockers’ or ‘drivers’ to avoid setting up a simplistic binary for phenomena not experienced by all participants universally. Some codes appear in different themes, highlighting the porous boundaries between these themes. Theme 5 and Theme 6 concern methods and approaches for, and purpose of, participation, and therefore correspond explicitly with RQ1 and RQ2 of our study.
Table: Themes and codes constructed from factors relevant to the adoption of public participation in commercial AI (as reported by interviewees)
* Internal factors: Buy-in for public participation; Compensating participants; Internal expertise; Remit: AI product or AI research; Responsibility for public participation; Scale and scope of public participation; Types of ‘public’; Capacity building
* Commercial factors: Profit motive; PR, optics, reputation; Transparency
* Field-level factors: Capacity building; Intermediaries; Lack of industry-specific methods or training on public participation; PR, optics, reputation; Regulation; Responsibility for public participation
* Societal and moral factors: Extractive practice; Good intent, social good; Harms, discrimination; (In)justice, (in)equality; Inclusion, exclusion; Power; Society building; Trustworthiness
* Purpose of participation: Democratising AI; Good intent, social good; Good business; Widening inclusion; Embedding lived experience; Intrinsic value of participation; Public participation as a form of accountability; Relationship building; Soliciting input / knowledge transfer; Trust building
* Participatory approaches: Citizens' jury; Crowdsourcing; Co-design; Community training in AI; Community-based approaches; Community-based Systems Dynamics framework; Consultation; Cooperatives; Deliberative approaches; Diverse Voices method; Fairness checklist; Governance tools e.g. audits, impact assessments, other policy mechanisms; Open source; Participatory design; Request for comment; Speculative design/anticipatory futures; Surveys; User research/user testing; Workshops/convenings
§.§ Positionality statement At the time of research, all the authors were employed by an independent research institute that conducts evidence-based research on data and AI in policy and practice, with a core organisational belief that benefits of data and AI must be justly and equitably distributed, and must enhance individual and social wellbeing. As part of the organisational remit, the institute collaborates with technology companies in a research capacity, i.e. using industry as a site of study. It does not accept funding from technology companies. The authors live and reside in the UK, and two of the four authors are British, one is British and Irish and one is American. We adopt a sociotechnical conception of AI, understanding that the technical elements of AI – machine learning, neural networks, etc – are inherently interrelated with social, political and cultural factors, principles and motivations (see for example Mohamed et al.) <cit.>. § LITERATURE REVIEW §.§ Public participation in theory and practice Broadly in the literature, public participation refers to approaches or activities that engage or involve members of the public, incorporating perspectives and experience into a project or intervention. Participatory approaches are routinely adopted in a number of areas, including environmental decision-making <cit.>, health and care <cit.> and in democratic institutions <cit.>. For example, feedback sessions in health and social care incorporate patient views and lived experience to inform ongoing service delivery (described as ‘patient and public involvement’ (PPI) in the UK) <cit.> and consultations in policy mechanisms such as environmental impact assessments foster democratic debate and broaden decision-making powers <cit.>. In technology design contexts, participatory approaches stem from the fields of human-computer interaction (HCI) <cit.>, user-centred design <cit.> and the theory and application of participatory design (PD) methods <cit.>. These fields offer critical examination of how design might be crafted in tandem with <cit.>, instead of on behalf of, different publics in order to incorporate their needs and values <cit.>. In deliberative democratic theory, it is argued public participation appeals to democratic ideals of legitimacy <cit.> and accountability <cit.>, as well as enhancing political autonomy <cit.>. The tradition of deliberative participation – the involvement of the public with a view to fostering deliberative debate and engagement – is evident in participatory design, which offers participants ‘seats at the table’ <cit.>, emulates democratic decision-making <cit.>, adopts consideration of social and political contexts <cit.> and embraces co-production <cit.>. Participation is also often read as an intrinsic value in and of itself <cit.>: like similar concepts such as ‘inclusion’ or ‘collaboration’, it is often understood in the literature as indicative of a ‘moral good’ <cit.>, of ‘flourishing social ties’ <cit.> and so on.
However, within the literature, there is little agreement about who constitutes the ‘public’. In politics and policy domains, the ‘public’ may refer to ‘citizens’, ‘labelling data people’ or ‘laypersons’ <cit.> while, in technology contexts, it may refer to current or future ‘end users’ <cit.>. More recent literature around participation in AI adopts a broader definition that includes all people affected by the use of an AI system, particularly individuals and groups for whom AI risks exacerbating inequity, injustice and marginalisation <cit.>. This raises the question of how commercial AI labs define ‘public’ in any public participation activities, particularly when their technologies may impact multiple publics in multiple areas or regions. The form of public participation can vary, reflected in the various typologies produced by political scholars and practitioners <cit.>. The first of these is Sherry Arnstein’s Ladder of Citizen Participation <cit.>, a widely referenced framework for forms of participation, originally intended to outline different degrees of participatory approaches in public planning. Arnstein’s eight rungs range from forms of non-participation (‘manipulation’) and one-way dialogic methods (such as public request for comment <cit.>), through involvement by consultation and partnership in the middle rungs, to ‘citizen control’ at the top rung (see Figure <ref>). Arnstein is critical of approaches at the bottom of the ladder, branding them tokenistic and inadequate in shifting the axis of power and therefore not tantamount to meaningful participation <cit.>.
Figure: Arnstein's `Ladder of Citizen Participation' <cit.>
Figure: Framework for Participatory Data Stewardship <cit.>
Patel et al. <cit.> draw on Arnstein’s ladder and a more recent ‘spectrum of participation’ <cit.> to describe practical mechanisms of participation in the stewardship of data and consequently the design of data-driven systems, including AI (see Figure <ref>). Their analysis creates a link between Arnstein’s political lens on participation and participation in sociotechnical contexts by describing five levels of participation and examples of what practical mechanisms may exist for each, drawn from real-world case studies. These five levels include:
* Informing people about how data about them is used, such as through the publication of model cards;
* Consulting people to understand their needs and concerns in relation to data use, such as through user experience research or consumer surveys;
* Involving people in the governance of data, such as through public deliberation or lived experience panels;
* Collaborating with people in the design of data governance structures and the technologies they relate to, such as through novel institutional structures like ‘data trusts’, and;
* Empowering people to make decisions about datasets and technologies built with them, such as through citizen-led governance boards.
Though indirectly linked to AI, these taxonomies help us make sense of the public participation approaches commercial AI labs may be using and contribute theoretical foundational frameworks for exploring participation in AI design. §.§ Public participation in AI technology development As Dove et al. note, AI is neither ‘arcane nor obscure’ <cit.>: discursive debate around participation in AI should not be isolated from debates around participation more generally. Cooper et al.
argue the AI design and development pipeline of AI technologies is diffuse and therefore typically ‘participatory’, combining multiple iterative activities and the input of multiple actors <cit.> across 'algorithmic supply chains' <cit.>. However, as with participation adopted in other domains, there are varying possible degrees of participation in AI. Two existing typologies are instructive for classifying the different modes of participation in AI: Sloane et al.’s typology of participation: as work, as consultation and as justice <cit.>, and Birhane et al.’s exploration of the three instrumental categories of participation: for algorithmic performance improvement; for process improvement and for collective exploration <cit.>. These typologies provide a sense of some of the goals of public participation in AI, and where participatory approaches can fit in AI development or research. There is an emerging literature on participatory approaches to AI development, which identify a few kinds of ‘participatory’ activities that involve assembling a mixed group of stakeholders to consult or assess an AI system. The literature on participatory development highlights a few activities that are seen as ‘participatory’. These include crowdsourcing <cit.> (such as crowdsourcing possible impacts of ADM systems <cit.> or labelling data <cit.>), participatory dataset documentation <cit.>, creating ‘red teams’ to test or evaluate a model <cit.>, bug bounties <cit.> or engaging members of the public to elicit preferences for algorithmic design decisions <cit.>. Such forms of participation very often prioritise a higher total number of participants over length or depth of participant involvement <cit.>. For example, participatory development of ML datasets <cit.>, requiring higher degrees of input from a higher number of stakeholders might be classified as Sloane’s ‘participation as work’, where methods that foster deliberation around values and experience <cit.>, might fall under Birhane et al.’s heading of ‘collective exploration’. Other scholarship argues that participatory approaches in AI could be instrumentalised to advance ambitious societal-level goals such as fairness, inclusion <cit.>, justice <cit.>, accountability <cit.> and democratic values <cit.>, which could be characterised as Sloane et al.’s ‘participation as justice’ <cit.>. Birhane et al. offer three case studies of a participatory approach to AI development, instances where participation is sought to improve the function of large language models for African and Te Reo Māori languages, annotate datasets and improve dataset documentation <cit.>. The authors suggest community inclusion in such projects might advance goals such as equity and justice, but acknowledge that participation in these kinds of projects may amount to products built that actually harm the communities included. Another proposed method for participation in AI development is Martin, Jr. et al.’s Community Based System Dynamics (CBSD) method, a mechanism that seeks to ‘engage and centre perspectives of marginalized and vulnerable communities’ for the purposes of model refinement <cit.>, however only offering cursory detail on the methodological components required to achieve this goal. There are concerns of ‘participation washing’ <cit.> across participation literature, also highlighted in application to AI. 
Hossain and Ahmed note that, to date, participation in design or development of AI has been overly modest and inconsequential, prescribing only narrow technological solutions as opposed to lasting community or societal change <cit.>, following the general mode of critique from the participation literature <cit.>. Sloane et al. argue that participatory approaches that claim to value diverse expertise and express a commitment to recentring marginalised communities, but in practice function as (often unrecognised) labour, risk paying lip service to the pro-social ends of participation while exploiting disadvantaged groups <cit.>. There are also dangers, as noted by Lloyd et al., that with a focus on engaging technology ‘users’ (in participatory projects), users become a stand-in merely for ‘consumers’, narrowing focus away from broader segments of society that might be affected by AI, with a risk of exacerbating existing harms to these groups <cit.>. In instances where a wider focal point is adopted to target ‘non-users’ of technologies, often under the objective of ‘democratising AI’ <cit.>, the outcome may not be equivalent to entrenching participatory or democratic structures <cit.> but may simply indicate intent to ‘widening access’ to technology use or development <cit.>. §.§ Public participation in commercial AI Over the past decade, many large technology companies have established or acquired their own dedicated AI labs for developing research and products: for example, the AI research company DeepMind was acquired by Google in 2014 and is now a subsidiary of Google's parent company Alphabet. Google itself has invested in entire AI research wings like Google Brain, and has integrated AI research into its products. There are also a number of smaller, independent companies developing AI that have made significant research and product developments, such as OpenAI and their ChatGPT model and interface. Commercial AI labs are widely considered to be at the forefront of current AI development and research <cit.>. Many AI labs have teams that are specialised in ethics issues (Microsoft’s Office for Responsible AI, Google DeepMind’s Ethics and Society team), including a remit for activities such as public participation. Though debates around ethics, fairness and accountability have gained considerable traction in recent years, it is still challenging terrain: Moss and Metcalf point to a habitual inability among firms to to specifically designate which team (members) have the responsibility for embedding ethics <cit.>, as well as an ineptitude toward institutionally buttressing their role(s), creating pinch points and barriers to the effective implementation of AI ethics initiatives. Practitioners struggle with what Rakova et al. identify as a demanding interplay between ‘organizational structures and algorithmic responsibility efforts’ <cit.>. Other scholars have criticised tech companies have for ‘ethics washing’ behaviours, <cit.> including the use of internal ethics initiatives as a form of social capital that justifies deregulation of their industry in favour of self regulation. Despite the sheer quantity of industry-led AI/ML research, most scholarship on participation in AI to date has emanated from academia or civil society: there is scant publicly available evidence of what kinds of participatory methods or projects are put into use in commercial AI labs. 
What literature does exist on public participation approaches in industry is authored by individuals working in commercial AI ethics teams <cit.>, and the limited examples we have of participatory efforts are also led by ethics teams in these companies. Examples include the Royal Society of Arts (RSA) and Google DeepMind’s Forum for Ethical AI project, involving a citizens’ jury with members of the public to offer space for deliberation on algorithmic decision-making <cit.>, and Behavioural Insights (BIT)’s blog on a recent partnership with Meta constructing citizens' assemblies for members of the public to deliberate on climate misinformation <cit.>. The lack of public examples of AI labs using participatory methods raises questions about the real extent of their use. § INTERVIEW FINDINGS Based on our review of the literature, we asked our interview subjects how commercial AI labs understand participatory AI approaches and the obstacles they have faced implementing these practices in the development of AI systems: * Within commercial AI labs, public participation is viewed as serving societally `good' ends, but may also have a strong business purpose; * Public participation in AI industry lacks clear and shared understanding of practices. Participants did not identify many participatory methods they use, but rather tended to list methods they had heard of; * Public participation in AI labs faces various obstacles: resource-intensity, atomisation, exploitation risk and misaligned incentives; * Public participation in AI labs is complicated by products or research that lack a clear context. §.§ Within commercial AI labs, public participation is viewed as serving societally ‘good’ ends, but may also be good for business We do a lot of AI for social good projects at [large company]. But I’m always wondering why we need the qualifier of AI for social good. [P3] Interviewees, including the practitioners working on ‘participatory AI’ and adjacent topics, view participation and participatory approaches positively, with several associating these practices with ‘doing good in the world’, an indication of company legitimacy or as a commitment to accountability. Another participant described the pull to embed participatory approaches as an ‘obligation’, to ensure the company are achieving societally beneficial outcomes with their technologies: We, as a corporation, building or researching a technology that has the potential to solve problems for people, have an obligation to engage folks from various backgrounds to help us understand the different problems they face. [P7] Some interviewees report viewing public participation in the labs through the lens of profitability or business mission: It should be for good business, right? Engaging with people should help you build a product that addresses their wants and needs better which in turn, makes your company more profitable. [P2] This view more closely follows the argument that increasing participation in corporate tech contexts presents an opportunity to increase access to technology: unsurprisingly, if your goal is to build better tech, then making it work better for more people is an attractive prospect. However, other participants expressed frustration that this would be likely to be the only logic that would wash with corporate shareholders (who, as one interviewee suggested, would not find any reason to complain if public participation was not conducted at all). 
In larger companies, interviewees noted challenges of explaining the value and role that participation can play to others in the firm. Those using these methods were trying to resolve concerns around, for example, bias and fairness, but often found that they had to reframe these objectives from the perspectives of how these methods could provide an increasing return on revenue. One interviewee noted concerns of a performativity around labels such as ‘responsible AI’: There’s concern about being exploitative in using that knowledge to do this sort of marketing veneer of responsible AI, then we’re still just going to make money on everything. [P3] §.§ Public participation in commercial AI lacks a clear and shared understanding of practices We take that there are many different approaches to public participation [at the company]. Some are more kind of focused on participatory annotation of data and co-production of AI systems. I think my work is more focused on vision setting for the future of AI. [P7] Our research corroborates findings from the literature of an enduring lack of consensus around participatory approaches in practice <cit.>. Interviewees were asked “what approaches to participation have you used in your work or practice?” Some interviewees were able to talk about approaches they’d personally used for certain research/development projects, but more usually, would recall (often cursory) detail about specific projects or ideas in either their organisation or across the sector, rather than any direct experience. Overall, interviewees cited 1 different methods they were familiar with or had used – see Table <ref>.
Table: Participatory approaches in commercial AI (as reported by interviewees) mapped onto Arnstein's `Ladder of Citizen Participation'
* Degrees of citizen power: Cooperatives; Citizens' jury; Community-based approaches; Deliberative approaches; Participatory design; Speculative design/anticipatory futures; Governance tools e.g. audits, impact assessments, other policy mechanisms
* Degrees of tokenism: Co-design; Community training in AI; Community-based Systems Dynamics framework; Crowdsourcing; UX/user testing; Open source; Diverse Voices method; Workshops/convenings; Consultation
* Non-participation: Surveys; Request for comment
The method interviewees cited most often was a form of consultation with people outside the company, generally domain experts rather than members of the public, usually to solicit feedback on the design or usability of products. Most interviewees recognised that participation could have multiple dimensions, with a few specifically using the word ‘spectrum’. Two interviewees suggested that open sourcing machine learning models, as a kind of mass participation predicated on widespread involvement, might constitute a participatory approach. Despite overall understanding and knowledge of types of approaches that could be used in AI development, the important accompanying finding is that most interviewees did not feel fully equipped to report on their organisation’s activity in the area of ‘participatory AI’. While we cannot rule out that commercial AI labs are using participatory methods that we are unaware of, these findings suggest that, at best, interviewees did not feel comfortable discussing specific examples of these methods with us, or had no awareness of these methods being used in their companies – and at worst, that such methods are not being used at all.
Given that most interviewees self-selected to participate on the basis of their familiarity with public participation in AI (see ‘Methodology’), it would appear that the most likely scenario is that there is little use of participatory approaches to AI in industry. §.§ Public participation in commercial AI labs faces various obstacles: resource intensity, atomisation, exploitation risk and misaligned incentives §.§.§ Embedding participation is expensive and resource-intensive If you want actual participation, you actually have to invest before you need something from people. [P2]As reported elsewhere in the literature <cit.>, practitioners we spoke with struggled to embed participation in their companies. The accordant time and costs, and the difficulty in quantifying the work, are at present seen as too great to inspire action (and therefore outweighing any motivations for ‘social good’). One interviewee put forward interest in conducting further participatory work, but felt that other research and development pursuits, like ensuring ‘truthfulness’ of large language models, would be a higher priority. Many interviewees put forward a need for capacity building in this space, stating that, at present, practitioners are not equipped to conduct public participation, as many do not come from a social science background or have not undertaken work with community groups, and therefore lack the requisite skills and experience to undertake long-term engagements with members of the public. §.§.§ Participation in the AI industry is ‘atomised’ Interviewees often expressed there was not a clear understanding within AI companies of who has the responsibility for leading participatory projects or embedding a ‘culture of participation’ in which all members of a product team have a shared understanding of the value and uses of these methods. One interviewee suggested that spearheading the adoption of public participation in AI labs puts you at odds with the direction of travel of the rest of the company, effectively creating misaligned incentives, with public participation work not rewarded or recognised within the organisation. In the cases our interviewees mentioned, participation generally arose emergently, responding to specific design or development knots (particularly in the ‘agile development’ <cit.> of product lifecycles). One interviewee pointed to burnout and a lack of bandwidth among tech workers, preventing individual practitioners from connecting with other individuals or teams who had taken on participatory work in the past. §.§.§ There is concern and care around exploitation and ‘participation washing’ Many interviewees report that they are paying attention to social, societal and moral questions when considering how to adopt public participation approaches in their practice. Frequently cited considerations include concern about extractive behaviour and practice, whether or not ‘inclusion’ is always a commendable value, and questions of power, justice and societal impacts. Two interviewees specifically cited the term ‘participation washing’ <cit.> when sharing thoughts on potential obstacles to embedding participation, which may indicate that this is a concern that has become more routinely observed in these companies. Most interviewees reported feeling great responsibility for non-tokenistic participation and being attuned to power and privilege, especially in capacity as a tech worker. 
While these interviewees demonstrate a motivation for wanting to adopt meaningful participation that confers decision-making power for participants, for many, it did not translate into ‘better’ participation (often because owing to the other obstacles we set out in this paper, they felt they could not do a deeper level of engagement justice). Some interview subjects highlighted the tension between the business needs of a commercial lab and the mode of participation in certain projects. While one interviewee reported satisfactory levels of funding and support received by their company, this puts undue pressure on wanting to achieve the ‘desired’ outcomes from participatory work, recalling a project where they were told to go back and get a different answer [from participants] [P3]. Other interviewees described concerns of exploitation of participants from marginalised or underrepresented communities in their work:[recalling previous public participation in the company]It gets to the point where it’s like ‘Oh, yeah, we talked to some Black people. And they said it’s fine.’ And we’re being fair! We’re being responsible! [P3] Practitioners report grappling with values such as societal justice and the relation to their work: some discussion across different interviews took place on whether ‘inclusion’ in AI could advance justice or address power asymmetries. Most interviewees were firm on the importance of adopting focus on communities that have historically been excluded from technology development conversations. For some companies, lowering the barrier of participation/inclusion in AI was deemed a priority, usually in the context of enabling different groups of people to design or use machine learning tools. Moreover, some interviewees situated the role of participation into the broader societal context: one participant argued the role of participation is interrelated to broader questions of political representation and governance: That’s the realm of the political, setting up the terms under which we all live together. And increasingly, technology, technology systems have encroached so thoroughly on that, that we're having to rethink all of these extremely old questions about how can people self-determine the conditions under which they live in a technology space? [P10] Concern and care over extractive practice and exploitation was reported to closely correlate with the type of ‘public’ chosen to take part in participatory projects: two interviewees revealed that it is often subject-matter experts that are assembled in place of ‘laypeople’, suggesting that technical expertise is more often sought out by companies than lived experience. This echoes concerns in the literature around which publics are participating, a particular concern for public participation in AI given the potential for AI systems to impact communities across the globe at great scale and magnitude. §.§.§ Commercial AI labs are not incentivised to be transparent or share their experiences using participatory approaches Even where participatory approaches are tested and trialled, interviewees described a lack of incentive to report publicly about the work and any potential learnings. 
One suggested that publishing detail on participatory approaches and specific methodological choices might pose a commercial risk, as it would be sharing information that could be seen as intellectual property.Some interviewees reported feeling conscious about the reputation of their company, and the ways in which publicising (or not publicising) certain activities could be seen as affecting optics and comprising good or bad ‘PR’, suggesting that this disincentivises experimenting with public participation.One interviewee reported feeling as though external scrutiny over practice and public pressure to enact their social responsibilities (where they saw participatory work as situated) did not have much of an effect on the company’s direction or bottom line at all: If you take all the headlines [on tech industry practice] over the last five years, they didn't affect share price, or revenue [P3] This suggests that, for this company, ‘the techlash’ <cit.> has not had enormous impact on their practices and would not incentivise publishing details of participatory approaches. A lack of transparency has effects at the industry level. Institutional theory holds that companies in the sector begin to homogenise when faced with the same set of economic conditions <cit.>, and one interviewee reported that this felt true of tech companies – all the AI companies just look at each other [P3], suggesting a ‘fear of missing out’ effect. Coordinated, tech industry-wide effort was often cited by interviewees as being critical for an ecosystem of public participation, particularly around pooling resources to collectively establish or articulate better participatory practices. Most interviewees saw an increased role for some kind of regulation to incentivise public participation, though not without caveat: That’s a whole other issue of “gaming” regulation. You know, you start this cat and mouse game of: “Here’s some regulations”. And then companies are thinking, how do we get out of this?[P3] Other actors’ contributions to deriving change across the sector was noted by some interviewees, particularly activists. Some suggested looking to other sectors to use as analogues for an AI industry-specific approach. The FDA’s medical device pipeline, with its requirement for patient involvement, was offered by two interviewees in this context, as a potential practice that could be adapted to AI research and development. §.§.§ Participation in commercial AI labs is complicated by products or research that lack clear context As demonstrated above, public participation is costly and resource intensive: companies already lack incentives to conduct it, and where it is conducted, it can be piecemeal. The difficulty of running public participation methods is exacerbated as the generalisability of AI increases. Three interviewees identified a need to conduct public participation work around more complex, general purpose AI systems where the context in which it could be used to impact the public is less clear, and an additional two were concerned about conducting public participation in the face of rapid development of general purpose AI systems that may present complexities for a non-technical ‘public’. The interviewees we spoke with who belong to or work closely with AI product teams regularly conduct UX/user-research to get feedback on the usability of the proposed product with a narrow group of potential users. 
Interviewees saw this context as favourable for public input, as potential participants may have a clearer understanding of the impacts of the proposed system: Being in a product team can be really focusing, because we have these goals for the conversation. So you can get much clearer feedback from [participants] [P2] One interviewee recalled a project assembling members of the public to discuss potential benefits, harms and use cases of AI models at a high level, but reported that the exercise lacked focus and was not perceived by their company to have useful impact. They suggested that using specific technologies as a steer might enable critical dialogue on possible societal impacts of a technology at a higher level (though did not feel well-equipped to conduct such approaches at present). Interviewees belonging to research teams, outside strict product deadlines, put forward that they have more flexibility to pursue alternate research or design agendas. For example, practitioners working in research teams had encountered more methods akin to co-design <cit.> as a result of more agency to set pace and objectives. We find that embedding far-reaching or longer-term public participation projects is seen as particularly complex for general purpose technologies that have many number of downstream applications. One interviewee expressed concern at the pace and spread of recent developments in generative AI further implicating the scope and scale of participation, as well as participant understanding: What does it mean to engage people who are affected by, but don’t have the knowledge of, state of the art systems, especially as things like DALL-E and DALL-E mini [now Craiyon] and Stable Diffusion go viral? [P1]As generative AI and similar technologies continue to proliferate at an astonishing rate, with innumerable downstream uses and a wide user base, several interviewees reported the obligation to conduct some kind of public participation work across a variety of conditions increases, as highlighted by this quote: The people that put [content such as images] into the public sphere did not know they would be used for this application. How could you know that something you posted in 2007 would be used in a model over a decade later? So the public should have a say. [P2] These findings show that any proposed public participation approach or project must be attuned to the specific context of AI development (product or research). Our findings reveal that it’s harder to do public participation when the context in which it would be used or affect the public is less clear (for example, in AI research that is theoretical rather than practical, or with AI systems like generative models that can impact or be used in multiple contexts relevant to a person's life). § LIMITATIONS §.§ Limitations of interview approach We report the following limitations of our interview approach: – Non-representative sample: Not every major AI lab is represented in this study. In the largest companies, we would have preferred to interview multiple employees from different teams to gain a richer understanding of institutional culture and practice, which is hard to glean from a single interview. Additionally, interviewees in many cases were selected (or self-selected) on the basis of pre-existing interest in ethical/participatory AI etc. – Barriers to participation: We identify two main barriers to participation: interviewee concern around candour, and atomisation of public participation in commercial environments. 
Drawing from the research team's prior experience working in industry, and our experiences engaging with industry representatives, we recognised the potential for interviews to surface commercially sensitive IP and or corporate malpractice, resulting in varying degrees of comfort and willingness to interview. Many interviewees may have been reticent to share identifiable details of relevant projects within interview. While we sought to address this limitation by offering interviewees anonymisation of findings and removal of identifiable material, this concern may have persisted. Additionally, as we set out in the Discussion, there is often limited awareness both internally and externally on which individual/team has remit or expertise for public participation, arising in confusion over who would be best placed to participate in this study. In total, 47 direct personal invitations were sent for this study, in addition to two broadcast messages on two ‘responsible tech’ Slack boards. 12 directly invited interviewees explicitly declined the offer of participation in this study, we speculate in part owing to some of the barriers set out above, in addition to burnout (which was explicitly cited by a couple of invitees). This resulted in a relatively small sample size of remaining respondents who were available and happy to interview. §.§ Limitations of study We acknowledge here the recent rounds of tech sector layoffs and the gloomier economic climate beginning to intensify during and shortly following our interview period, and suggest these will have tangible implications for the adoption of participatory approaches (but which are not specifically reported on or studied here). We are employed by a research institute operating in the UK and in Europe, and all interviewees are employed at companies or institutions located in North America and Europe, reflecting the dominant geographies of high-profile AI research labs. We would have preferred to have substantive input from organisations based in the Global Majority represented in this research, though we note, following Chan et al. and others, that mere inclusion is not a conduit to rebalancing North American power domination <cit.>. Nevertheless, there may be opportunity for future research along these lines. § CONCLUSION In this study, we find that although public participation is recognised as a valuable mechanism to involve public perspectives and enjoys support and interest from this sample of interviewees in commercial AI labs, only limited participatory projects have been explored and implemented to date. Commercial AI labs view public participation as a way to mitigate ethical risks in AI systems and produce more ‘societally beneficial’ technologies. However, our interviewees report that individuals responsible for implementing participatory approaches in commercial labs do not have a shared understanding of what methods can or should be used and how to use them. While many of the challenges of embedding public participation are not unique to the commercial sector, nor to the context of technology development, there are routinely observed difficulties for public participation in commercial AI: where implemented, participatory approaches in commercial AI labs are informal, atomised and often deprioritised, with limited incentive for companies to publicly declare adoption of participation approaches (even in the context of companies’ public commitments to fairness, trustworthiness, and other ethical principles). 
In some cases, interviewees confirmed concerns from the literature that participation-washing may be occurring. Consequently, we conclude that factors such as the corporate profit motive and concern around exploitation are at present functioning as significant barriers to the use of participatory methods in AI , rather than drivers or enablers for the uptake of these practices. These concerns for the use of public participation in AI are exacerbated when one considers the growth of general purpose and generative AI systems, which enable a wide range of potential uses of AI systems in different contexts and settings. Successful public participation requires a clear use case for members of the public to understand, raising an innate challenge for the use of these methods for general purpose technologies. It is our intention for this research to function as a springboard: by presenting current conditions and emergent challenges for public participation in commercial AI, we lay foundations for further work and debate. § AREAS FOR FURTHER INPUT The role of this paper is to provide insight into current challenges in public participation in commercial AI, but this is only one piece in the puzzle in better understanding the logics and conditions of participation in these environments. We acknowledge that possible next steps are manifold, require cooperation from multiple actors, and are unlikely to be ‘quick wins’. In light of some of our study limitations, further research on commercial AI public participation is necessary, such as ethnographic research of ‘live’ participatory projects in labs, to strengthen conclusions on the current lay of the land. Second, the authors urge industry executives to exercise leadership in this area, namely: connect teams and individuals interested in ‘participatory AI’ across firms, provide institutional support and funding for further enquiry into participation in AI labs ‘in the open’ (with learnings made public), and vocally challenge the perceived norm of public participation working in opposition to tech business models. These combined forces may begin to unlock a grander normative vision for what participation in commercial AI should look like. We join many of our interviewees in their demand for regulators and governments to incentivise this work through appropriate regulatory levers and offer funding and evaluation capacity to kickstart wider adoption of public participation. The authors also recognise and commend the contributions of activists, investigative journalists, researchers and others for their important work in raising awareness of tech industry abuses of power and in advancing algorithmic justice. We call on people affected by uses of AI, activists, civil society and other interest groups to maintain public pressure to advance a stake in the systems and technologies so often built using their data, but decoupled from their values, experiences and vision for technologies and society. § ACKNOWLEDGEMENTS We would like to thank Laura Carter, Emily Clough and Octavia Reeve for their review of this paper and for their thoughtful comments and suggestions. We have no grant sponsorship, external funding or conflicts of interest to declare.
http://arxiv.org/abs/2306.04517v1
20230607152825
Binding energies of ground and isomeric states in neutron-rich Ru isotopes: measurements at JYFLTRAP and comparison to theory
[ "M. Hukkanen", "W. Ryssens", "P. Ascher", "M. Bender", "T. Eronen", "S. Grévy", "A. Kankainen", "M. Stryjczyk", "L. Al Ayoubi", "S. Ayet", "O. Beliuskina", "C. Delafosse", "Z. Ge", "M. Gerbaux", "W. Gins", "A. Husson", "A. Jaries", "S. Kujanpää", "M. Mougeot", "D. A. Nesterenko", "S. Nikas", "H. Penttilä", "I. Pohjalainen", "A. Raggio", "M. Reponen", "S. Rinta-Antila", "A. de Roubin", "J. Ruotsalainen", "V. Virtanen", "A. P. Weaver" ]
nucl-ex
[ "nucl-ex", "nucl-th" ]
Author affiliations: University of Jyvaskyla, Department of Physics, Accelerator Laboratory, P.O. Box 35(YFL), FI-40014 University of Jyvaskyla, Finland; Université de Bordeaux, CNRS/IN2P3, LP2I Bordeaux, UMR 5797, F-33170 Gradignan, France; Institut d'Astronomie et d'Astrophysique, Université Libre de Bruxelles, Campus de la Plaine CP 226, 1050 Brussels, Belgium; Université de Lyon, Université Claude Bernard Lyon 1, CNRS/IN2P3, IP2I Lyon, UMR 5822, F-69622 Villeurbanne, France; Université Paris Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France; II. Physikalisches Institut, Justus Liebig Universität Gießen, 35392 Gießen, Germany; GSI Helmholtzzentrum für Schwerionenforschung, 64291 Darmstadt, Germany; KU Leuven, Instituut voor Kern- en Stralingsfysica, B-3001 Leuven, Belgium; TRIUMF, 4004 Wesbrook Mall, Vancouver, British Columbia V6T 2A3, Canada; School of Computing, Engineering and Mathematics, University of Brighton, Brighton BN2 4GJ, United Kingdom. We report on precision mass measurements of ^113,115,117Ru performed with the JYFLTRAP double Penning trap mass spectrometer at the Accelerator Laboratory of the University of Jyväskylä. The phase-imaging ion-cyclotron-resonance technique was used to resolve the ground and isomeric states in ^113,115Ru and enabled for the first time a measurement of the isomer excitation energies, E_x(^113Ru^m)=100.4(9) keV and E_x(^115Ru^m)=129(5) keV. The ground state of ^117Ru was measured using the time-of-flight ion-cyclotron-resonance technique. The new mass-excess value for ^117Ru is around 37 keV lower and 7 times more precise than the previous literature value. With the more precise ground-state mass values, the evolution of the two-neutron shell-gap energies is further constrained and a similar trend as predicted by the BSkG1 model is obtained up to the neutron number N=71. Binding energies of ground and isomeric states in neutron-rich Ru isotopes: measurements at JYFLTRAP and comparison to theory A.P. Weaver July 31, 2023 ============================================================================================================================= § INTRODUCTION Neutron-rich nuclei between zirconium (Z=40) and tin (Z=50) exhibit a variety of shapes; several of them even exhibit shape coexistence, where excited states are linked to shapes which differ from that of the nuclear ground state. The diversity of collectivity in general and of the nuclear shape in particular in this region of the nuclear chart has been studied widely, both theoretically and experimentally, see e.g. Ref. <cit.> and references therein. The relevant nuclear configurations are not limited to shapes with a comparatively high degree of symmetry, such as spheres or axially symmetric ellipsoids with prolate or oblate deformation, but also include shapes with no remaining rotational symmetry axis: triaxial shapes. There is evidence that the ground states of neutron-rich ruthenium isotopes (Z=44) fall in the latter category <cit.>, an interpretation that is further supported by different models <cit.>. These models typically agree that the effect of triaxial deformation is largest at the mid-shell and that the effect tapers off when even more neutrons are added to the nucleus, i.e. that sufficiently neutron-rich nuclei revert to an axially symmetric or even spherical shape towards the shell closure at N=82. Structural changes can be studied via a wide range of experimental methods, including laser- and decay-spectroscopy as well as Coulomb excitation. At the same time, Penning-trap mass spectrometry can be used to explore differences in binding energy which can reveal possible shape transitions <cit.>. With the development of the phase-imaging ion-cyclotron-resonance (PI-ICR) technique <cit.>, not only the ground-state binding energies but also the isomer excitation energies down to a few tens of keV <cit.> can be extracted, making it possible to obtain new insight into the nuclear structure. 
Masses of neutron-rich ruthenium isotopes up to A = 116 <cit.> have been measured before with the JYFLTRAP double Penning trap mass spectrometer <cit.>. However, for the cases where long-lived isomers are present, namely ^113,115Ru, the time-of-flight ion-cyclotron-resonance (TOF-ICR) <cit.> technique used at that time did not provide enough resolving power to separate the ground and isomeric states in ^113Ru or to detect the isomer in ^115Ru, which was unknown at that time. Therefore, these results might have suffered from a systematic shift in the reported ground-state mass-excess values <cit.>. More exotic ruthenium isotopes were studied using the Experimental Storage Ring at GSI <cit.>. However, the uncertainty of the ^117Ru value was increased by a factor of 2.4 by the Atomic Mass Evaluation 2020 (AME20) evaluators, while the mass-excess value of ^118Ru was rejected due to a significant 700-keV deviation from the mass trends <cit.>. In this work, we report on the direct mass measurement of the ground states of ^113,115,117Ru and the isomeric states in ^113Ru and ^115Ru, the latter being the shortest-lived state (T_1/2 = 76(6) ms) measured at JYFLTRAP so far. The role of deformation for the systematics of masses in this region and the nature of the isomeric state in ^115Ru are analysed within the context of the recent global microscopic models BSkG1 <cit.> and BSkG2 <cit.> that are based on self-consistent Hartree-Fock-Bogoliubov (HFB) calculations using a Skyrme energy density functional (EDF). § EXPERIMENTAL METHOD The masses of neutron-rich ruthenium isotopes were studied at the Ion Guide Isotope Separator On-Line (IGISOL) facility <cit.> using the JYFLTRAP double Penning trap mass spectrometer <cit.> during two experiments. The isotopes of interest were produced in proton-induced fission by impinging a 25-MeV proton beam onto a thin target, ^232Th for ^113Ru and ^ natU for ^115,117Ru. First, the fission fragments were stopped in a helium gas cell operating close to 300 mbar, from which they were extracted and guided using a sextupole ion guide <cit.>. Then, the produced ions were accelerated to 30q keV and mass-separated based on their mass-to-charge ratio using a 55-degree dipole magnet. The continuous mass-separated beam was cooled and bunched using the helium buffer gas-filled radio-frequency quadrupole cooler-buncher <cit.>. Finally, the ion bunches were injected into the JYFLTRAP double Penning trap. In the first trap of JYFLTRAP, known as the purification trap, the ion bunch was cooled, centered and the ions of interest were selected utilizing the mass-selective buffer gas cooling technique <cit.>. After that, the purified ion sample was sent into the second trap, called the precision trap, where the mass measurements took place. In addition, ^113Ru^2+ ions were produced via the in-trap decay of ^113Tc (T_1/2 = 152(8) ms <cit.>). The ^113Tc^+ ions, produced via fission, were captured in the first trap, after which the ion motion was allowed to cool for 102 ms. Then a dipolar excitation at the magnetron frequency was applied for 10 ms. During the trapping time, a fraction of the ^113Tc^+ ion sample β-decayed to ^113Ru^2+. A quadrupolar excitation of 100 ms was used to select the ions of interest by matching the excitation frequency of the ^113Ru^2+ ions. Afterwards, the ^113Ru^2+ ions were sent to the second trap for the precision mass measurement. 
In the presence of a magnetic field of strength B, the mass m of an ion is related to its cyclotron frequency ν_c: ν_c = qB/(2π m), where q/m is the charge-to-mass ratio of the measured ion. To determine the magnetic field strength precisely, ^133Cs^+ ions from the IGISOL offline surface ion source station <cit.> were used as a reference for the mass measurement of the ^113,115Ru^+ ground states and ^117Ru^+. For the mass measurement of the isomeric states in ^113,115Ru, the ground-state masses were used as a reference. To account for the temporal magnetic field fluctuations, ruthenium ions and their references were measured alternately. The atomic mass m is determined from the frequency ratio r = ν_c,ref/ν_c between the singly-charged reference ions and the ions of interest: m = r(m_ref - m_e) + m_e, where m_e and m_ref are the mass of an electron and the atomic mass of the reference, respectively. The isomer excitation energies were extracted as follows: E_x = (r-1)[m_gs - m_e]c^2, where m_gs is the ground-state atomic mass and c is the speed of light in vacuum. Contributions from electron binding energies are on the order of eV and have thus been neglected. To measure the masses of the ground and isomeric states in ^113,115Ru, the PI-ICR technique <cit.> was utilized in the precision trap. The determination of the ion's cyclotron frequency with PI-ICR is based on a measurement of the phase difference ϕ_c between the accumulated magnetron and cyclotron motion phases projected onto a position-sensitive microchannel plate (2D MCP) detector after a phase accumulation time t_acc: ν_c = (ϕ_c + 2π n)/(2π t_acc), with n being the number of the ions' full revolutions in the precision trap. We used the following accumulation times for the PI-ICR mass measurements: 557 ms for the ^113Ru^+ ground and isomeric state, 220 ms for the q = 2+ ions of the ^113Ru isomeric state, 200 ms for the ^115Ru^+ ground state and 100 ms for the ^115Ru^+ isomer (see Fig. <ref>). The measurement pattern utilized at JYFLTRAP is described in more detail in Refs. <cit.> and the PI-ICR measurement technique in Ref. <cit.>. For ^117Ru^+, the TOF-ICR technique <cit.> was applied. The ion's cyclotron frequency ν_c in the TOF-ICR technique is determined from a time-of-flight resonance measured with the 2D MCP detector, located outside the strong magnetic field of the trap. To enhance the resolving power, the Ramsey method of time-separated oscillatory fields <cit.> was utilised. A short 10-30-10 ms (on-off-on) pattern was used in order to minimize the decay losses (see Fig. <ref>). In the mass measurements of ^113Ru and ^115Ru, the ground state and the isomer were in the precision trap at the same time. It is known that when two or more ions of different masses are present in the trap simultaneously, the ion-ion interaction can cause a frequency shift <cit.>. To account for the ion-ion interaction, a count-rate class analysis <cit.> was performed for the ground-state ion of ^115Ru, while for the other cases it was not statistically feasible. At JYFLTRAP the systematic uncertainty related to temporal magnetic field fluctuations has been determined to be δB/B = 2.01(25) × 10^-12 min^-1 × δt, where δt is the time between the measurements. In all of the measurements the maximum systematic uncertainty related to the temporal magnetic field fluctuations was calculated but was found to be negligible compared to the statistical uncertainty. 
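As an aside, the conversion from a measured frequency ratio to a mass value and an excitation energy is a one-line computation. The following minimal Python sketch (not part of the original analysis) evaluates m = r(m_ref - m_e) + m_e and E_x = (r - 1)(m_gs - m_e)c^2 with rounded constants; the example ratio and ground-state mass are purely illustrative and chosen only to reproduce an excitation energy of roughly 129 keV.

U_KEV = 931494.10242      # 1 atomic mass unit in keV/c^2 (rounded CODATA value)
M_E_KEV = 510.99895       # electron mass in keV/c^2 (rounded CODATA value)

def ion_mass_from_ratio(r, m_ref_kev):
    # Atomic mass (keV/c^2) of the ion of interest from the measured frequency
    # ratio r = nu_c,ref / nu_c and the atomic mass of the singly charged
    # reference ion: m = r * (m_ref - m_e) + m_e.
    return r * (m_ref_kev - M_E_KEV) + M_E_KEV

def isomer_excitation_energy_kev(r, m_gs_kev):
    # E_x = (r - 1) * (m_gs - m_e) * c^2, with masses already given in keV/c^2.
    return (r - 1.0) * (m_gs_kev - M_E_KEV)

# Purely illustrative numbers: a ground-state mass of roughly 115 u and a
# frequency ratio chosen so that the excitation energy comes out near 129 keV.
m_gs = 115.0 * U_KEV
r_isomer = 1.0 + 129.0 / (m_gs - M_E_KEV)
print(round(isomer_excitation_energy_kev(r_isomer, m_gs), 1), "keV")

The ratio r in such measurements differs from unity only at the level of 10^-6, which is why the reference mass and the electron mass enter the result at full precision.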
We added a further mass-dependent uncertainty of δ_m r/r = -2.35(81) × 10^-10 / u× (m_ ref - m) and a residual systematic uncertainty of δ_ resr/r=9× 10^-9 for measurements where the A/q for the reference and ion-of-interest were not the same, i.e. when using the ^133Cs ions as reference <cit.>. A systematic uncertainty related to the magnetron phase advancement and systematic angle error were also accounted for in the PI-ICR measurements. A more detailed description on the systematic uncertainties and their determination at JYFLTRAP can be found in Ref. <cit.>. § RESULTS The ground- and isomeric-state mass of ^113,115Ru and the ground-state mass of ^117Ru are reported in detail below. The measured frequency ratios (r), mass-excess values (ME) and excitation energies (E_x) are summarized in Table <ref>. §.§ ^113Ru The ground-state mass excess of ^113Ru, -71874.7(15) keV, was determined using ^133Cs^+ ions as a reference. The isomer excitation energy, E_x =100.4(9) keV, was determined against the ground state, both as singly-charged ions produced directly in fission as well as doubly-charged ions produced via in-trap decay of ^113Tc^+ (for details see Sect. <ref>). This yields a mass excess of -71774.3(17) keV for the isomer. The mass of ^113Ru has been previously measured at JYFLTRAP by Hager et al. <cit.>, using the TOF-ICR technique with a 400-ms quadrupolar excitation time and ^105Ru^+ ions as a reference. With the AME20 <cit.> mass value for ^105Ru, this results in a mass-excess value of -71826(12) keV. The revised value is in between the ground- and isomeric-state mass-excess values reported in this work (see Fig. <ref>.(a)) suggesting that a mixture of states was measured in Ref. <cit.>. A similar effect was observed in Rh isotopes, as reported in Ref. <cit.>. The reported mass-excess values are in agreement with the NUBASE20 evaluation <cit.> where it was correctly assumed that the value measured in Ref. <cit.> was a mixture of the ground state and an isomer at 131(33) keV. To date, the isomeric-state excitation energy was not based on direct experimental observations but on the suggestion that it has to lie in between the states at 98 and 164 keV in ^113Ru <cit.>. In this work, we have confirmed this hypothesis by determining the excitation energy for the first time and by placing the isomer just above the 98-keV state (see Fig. <ref>). The production of both long-lived states in ^113Ru in the β-decay of ^113Tc is also in agreement with the work by Kurpeta et al. <cit.>. §.§ ^115Ru The ground state mass excess, -66054.7(29) keV, was measured against a ^133Cs^+ reference. The isomer excitation energy, E_x = 129(5) keV, was determined against the ground state resulting in a mass excess of -65925.6(58) keV for the isomer. Our ground-state mass excess value is in agreement with the previous TOF-ICR-based JYFLTRAP measurement (ME = -66064.0(69) <cit.>) after adjusting for the updated mass of the reference ^120Sn ion. In our previous work we have observed that for nuclei with low-lying isomeric states the masses obtained with the TOF-ICR method are a weighted average of the ground state and the isomer masses <cit.>. In the case of ^115Ru, an apparent absence of the isomer influence on the measured mass can be explained by a relatively short half-life of the isomeric state (T_1/2 = 76 ms <cit.>) compared to the 300 ms excitation time used in Ref. <cit.>. 
Figure <ref>.(b) shows a comparison of our measurement with the values reported in NUBASE evaluations on nuclear and decay properties from 2003 <cit.>, 2012 <cit.>, 2016 <cit.> and 2020 <cit.> as well as the revised JYFLTRAP value of Ref. <cit.>. Changes between different editions of NUBASE can be explained as due to varying input data. In NUBASE03 <cit.>, the only entry for ^115Ru was from a β-decay end-point energy study <cit.>. After the JYFLTRAP measurement by Hager et al. <cit.>, a long-lived isomeric state in ^115Ru was discovered <cit.>, and the evaluators of NUBASE12 <cit.> applied a special procedure for mixtures of isomeric states assuming the excitation energy to be 250(100) keV. In NUBASE16 <cit.>, the β-decay end-point energy study was excluded from the global fit and the only remaining information was from Ref. <cit.>. Finally, in NUBASE20 <cit.>, the energy of the isomeric state was adjusted to 82(6) keV based on the value originally proposed in Ref. <cit.>. However, the isomeric-state excitation energy seems not to be taken into account for the mass-excess value of the isomer but only for its uncertainty. §.§ ^117Ru The value determined in this work, -59527(64) keV, is in agreement with AME20 <cit.> and it is almost seven times more precise. The mass-excess value adopted in AME20, -59490(430) keV <cit.>, is based on storage-ring measurements <cit.> but with the uncertainty artificially increased by evaluators <cit.>. The only known isomeric state has a half-life of 2.49(6) μs <cit.> which is much shorter than the measurement cycle used in this work. § DISCUSSION In this section, we discuss the experimental results and compare them to the BSkG-family of models of nuclear structure <cit.>. This section is organised as follows: we first establish the theoretical framework in Sec. <ref> and then proceed to study first the trends of the ground state (g.s.) binding energies of neutron-rich Ru isotopes in Sec. <ref>. Sec. <ref> discusses the isomeric state in ^115Ru as well as the implication of our measurement of its excitation energy. §.§ Theoretical framework The BSkG-family of models responds to the need for reliable data on the structural properties of exotic nuclei in different fields of research and in astrophysics in particular. These models are based on an empirical Energy Density Functional (EDF) of Skyrme type that models the effective in-medium nucleon-nucleon interaction. The concept of an EDF allows for a global yet microscopic description of all relevant quantities at a reasonable computational cost. The coupling constants of the EDF are the main element of phenomenology in this type of model and have to be adjusted to experimental data. Since binding energies are crucial ingredients for the modeling of nuclear reactions, the ensemble of known nuclear masses is a key ingredient of the parameter adjustment of the BSkG models. Because of this, these models reach root-mean-square (rms) deviations better than 800 keV on the thousands of masses included in AME20 <cit.>. This performance is not at all competitive with the uncertainties of the measurements we report on here, but it nevertheless reflects the state-of-the-art in global mass modeling: it is only matched by some of the older BSk models that were adjusted in the same spirit <cit.>, microscopic-macroscopic approaches <cit.> and empirical models <cit.>. 
The latter two types of model become particularly accurate when refined with machine learning techniques <cit.>, but either do not extend their predictions to other observables or struggle to describe them with the same parameter values deduced from the masses. The BSkG-family so far comprises two entries: BSkG1 <cit.> and BSkG2 <cit.>. Both models combine a description of many hundreds of measured charge radii and realistic predictions for the properties of infinite nuclear matter with a description of the AME20 masses with similar accuracy (rms deviations of 741 and 678 keV, respectively). Although some of the BSk models reach an rms deviation below 600 keV <cit.>, BSkG1 and BSkG2 are better adapted to study the neutron-rich Ru isotopes as they rely on a three-dimensional representation of the nucleus, thereby accommodating naturally the triaxial deformation that is known to be particularly relevant for this region of the nuclear chart. BSkG2 incorporates a full treatment of the so-called `time-odd' terms in an EDF <cit.> and improves systematically on the description of fission properties compared to its predecessor <cit.>. Since (i) the inclusion of the time-odd terms did not result in a meaningful improvement of our global description of binding energies and (ii) fission properties are not directly related to the masses, a priori we expect BSkG1 and BSkG2 to be of roughly equal quality for the task at hand and therefore we will compare experiment to both models in what follows. Large-scale EDF-based models of nuclear structure such as the BSk- and BSkG-models describe the nucleus in terms of one single product wavefunction, typically of the Bogoliubov type. The simplicity of such an ansatz, as compared to the complexity of the many-body problem, is compensated for by allowing for spontaneous symmetry breaking in the mean fields. By considering such deformed configurations, EDF-based models can account for a large part of the effects of nuclear collectivity on bulk properties such as masses while remaining at the mean-field level and thus keeping calculations tractable. Nevertheless, symmetry breaking comes at considerable computational cost. For all calculations that we report on, we employed the MOCCa code <cit.> to represent the single-nucleon wavefunctions on a three-dimensional coordinate mesh. All numerical parameters such as the mesh point spacing are identical to those employed in the adjustment of both BSkG models <cit.>. In a three-dimensional calculation, the quadrupole deformation of a nucleus of mass A can be described by way of the (dimensionless) deformation β_2 and the triaxiality angle γ, defined as β_2 = [4π/(3R^2 A)] √(Q_20^2 + 2 Q_22^2) and γ = atan(√(2) Q_22/Q_20), where R = 1.2 A^1/3 fm. The quadrupole moments Q_20 and Q_22 are defined in terms of integrals of the total nuclear density and spherical harmonics, see for instance Ref. <cit.>. Axially symmetric prolate and oblate shapes correspond to γ = 0^∘ and 60^∘, respectively, while intermediate values of the triaxiality angle in between those two extremes indicate triaxial shapes. We show in Fig. <ref> the potential energy surface (PES) of ^115Ru in the β-γ plane as obtained with BSkG2; calculations with BSkG1 lead to a similar PES. Since ^115Ru has an odd number of nucleons, Fig. <ref> shows the result of so-called `false-vacuum' calculations, where we constrained the expected number of neutrons to ⟨ N ⟩ = 71, but otherwise treated the nucleus as if it were even-even. 
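For reference, the deformation parameters just defined follow directly from the quadrupole moments. The short sketch below (not part of the BSkG calculations) implements β_2 = [4π/(3R^2A)]√(Q_20^2 + 2Q_22^2) and γ = atan(√2 Q_22/Q_20) as written above; the input moments are hypothetical values chosen only so that the output lands near the triaxial minimum discussed in the text (β_2 ≈ 0.27, γ ≈ 30°) and are not output of either model.

import math

def quadrupole_deformation(q20, q22, a):
    # beta_2 = 4*pi / (3 * R^2 * A) * sqrt(Q20^2 + 2*Q22^2) with R = 1.2 * A^(1/3) fm,
    # gamma = atan(sqrt(2) * Q22 / Q20), returned here in degrees.
    r = 1.2 * a ** (1.0 / 3.0)
    beta2 = 4.0 * math.pi / (3.0 * r ** 2 * a) * math.sqrt(q20 ** 2 + 2.0 * q22 ** 2)
    gamma = math.degrees(math.atan2(math.sqrt(2.0) * q22, q20))
    return beta2, gamma

# Quadrupole moments in fm^2; illustrative values only, not BSkG1/BSkG2 output.
beta2, gamma = quadrupole_deformation(q20=219.0, q22=89.0, a=115)
print(round(beta2, 2), round(gamma, 1))

With these inputs the sketch returns β_2 ≈ 0.27 and γ ≈ 30 degrees, i.e. a point close to the triaxial minimum of the potential energy surface described above.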
We emphasize that all the calculations for which we report masses do not rely on this approximation: for both BSkG1 and BSkG2 our treatment of the odd-mass Ru isotopes includes self-consistent blocking of a neutron quasiparticle. For BSkG2, we also include the energy contribution of the finite spin and current densities induced by the presence of the odd neutrons. For more details on our treatment of odd-mass and odd-odd nuclei, see the discussion in Ref. <cit.>. A complete calculation for ^115Ru that includes blocking leads to the deformation shown as a black star on Fig. <ref>; its offset with respect to the minimum of the false-vacuum calculations is due to the polarisation induced by the odd neutron. Qualitatively, the false-vacuum PES of ^115Ru looks similar to the PES of ^112Rh that we discussed in Ref. <cit.>: we observe a somewhat broad triaxial minimum near γ = 30^∘ of significant quadrupole deformation. Close inspection reveals some quantitative differences: β_2 ∼ 0.27 is here somewhat smaller than the value 0.3 obtained for ^112Rh for instance. Another difference is the energy gain due to triaxiality: the difference between the oblate saddle point and the minimum on Fig. <ref> is about 800 keV, while it exceeds 1 MeV for ^112Rh. This can be linked to the four additional neutrons in ^115Ru compared to ^112Rh: as we approach the N=82 shell closure, the neutrons have less freedom to exploit quadrupole correlations and the importance of (static) quadrupole deformation in general and triaxial deformation in particular diminishes. §.§ The g.s. masses of Ru isotopes and their trends For the chain of Ru isotopes between N=65 and N=73, BSkG1 reproduces the absolute g.s. binding energies best: the deviation with respect to experiment for the absolute mass excesses averages to 360 keV and never exceeds 640 keV. The performance of BSkG2 is not as good: an average deviation of 650 keV with a deviation of up to 1.175 MeV for ^115Ru. Interestingly, the sign of the deviation is consistent: both models overbind these Ru isotopes and hence produce mass excesses that are too large in absolute size. As discussed before, the experimental uncertainties are several orders of magnitude beyond the accuracy of global models like BSkG1 and BSkG2: instead of comparing absolute masses in more detail, we will focus in what follows primarily on the trends of mass differences. We start with the two-neutron separation energy S_2n, defined as: S_2n(Z,N) = ME(Z,N-2) - ME(Z,N) +2ME(0,1) , where ME(Z,N) is the mass excess of a nucleus with Z protons and N neutrons. The top panel of Fig. <ref> compares the S_2n values derived from the newly measured masses to the values reported in the AME20 <cit.> evaluation and the two mass models. We also show the results of the less general calculations with BSkG1 reported on in Ref. <cit.>, which restrict the nucleus to axially symmetric configurations. For the less exotic ^109,111,113Ru, all three calculations with BSkG-models reproduce the general trend of the experimental S_2n rather well, although deviations on the order of several hundred keV are clearly visible. For the BSkG1 model, the description of the more neutron-rich isotopes follows the trend of the more stable ones, systematically overestimating the S_2n values by a small value. BSkG2 also overestimates the separation energies and describes their overall trend, but with deviations that are somewhat larger than those of its predecessor. 
Calculations with BSkG1 that are restricted to axial shapes, however, entirely miss the experimental trend. We can furthermore discuss the slope of the S_2n curve by introducing the empirical two-neutron shell gaps δ_2n: δ_2n(Z,N) = S_2n(Z,N) - S_2n(Z,N+2) , which we show in the bottom panel of Fig. <ref>. The new JYFLTRAP measurement for ^115Ru clearly establishes that the slope of the S_2n in this isotopic chain evolves smoothly at least until N=71. Although the corresponding curves are less regular, the BSkG1 and BSkG2 results produce δ_2n values that remain close to experiment up to N=71. For the heavier N=72, 73 and 74 isotopes, whose experimental δ_2n values are at least partially based on extrapolated AME20 values, the two models predict no major change in slope either. It is only for N=75-76 that BSkG1 and BSkG2 predict a change in slope that is correlated with the disappearance of triaxial deformation for N≥ 76. For ^120Ru and even more neutron-rich isotopes, the models predict axially symmetric prolate shapes with deformation that gradually diminishes towards N=82. Finally, we discuss the three-point neutron gaps Δ^(3)_n(Z,N): Δ_n^(3)(Z,N) = (-1)^N/2[ME(Z,N+1) +ME(Z,N-1)-2ME(Z,N)] . This quantity estimates the average distance between the curves that interpolate the masses of the even-N and odd-N isotopes, respectively, as a function of neutron number. It is particularly sensitive to the neutron pairing, but it can also be affected by variations in the structure of these isotopes with N. The new experimental results confirm the continuation of the trend of less exotic isotopes: the three-point gaps for the even-N isotopes at N=66, 68, 70, and 72 are all equal within error-bars. For N=70, our new result actually brings the Δ^(3)_n value more in line with this trend. The updated value of Δ_n^(3) for N=71 falls significantly out of the uncertainty range of AME20, which reflects the lack of accuracy of the AME20 estimate for the excitation energy of the isomeric state of ^115Ru. Nevertheless, it is not dramatically larger than the gap values for N=69 and N=71. The BSkG2 model generally overestimates Δ^(3)_n and its curve exhibits features at N=68, 69, and 70 that are not seen in the experimental data. BSkG1 on the other hand, provides a fair description of the experimental results, whether including or not triaxial deformation. Yet even this model is clearly not without flaws: the deviation of the full calculation w.r.t. experiment grows with N from N=69 onwards. In this respect, the deviation between the calculated BSkG1 value and the updated point at N=73 (which incorporates the recommended AME20 binding energy for ^118Ru) seems ominous. We note in passing that both BSkG models systematically overestimate Δ^(3)_n along odd-Z isotopic chains, which we discovered for the first time during the study of neighbouring Rh isotopes in Ref. <cit.>. Similarly, both models overestimate the calculated three-point proton gaps in odd-N isotopic chains. The common origin of these issues is the failure of both models to account for a small amount of binding energy in odd-odd nuclei that is usually ascribed to the residual interaction between the two odd nucleons, see Ref. <cit.>. This issue does not affect our discussion here, but it explains why both models describe much better the three-point neutron gaps in even-Z Ru isotopes than in odd-Z Rh isotopes. 
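The mass differences discussed above follow directly from tabulated mass excesses. The short sketch below (not part of the original analysis) evaluates S_2n and δ_2n for ^115Ru from the ground-state mass excesses reported in this work, taking the neutron mass excess ME(0,1) ≈ 8071.3 keV; the three-point gap is included for completeness, with its prefactor read as (-1)^N/2. The resulting numbers are a back-of-envelope check only, since they ignore the AME20 values entering the full systematics.

ME_NEUTRON = 8071.3  # ME(0,1): neutron mass excess in keV (rounded)

def s2n(me, z, n):
    # Two-neutron separation energy S_2n(Z,N) = ME(Z,N-2) - ME(Z,N) + 2*ME(0,1).
    return me[(z, n - 2)] - me[(z, n)] + 2.0 * ME_NEUTRON

def delta2n(me, z, n):
    # Empirical two-neutron shell gap delta_2n(Z,N) = S_2n(Z,N) - S_2n(Z,N+2).
    return s2n(me, z, n) - s2n(me, z, n + 2)

def delta3n(me, z, n):
    # Three-point neutron gap, reading the prefactor as (-1)^N / 2:
    # Delta_n^(3)(Z,N) = (-1)^N / 2 * [ME(Z,N+1) + ME(Z,N-1) - 2*ME(Z,N)].
    return (-1) ** n / 2.0 * (me[(z, n + 1)] + me[(z, n - 1)] - 2.0 * me[(z, n)])

# Ground-state mass excesses (keV) of 113,115,117Ru (Z = 44; N = 69, 71, 73)
# as reported in this work.
me = {(44, 69): -71874.7, (44, 71): -66054.7, (44, 73): -59527.0}
print(round(s2n(me, 44, 71), 1), "keV")      # S_2n of 115Ru
print(round(delta2n(me, 44, 71), 1), "keV")  # delta_2n at N = 71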
We have established that the performance of BSkG2 for the N=65-71 Ru isotopes is worse than that of BSkG1 for absolute masses as well as for all mass differences discussed. Since these models are the result of a complicated parameter adjustment which is global in scope, it is hard to pinpoint a particular source of this (local) deficiency. As we remarked in the previous section, we did not a priori expect that BSkG2 would offer an improved description of the measured masses. Although the difference we observe between models indicates BSkG1 as the tool of choice for future studies of this region, this does not imply that BSkG2 is a step backwards compared to its predecessor. The newer model presents a different compromise on the very large number of observables included in the parameter adjustment, leading to a worse description of the nuclei we study here but also to an improved description of other observables <cit.>. To close this section, we note again that our new measurement indicates a rather uneventful continuation to N=71 of the trends of binding energies and mass differences as established for less exotic isotopes. This can be interpreted as experimental confirmation that the structural evolution of nuclei in this isotopic chain is smooth rather than dramatic. From the point of view of the BSkG models this was expected: from N=55 onwards, the Ru isotopes exhibit triaxial deformation that smoothly evolves with neutron number until N=76. The authors of Ref. <cit.> relied on the Woods-Saxon single-particle spectrum of Ref. <cit.> to interpret the change in (tentative) ground state spin assignment in ^113-115Ru ((1/2^+) and (3/2^+), respectively) as a sign of a shape transition from prolate to oblate deformation. The trend of masses and mass differences does not seem to support such a scenario. §.§ The isomer in ^115Ru The isomeric state in ^115Ru was reported for the first time in Ref. <cit.>, discussing the analysis of a β decay experiment. The authors observed that the 61.7-keV γ ray is not in coincidence with a β particle or any other γ ray. In addition, the half-life extracted from this transition, T_1/2 = 76(6) ms, differed from the half-life obtained for the ^115Ru ground state (T_1/2 = 318(19) ms). Consequently, it was assumed that the isomeric state de-excites via an unobserved γ ray with energy below that of Ru K x-rays (E ≈ 20 keV), which we label γ_1, followed by the emission of the 61.7-keV γ ray, labeled γ_2. With the assumption of the energy of γ_1 being below 20 keV, the observed ruthenium K x-rays were associated solely with the emission of K internal conversion electrons from the γ_2 transition. This observation enabled a determination of the γ_2 K-internal conversion coefficient (α_K = 2.7(6) <cit.>) by calculating the ratio of the ruthenium K x-rays and the γ_2 transitions. The new isomer excitation energy reported in this work renders previous calculations incorrect. However, if one assumes that (i) the total intensity (γ-ray and internal conversion electron emission) of γ_1 and γ_2 is identical, (ii) γ_1 has a pure M2 character and (iii) γ_2 has a pure M1 character, the observed ratio of the ruthenium K x-rays to γ_2 would be equal to 2.8(8). Any other assumptions regarding the multipolarity of both transitions would lead to a ratio that differs significantly from the experimental value of 2.7(6) <cit.>. Therefore, we propose M2 and M1 multipolarities for γ_1 and γ_2, respectively. 
By assigning (3/2)^+ as the ground-state spin-parity, as proposed in <cit.> from a detailed β-decay spectroscopy experiment of ^115Ru, a tentative (9/2)^- isomer assignment can be adopted, see Fig. <ref>. A precise description of the level scheme of ^115Ru is beyond the capabilities of current large-scale models such as BSkG1 and BSkG2, but we can use them to gain a qualitative understanding of the existence of the isomeric state. To this end, we show in Fig. <ref> the Fermi energy and the single-particle energies for both neutrons and protons obtained in false-vacuum calculations for ^115Ru with BSkG2 along the trajectory in the β-γ plane indicated by the arrows in Fig. <ref>. Although symmetry-breaking allows models such as BSkG1 and BSkG2 to grasp a significant part of the effect of collectivity on nuclear structure, here is where we pay the price: we can no longer use the quantum numbers of an operator associated with a broken symmetry to label single-particle states. At the spherical point, on the utmost left and right of Fig. <ref>, no symmetry is broken and all single-particle levels are simultaneous eigenstates of three operators with three associated quantum numbers: the angular momentum squared Ĵ^2 with quantum number J, parity P̂ with quantum number π and the z-component of the angular momentum Ĵ_z with quantum number K. The quantum numbers of the orbitals at the spherical point are indicated in the traditional spectroscopic notation on the right of Fig. <ref>. Along the first segment of the path on Fig. <ref>, we break rotational symmetry but conserve axial symmetry: the levels in the left-most column are no longer eigenstates of Ĵ^2 but retain the K quantum number[For axially symmetric configurations, we always align the symmetry axis with the z-axis in the simulation volume.], which is indicated by colors on Fig. <ref>. When exploring finite values of γ along the second segment of the path on Fig. <ref>, axial symmetry is broken and K can no longer be used to label the single-particle states, hence the absence of colors in the middle column of Fig. <ref>. The final segment of the path explores oblate shapes which are axially symmetric, such that levels in the right column of Fig. <ref> can again be color-coded. For all our calculations we conserve parity, such that π is a good single-particle quantum number along the entire path that we can use to distinguish between levels of positive (full lines) and negative parity (dashed lines) in all columns of Fig. <ref>. This loss of single-particle quantum numbers also translates to the many-body state: the BSkG models cannot currently offer definite angular momentum assignments for calculated ground states of odd-mass and odd-odd nuclei. Doing so would require symmetry-restoration techniques <cit.> whose application is presently still out of the scope of global models for reasons of their numerical cost and because of formal issues with the type of EDF assumed for the BSkG models. We are, however, not entirely without options: we can calculate expectation values ⟨ i| Ĵ_z | i ⟩, which will not be half-integer multiples of ħ but which nevertheless tell us something about the angular momentum of the single-particle state |i⟩. In the limit of a non-interacting particle-core model of the ground states of odd-mass Ru isotopes, the angular momentum expectation value of the odd neutron will also be the expectation value of the angular momentum of the many-body state. 
We discussed a qualitatively similar Nilsson diagram obtained for ^112Rh in Ref. <cit.> and repeat here a few observations that are common to both nuclei before discussing the isomer. Local minima in the PES correspond to deformations for which the single-particle level density near the Fermi energy is low: for nuclei with Z=43, 44, and 45, the protons drive the appearance of triaxial deformation since their single-particle spectrum at β_2 ∼ 0.28-0.3, γ∼ 30^∘ is very sparse. In this region of the PES only positive-parity orbitals are near the Fermi energy, matching the parity assignments of all even-N Tc and Rh isotopes. The single-particle level density of the neutrons on the other hand is much higher, resulting in a closely-spaced set of levels with different parities near the Fermi energy. We interpret the close interleaving of positive- and negative-parity neutron states with different angular momentum content as the origin of the isomeric state in ^115Ru. Two neutron states are nearly degenerate near the Fermi energy at the location of the minimum of the PES: these are highlighted in the middle column of Fig. <ref> and we will refer to them by their markers: |♢⟩ and |∙⟩. These levels differ in their parity, but also in their angular momentum content: near γ = 30^∘ the positive-parity state has an average ⟨♢ | Ĵ_z | ♢⟩≈ 0.73 ħ, while that of the negative-parity state is significantly larger, ⟨∙ | Ĵ_z | ∙⟩≈ 4.13 ħ. Since the odd neutron can be assigned to either of these levels, we expect the appearance of two low-lying levels with opposite parity in the spectrum of ^115Ru that are close in energy yet differ substantially in their angular momentum, hence one of them being an isomer. Finally, the (5/2)^+ state in between the g.s. and the isomer on Fig. <ref> could be rotational in character: taking the calculated moments of inertia of ^115Ru and under the assumption of a rigid triaxial rotor, a 1ħ change in total angular momentum corresponds to about 88 keV of excitation energy. Moving beyond simple arguments based on a non-interacting particle-core picture and the Nilsson diagram, we explicitly calculated the lowest-lying configuration of each parity in ^115Ru with both BSkG1 and BSkG2. One of these is the calculated g.s., whose binding energy figured in the previous section: for BSkG1 this is the state with positive parity and for BSkG2 this is the one with negative parity. In both cases, we find an excited state of opposite parity at low excitation energy; 33 keV and 90 keV for BSkG1 and BSkG2, respectively. For BSkG2, we have direct access to the average many-body angular momentum along the z-axis: a small value ⟨ J_z ⟩≈ 0.7 ħ for the positive-parity state and a large one, ⟨ J_z ⟩≈ 3.1 ħ, for the negative-parity state. These calculations support our conclusions drawn from the Nilsson diagram, and the calculated excitation energies are very roughly comparable to the experimental isomer excitation energy. These results should not be overinterpreted: all relevant energy differences are very small and the neutron spectrum in Fig. <ref> is very complicated. Small changes to any aspect of the model will affect the precise location of level crossings and therefore the ordering of levels. Our calculated excitation energies should thus not be taken as a precise prediction, but rather as a confirmation that two states of opposite parity that differ little in energy can be constructed with different angular momentum content. 
Predicting their ordering and energy difference with accuracy is beyond BSkG1 and BSkG2, or for that matter, any large-scale model that we are aware of. The same mechanism can be used to interpret the isomerism in nearby N=71 isotones: isomeric states with half-lives on the order of seconds or longer have been observed in ^116Rh, ^118Ag and ^119Cd, whereas shorter-lived isomeric states are known in ^114Tc and ^117Pd <cit.>. For Z=42-46, one can expect from Fig. <ref> triaxial deformation with a sparse proton single-particle spectrum and two low-lying states arising from neutron orbitals of different parities. The experimental systematics extend much further: in the entire range of Z=43-57, low-lying isomers have been observed <cit.>. A more in-depth study of isomerism in the N=71 isotones would certainly require more diagrams like Fig. <ref> for larger proton numbers and is outside of the scope of this study. Nevertheless, we remark that both BSkG1 and BSkG2 predict triaxial deformation for almost all N=71 isotones in the range Z=40-60[The only exceptions occur for BSkG1 near the Z=50 shell closure: ^118Ag, ^119Cd, ^120In and ^121Sn remain axially symmetric.]. § SUMMARY The masses of ^113,115,117Ru have been measured using Penning-trap mass spectrometry at the JYFLTRAP double Penning trap. The ground and isomeric states in ^113,115Ru have been separated and their masses measured using the PI-ICR technique. The isomer excitation energies were determined directly for the first time. The high-precision measurements reported in this work place the (7/2)^- isomeric state in ^113Ru at 100.4(9) keV, just above the (3/2^+) level at 98.4(3) keV <cit.>, but still in agreement with the previous prediction of 133(33) keV <cit.>. For ^115mRu, the excitation energy was found to be 129(5) keV, which is significantly larger than proposed in Ref. <cit.> or the value listed in the most recent NUBASE evaluation, 82(6) keV <cit.>. The determined ground-state masses of ^113,117Ru are in excellent agreement with the atomic mass evaluation <cit.>. For ^115Ru, we report a mass-excess value which is 50(26) keV larger than reported in AME20 <cit.>. However, it is in agreement with the previous JYFLTRAP mass measurement by Hager et al. <cit.>. With the mass values determined in this work, the trend in the two-neutron separation energies continues smoothly. The experimental results have been compared with the global BSkG1 <cit.> and BSkG2 <cit.> models, which allow for triaxially deformed shapes. Detailed calculations were performed for the structure of ^115Ru. At the predicted triaxial deformation, the proton single-particle spectrum was found to be sparse and the predicted low-lying states arise from neutron orbitals with different parities. More systematic studies on the isomeric states in this triaxially deformed region would be needed to shed more light on the reasons for the isomerism in these nuclei. The present research benefited from computational resources made available on the Tier-1 supercomputer of the Fédération Wallonie-Bruxelles, infrastructure funded by the Walloon Region under the grant agreement No 1117545. W.R. acknowledges financial support from the FNRS (Belgium). Work by M.B. has been supported by the Agence Nationale de la Recherche, France, Grant No. 19-CE31-0015-01 (NEWFUN). Funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements No 771036 (ERC CoG MAIDEN) and No 861198–LISA–H2020-MSCA-ITN-2019 is gratefully acknowledged. M.H. 
acknowledges financial support from the Ellen & Artturi Nyyssönen foundation. We are grateful for the mobility support from Projet International de Coopération Scientifique Manipulation of Ions in Traps and Ion sourCes for Atomic and Nuclear Spectroscopy (MITICANS) of CNRS. T.E. and A.d.R. acknowledge support from the Academy of Finland projects No. 295207, 306980 and 327629. J.R. acknowledges financial support from the Vilho, Yrjö and Kalle Väisälä Foundation.
http://arxiv.org/abs/2306.03329v1
20230606004236
AVIDa-hIL6: A Large-Scale VHH Dataset Produced from an Immunized Alpaca for Predicting Antigen-Antibody Interactions
[ "Hirofumi Tsuruta", "Hiroyuki Yamazaki", "Ryota Maeda", "Ryotaro Tamura", "Jennifer N. Wei", "Zelda Mariet", "Poomarin Phloyphisut", "Hidetoshi Shimokawa", "Joseph R. Ledsam", "Lucy Colwell", "Akihiro Imura" ]
cs.LG
[ "cs.LG", "q-bio.QM" ]
Antibodies have become an important class of therapeutic agents to treat human diseases. To accelerate therapeutic antibody discovery, computational methods, especially machine learning, have attracted considerable interest for predicting specific interactions between antibody candidates and target antigens such as viruses and bacteria. However, the publicly available datasets in existing works have notable limitations, such as small sizes and the lack of non-binding samples and exact amino acid sequences. To overcome these limitations, we have developed AVIDa-hIL6, a large-scale dataset for predicting antigen-antibody interactions in the variable domain of heavy chain of heavy chain antibodies (VHHs), produced from an alpaca immunized with the human interleukin-6 (IL-6) protein, as antigens. By leveraging the simple structure of VHHs, which facilitates identification of full-length amino acid sequences by DNA sequencing technology, AVIDa-hIL6 contains 573,891 antigen-VHH pairs with amino acid sequences. All the antigen-VHH pairs have reliable labels for binding or non-binding, as generated by a novel labeling method. Furthermore, via introduction of artificial mutations, AVIDa-hIL6 contains 30 different mutants in addition to wild-type IL-6 protein. This characteristic provides opportunities to develop machine learning models for predicting changes in antibody binding caused by antigen mutations. We report experimental benchmark results on AVIDa-hIL6 by using neural network-based baseline models. The results indicate that the existing models have potential, but further research is needed to generalize them to predict effective antibodies against unknown mutants. The dataset is available at <https://avida-hil6.cognanous.com>. § INTRODUCTION Antibodies are proteins that play an essential role in the immune system. When antigens such as viruses and bacteria invade the body, the immune system protects the body by producing large numbers of antibodies that bind to the antigens to inhibit their function or mark them for removal. Antibodies have become an important class of therapeutic agents to treat human diseases because of their high target specificity and binding affinity <cit.>. An essential step in therapeutic antibody discovery is the identification of specific interactions between antibody candidates and target antigens, which has traditionally relied heavily on expensive, time-consuming experiments <cit.>. Therefore, computational approaches are increasingly used to complement and accelerate traditional processes for therapeutic antibody discovery <cit.>. In particular, there is growing interest in using machine learning to predict antigen-antibody interactions <cit.>, which can be used to virtually screen binding antibodies against specific target antigens. Schneider et al. <cit.> developed the structure-based deep learning for antibodies virtual screening (DLAB-VS) by using the structural antibody database (SAbDab) <cit.>, which contains collections of antigen-antibody complex structures. Lim et al. 
<cit.> generated datasets of antibody sequences from mice immunized with cytotoxic T lymphocyte-associated antigen 4 (CTLA-4) and programmed cell death protein 1 (PD-1); then, they built deep learning models to predict binder and non-binder antibodies to CTLA-4 and PD-1. Huang et al. <cit.> proposed AbAgIntPre, a deep learning-assisted prediction method that was trained using only the amino acid sequences in two public databases: SAbDab and the coronavirus antibody database (CoV-AbDab) <cit.>. Despite these promising developments, progress in therapeutic antibody discovery has lagged behind progress in other areas of drug discovery. A major reason for this is the lack of availability of high-quality, large-scale datasets of antigen-antibody interactions. First, most existing datasets have small sample sizes. For example, as of May 2023, SAbDab and Lim et al.'s datasets <cit.> contain 5,737 and 3,064 binder samples, respectively. In addition, SAbDab only has samples for binding antigen-antibody pairs. In previous studies <cit.> using SAbDab, antigens and antibodies were randomly paired to form non-binding pairs. CoV-AbDab contains 12,021 entries, from which more than 30,000 antigen-antibody pairs, including non-binding pairs, are available. However, CoV-AbDab provides only the variant name and not the amino acid sequence. As the variant name is defined by representative mutations, the exact amino acid sequence may vary between publications, thus making it difficult to use CoV-AbDab for antibody discovery because a single amino acid change can be critical for an antigen-antibody interaction. To overcome these limitations, we have developed AVIDa-hIL6, an antigen-variable domain of heavy chain of heavy chain antibody (VHH) interaction dataset produced by an alpaca immunized with the human interleukin-6 (IL-6) protein. IL-6 is a relatively small protein, a simply structured, well-characterized cytokine that exists as a monomer in the body and is associated with many inflammatory diseases and cancers. To ensure a wide variety of antibody sequences, we used VHHs, whose simple structures enable much easier identification of full-length amino acid sequences by DNA sequencing technologies such as next-generation sequencing (NGS) than for conventional antibodies. By leveraging these advantages, AVIDa-hIL6 contains 573,891 antigen-VHH pairs, including 20,980 binding pairs, with their amino acid sequences. In addition, we have developed a novel labeling method to obtain reliable labels for binding and non-binding. Furthermore, AVIDa-hIL6 contains information on the interaction of diverse VHHs with 30 different mutants produced by artificial point mutations, in addition to the wild-type IL-6 protein. As the COVID-19 pandemic has shown, viruses continuously evolve through mutation to evade the immune system. Because emerging mutations involving amino acid substitutions can lead to profound changes in antibody binding, prediction of their effects is critical in the development of therapeutic antibodies. Notably, AVIDa-hIL6 contains antibody sequences that are positive for most IL-6 mutants but negative for specific IL-6 mutants, or vice versa, thus providing important insights for understanding how antigen mutations affect antibody binding. The main contributions of this paper are summarized as follows. 
* We release AVIDa-hIL6, which is the largest existing dataset for predicting antigen-antibody interactions (10 times larger than any other public dataset) and contains amino acid sequences of antigens and antibodies and binary labels for binding and non-binding pairs.
* AVIDa-hIL6 has the wild type and 30 mutants of the IL-6 protein as antigens, and it includes many sensitive cases in which point mutations in IL-6 enhance or inhibit antibody binding.
* We have designed a novel data generation method, including data labeling, by using the immune system of a live alpaca. This method can be applied to any target antigen, in addition to IL-6.
* We report benchmark results for the prediction of antigen-antibody interactions by using neural network-based baseline models. These results confirm that AVIDa-hIL6 provides valuable benchmarks for assessing a model's performance in capturing the impact of antigen mutations on antibody binding.
§ RELATED WORK In this section, we put our work into context with public datasets for predicting antigen-antibody interactions. Recent advances in NGS technology now enable the construction of large-scale databases of antibody sequences, such as the observed antibody space (OAS) <cit.> and iReceptor <cit.>. However, those databases cannot be directly used as training data for predicting antigen-antibody interactions because of the lack of information on the antigen corresponding to each antibody. Thus, those databases are primarily used to build antibody-specific language models <cit.> and to generate new antibody sequences via deep generative models <cit.>. Here, we focus on datasets with information on antigen-antibody interactions, as summarized in Table <ref>. Antibody Type. The datasets listed in Table <ref> contain two types of antibodies: conventional and VHH. A conventional antibody comprises two pairs of heavy and light chains. As the heavy and light chains are encoded on different chromosomes, cell cloning is required to identify the pair of DNA sequences. In contrast, a VHH, found in camelids such as alpacas and llamas, comprises only heavy chains. A large number of VHH sequences can be identified from lymphocytes in bulk with NGS technology, which is much more efficient than identification of conventional antibodies. Recently, VHHs have gained interest as therapeutic agents because of their small size, high stability, good human tolerability, and relative ease of production <cit.>. SAbDab-nano <cit.> and sdAb-DB <cit.> are public databases that collect only VHHs, but they both have too few samples for machine learning drug discovery or design applications. Hence, we use immunized alpacas as a data source to generate a large amount of VHH sequence data. Sequence and Structure Information. SAbDab <cit.> and its sub-database, SAbDab-nano <cit.>, collect all the available antigen-antibody complex structures in the Protein Data Bank (PDB) <cit.>. Also, some data in sdAb-DB <cit.> and CoV-AbDab <cit.> include structural information from the PDB. Because accurate knowledge of antibody structures is important for understanding the antigen-binding function of antibodies, SAbDab is increasingly used for antibody structure prediction via deep learning <cit.>. However, experimental methods for antibody structure determination, such as X-ray crystallography and cryo-electron microscopy, are relatively expensive and time-consuming, making it difficult to increase the amount of data. 
More recently, machine learning methods such as AlphaFold <cit.> and RoseTTAFold <cit.>, which accurately predict a protein's structure from the amino acid sequence, have greatly accelerated progress in the biological sciences. AVIDa-hIL6 focuses on amino acid sequences of antigens and antibodies to generate sufficient training data for machine learning. Number of Labeled Samples. Some existing datasets only have samples for binding antigen-antibody pairs. One reason is that the identification of non-binding antigen-antibody pairs has little clinical significance. In previous studies <cit.> using SAbDab, antigens and antibodies were randomly paired to form non-binding pairs. This process was based on the assumption that randomly sampled pairs are unlikely to bind because of antibodies' high target specificity. Lim et al. <cit.> generated a dataset of antibody sequences labeled as “binder” and “non-binder” through an experimental approach using mice immunized with CTLA-4 and PD-1. The number of samples in Table <ref> is the total for CTLA-4 and PD-1. As with AVIDa-hIL6, Lim et al. created their dataset from their original experiments, thus revealing the potential to increase the amount and diversity of data by the same approach using arbitrary antigens. CoV-AbDab collects antibodies that bind to at least one beta coronavirus and currently contains 12,021 entries. Because each entry has “Binds to” and “Doesn't Bind to” columns and contains zero to multiple antigens for a specific antibody, the number of samples in Table <ref> was counted for each possible antigen/antibody pair. However, CoV-AbDab only provides variant names, e.g., SARS-CoV1_Omicron-BA2 or SARS-CoV2_Alpha, and each antigen's exact amino acid sequence is only available if it was provided in the original publication. § AVIDA-HIL6: ANTIGEN-VHH INTERACTION DATASET PRODUCED FROM ALPACA IMMUNIZED WITH HUMAN IL-6 PROTEIN AVIDa-hIL6 is a dataset of antigen-VHH interactions with amino acid sequences and binary labels for binding and non-binding. In this section, we introduce the dataset generation process, dataset statistics, and verification of label reliability. A labeled dataset and the raw data are available at <https://avida-hil6.cognanous.com>. The dataset is released under a CC BY-NC 4.0 license. §.§ Dataset Generation Figure <ref> shows an overview of the data generation process. Appendix <ref> gives the detailed experimental procedures and the amino acid sequences of the IL-6 proteins. Step 1. Immunization To ensure the diversity of antibodies binding to the IL-6 protein that we used as an antigen, we used the immune system of a live alpaca. We introduced a site-directed mutation with alanine at intervals of three to six amino acids, like the alanine scanning technique <cit.>, which is used in molecular biology to determine the contribution of a specific amino acid; as a result, we obtained 30 types of mutants in addition to the wild-type IL-6 protein. A mutant is denoted, for example, as IL6_P42A, which means that an amino acid in the wild type is substituted from proline to alanine at position 42. We immunized a single alpaca with a cocktail of 31 different IL-6 proteins four times at about two-week intervals. After each immunization, one blood sample and one or more lymph nodes from different body sites were collected, yielding a total of 12 libraries. Step 2. Phage Library Construction We used phage display <cit.> to identify VHHs that bind to the IL-6 protein. 
Phage display is a technique for displaying the target proteins on a phage surface in a form that allows them to bind to other molecules. We cloned the VHH genes obtained from each library into the pMES4 phagemid vector to display the VHHs on the phagemid surface. As a result, 12 phage libraries corresponding to each of the above libraries were generated and designated as the mother libraries. Step 3. Affinity Selection Affinity selection by biopanning using the mother libraries can enrich phages displaying VHHs that bind to the target molecule. For the target molecules, we used the wild type and 30 mutants of IL-6 and a negative control sample that did not contain any IL-6 protein. Only experiments targeting the wild-type IL-6 protein were performed in triplicate to ensure reproducibility. The mother library was added to the container and incubated with target-coated magnet beads. Then, non-binding phages were washed away, and the remaining phages that bound to the beads were eluted. Consequently, by performing one round of biopanning on each of the 12 mother libraries, we generated a total of 408 sublibraries. Step 4. Sequence Analysis The amino acid sequences of VHHs displayed on a phage surface can be identified by analyzing the phage genome's DNA with NGS technology. Approximately 100,000 paired reads were generated for each library by NGS, and singletons were removed to avoid sequencing errors. The DNA sequences were translated into amino acid sequences. We counted the number of occurrences of each unique VHH amino acid sequence from the paired reads, which reflected the concentration of each VHH in the library. For each of the 12 mother libraries before panning and 408 sublibraries after panning, we created a table with the VHH amino acid sequences and their read counts. Step 5. Data Labeling We designed a labeling method to distinguish whether a VHH binds to each IL-6 protein type by applying a statistical test for differences in the proportions of each VHH in a library before and after panning. Here, we focused on examining the binding between a specific VHH and a specific target molecule. Let p_1 and p_2 denote the population proportions of a specific VHH in the libraries before and after panning. We identified some of the VHH sequences in the libraries by NGS analysis. Let n_1 and n_2 denote the libraries' total read counts before and after panning, respectively, and let x_1 and x_2 denote the read counts of a specific VHH in the libraries. Then, the respective sample proportions of a specific VHH in each library are p̂_1=x_1/n_1 and p̂_2=x_2/n_2. Given that the minimum value of all possible n_1 and n_2 was over 10,000, we assumed that p̂_1 and p̂_2 follow normal distributions with mean p_1 and p_2 and variance p_1(1-p_1)/n_1 and p_2(1-p_2)/n_2, respectively, according to the central limit theorem. Furthermore, the difference in the proportions p̂_1-p̂_2 can also be approximated by a normal distribution due to the reproductive property of the normal distribution. Thus, the test statistic Z under null hypothesis H_0: p_1=p_2 was calculated as follows. Z=(p̂_1-p̂_2)/√(p(1-p)(1/n_1+1/n_2)), where p is the pooled proportion calculated as p=(x_1+x_2)/(n_1+n_2). The p-value of Z was calculated using the standard normal distribution. In the same way, p-values were calculated for all VHH-target pairs in the sublibraries with respect to the 12 corresponding mother libraries.
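For illustration, a minimal Python sketch of this two-proportion test for a single VHH-target pair is given below. This is our own illustration rather than the released labeling code, and the use of a two-sided p-value from the standard normal distribution is our assumption.

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Test H0: p1 = p2 for one VHH-target pair.

    x1, n1: read count of the VHH and total read count in the mother library (before panning).
    x2, n2: read count of the VHH and total read count in the sublibrary (after panning).
    Returns the test statistic Z and a two-sided p-value (assumption).
    """
    p1_hat, p2_hat = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                              # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))   # standard error under H0
    z = (p1_hat - p2_hat) / se
    p_value = math.erfc(abs(z) / math.sqrt(2.0))                # equals 2 * (1 - Phi(|Z|))
    return z, p_value

# Hypothetical counts: a VHH whose proportion increases after panning.
z, p = two_proportion_z_test(x1=15, n1=100_000, x2=300, n2=120_000)
print(f"Z = {z:.2f}, two-sided p-value = {p:.3g}")
```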
Because we had 12 sublibraries associated with the same target molecule, we adopted the smallest p-value, indicating the most significant difference in proportion, among identical VHH-target pairs. If a specific VHH's proportion in a sublibrary increased from the proportion in the corresponding mother library and the p-value was 0.05 or less (our chosen significance level), the VHH-target pair was labeled with “binder.” Similarly, if the proportion decreased and the p-value was 0.05 or less, the pair was labeled with “non-binder.” Finally, if the p-value exceeded 0.05, the pair was labeled with “non-significant.” Pairs labeled “non-significant” were not used for supervised learning to predict antigen-antibody interactions. The results of biological experiments always contain background noise, such as binding to contaminating proteins. Therefore, we developed a novel noise reduction algorithm to avoid false positives and improve label reliability. We reconfirmed VHHs labeled as “binder” to any of the IL-6 proteins by comparing the labels to negative control samples under the following conditions. * If the VHH was a non-binder to the negative control sample, the label remained “binder.” * If the VHH was a binder to the negative control sample, the label was reassigned from “binder” to “noise” because of possible false positives. * If the VHH was “non-significant” with respect to the negative control sample, the ratio of p-values was compared to 10^2.5, a value empirically determined by domain experts, as follows. * If the ratio of p-values was below 10^2.5, the label was reassigned from “binder” to “non-significant” because of possible false positives. * If the ratio of p-values was 10^2.5 or more, the label remained “binder.” We carefully verified the reliability of our labels, as discussed in section <ref>. The code for data labeling is available at <https://github.com/cognano/AVIDa-hIL6>. §.§ Dataset Statistics AVIDa-hIL6 contains 573,891 data samples, comprising 20,980 binding pairs and 552,911 non-binding pairs. The proportion of binding pairs is about 3.7 %. Figure <ref>(a) shows the number of samples for each antigen type. Although the number of samples varied for each IL-6 protein type, we successfully generated at least 10,000 samples for the wild type and each of the 30 mutants. Furthermore, at least 250 binder VHH sequences existed for each IL-6 protein type. Because we labeled the VHH sequences in the mother library for each IL-6 protein type, AVIDa-hIL6 has information on whether the same VHH sequence binds to each of multiple targets. The number of unique VHH sequences in AVIDa-hIL6 is 38,599, including 4,425 sequences that bind to at least one IL-6 protein type. Importantly, 650 VHH sequences, about 14.7 % of the VHH binders, show binding to specific IL-6 protein types but non-binding to others. We visualized whether 100 sequences extracted from these 650 VHH sequences bound to each antigen type, as shown in Figure <ref>(b). These samples have valuable information on which mutations enhance or inhibit antibody binding, which should be strongly associated with the IL-6 protein's binding site. To gain a better understanding of the distribution of VHH sequences, we compared it to the distributions in the existing VHH datasets SAbDab-nano and sdAb-DB. We used only the binders from AVIDa-hIL6 because the existing datasets only contain binders. The numbers of unique VHH binders in SAbDab-nano, sdAb-DB, and AVIDa-hIL6 are 828, 1,414, and 4,425, respectively.
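Returning to the labeling procedure described above, the decision rules can be summarized in the following sketch. It reflects our reading of the procedure rather than the released labeling code; in particular, the exact definition of the p-value ratio compared against 10^2.5 is our assumption.

```python
SIGNIFICANCE = 0.05
RATIO_THRESHOLD = 10 ** 2.5   # empirically determined threshold quoted in the text

def initial_label(p_value: float, proportion_increased: bool) -> str:
    """Label a VHH-target pair from the two-proportion test result."""
    if p_value > SIGNIFICANCE:
        return "non-significant"
    return "binder" if proportion_increased else "non-binder"

def noise_reduction(label: str, label_vs_control: str,
                    p_value: float, p_value_vs_control: float) -> str:
    """Re-examine a 'binder' label against the negative control sample."""
    if label != "binder":
        return label
    if label_vs_control == "non-binder":
        return "binder"                      # binding appears specific to the IL-6 target
    if label_vs_control == "binder":
        return "noise"                       # possible false positive
    # "non-significant" with respect to the negative control:
    ratio = p_value_vs_control / p_value     # assumed definition of the p-value ratio
    return "binder" if ratio >= RATIO_THRESHOLD else "non-significant"
```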
To mitigate the computational complexity, we randomly sampled 700 unique VHH sequences from each dataset and calculated all pairwise sequence identities with Biopython v1.81 <cit.>. Figure <ref>(c) shows the distributions of sequence identities for these datasets. The results indicate that AVIDa-hIL6 has peaks at regions of higher sequence identity than the other datasets. Interestingly, AVIDa-hIL6 has a peak at 97 % sequence identity, which is absent for the others. As a living organism's immune response progresses, effective antibodies with high binding affinity to the antigen are selected through a process called affinity maturation and are further mutated by repeated exposure to the antigen. Thus, these results may reflect that a live alpaca's immune system selects VHH sequences with high sequence identity that specifically bind to target IL-6 proteins through affinity maturation. §.§ Label Reliability To verify our label reliability, we tested the antibody binding ability by immunofluorescence staining. Because the number of VHHs that could be verified was limited by the time and cost of biological experiments, VHHs were selected under the following conditions to verify label reliability efficiently. First, we examined only the wild-type IL-6 protein as a target antigen. Next, the amino acid sequences of all labeled VHHs with higher than 93 % identity were clustered by the UCLUST algorithm <cit.> to validate diverse sequences. When two or more VHHs with the same label were in the same cluster, the one with the highest read count was selected. We know empirically that if all the VHHs in a cluster have the same label, these labels are likely to be true. Therefore, such VHHs were excluded from the candidates, and VHHs with suspect labels were selected by domain experts. Finally, we tested 10 binder-labeled, six non-binder-labeled, and four noise-labeled VHHs for validation. Immunofluorescence analysis showed that all 10 binder-labeled VHHs actually bound to the wild-type IL-6 protein, whereas the six non-binder-labeled and four noise-labeled VHHs did not. Figure <ref> shows the results for a representative clone of the three types of labeled VHHs. Appendix <ref> gives the results for all the tested clones and their amino acid sequences. We could observe the overlapping of IL-6 signals and VHH signals in the binder group, whereas the VHH signals were lost in the non-binder group. The VHH signals did not coincide with the IL-6 signals in the noise group, which can be interpreted as noise-labeled VHHs binding nonspecifically to cells. These results indicate that our noise reduction algorithm contributed to reducing false positives. In addition, these results were also confirmed by kinetic assay via biolayer interferometry (BLI), as described in Appendix <ref>. As a result, we could ensure that AVIDa-hIL6 has highly reliable labels. § BENCHMARKS §.§ Benchmark Task To demonstrate the use of AVIDa-hIL6 for antibody discovery, we performed an experiment on binary classification of whether a given antigen-antibody pair binds. By leveraging information on the binding of diverse antibodies to antigen mutants, we defined a benchmark task to assess the model performance in capturing the impact of antigen mutations on antibody binding. First, we randomly selected 15 mutants and reserved the data samples for those mutants as a test set. Next, we trained models by using only the wild-type IL-6 protein and evaluated their performance in predicting antibody bindings in the test set. 
Then, we randomly selected one mutant from the remaining 15 mutants outside the test set and added it to the training set to evaluate each model's predictive performance on the test set. By repeating this process, we tracked the model's predictive performance for unknown mutants contained only in the test set. Because the order of adding mutants to the training set affected the model performance, we ran the same experiment five times in shuffled order, and we report the averaged results here. This experimental scenario assumes prediction of antibody candidates that will bind to future emerging mutants according to the binding information of antigens that have already been observed. For all model training, we randomly selected 10 % of the training set for model validation. §.§ Baseline Models We adopted three neural network-based models as baselines. The model inputs were the amino acid sequence of an IL-6 protein with a length of 218 and a VHH with a maximum length of 152. * AbAgIntPre <cit.> is a state-of-the-art model designed for antibody-antigen interactions based on amino acid sequences. It combines the composition of k-spaced amino acid pairs (CKSAAP) <cit.> encoding and a convolutional neural network (CNN) model with a Siamese-like architecture. We used the model parameters reported in the original paper <cit.>. * PIPR <cit.> is a residual recurrent convolutional neural network (RCNN) for protein-protein interaction (PPI) prediction. Following the PIPR strategy, we used an amino acid encoding that combined a five-dimensional vector obtained from a pretrained skip-gram model using the STRING database <cit.> and a seven-dimensional vector describing the categorization of electrostaticity and hydrophobicity. We changed the number of RCNN units from five to three because our sequences are less than one-ninth the length of the protein inputs in the original PIPR. The other model parameters were the same as in the paper <cit.>. Although PIPR was not specifically designed for antigen-antibody interactions, such interactions (excluding non-protein antigens) can be considered a subset of PPI, meaning that models designed for PPI can also be applied to antigen-antibody interactions. * A Multi-Layer Perceptron (MLP) with one hidden layer of 512 neurons was used as a simpler neural network-based model than the above two models. We used one-hot encoding to represent amino acid sequences. One-hot vectors of the VHHs and IL-6 proteins were flattened and concatenated for input to the MLP. In this experiment, all models were trained for 100 epochs on one NVIDIA Tesla V100 GPU by using the Adam optimizer with an initial learning rate of 0.0001 and a batch size of 256. The code to run the benchmark models is available at <https://github.com/cognano/AVIDa-hIL6>. §.§ Results Figure <ref>(a) shows the prediction performance of the baseline models as a function of the number of IL-6 protein types used for model training. We used the precision, recall, and F1-score as evaluation metrics because the prediction of antibody binders, which are fewer in number than non-binders, is much more important for drug discovery. Figure <ref>(b) shows the precision-recall curves when 1 and 16 IL-6 protein types were used for training. When the number of antigens was 1—that is, when only the wild-type IL-6 protein was used for training—the recalls of AbAgIntPre, PIPR, and the MLP were 67.9, 57.6, and 67.2 %, respectively.
These results indicate that the models failed to predict over 30 % of the effective VHHs that bound to mutants in the test set. All the metrics improved as the number of IL-6 protein types used for training increased, and this trend is clearly evident in the precision-recall curves and the area under the curve (AUC) values. After adding 15 mutants for training, the precisions of the baseline models were over 95 %, but the recalls were still only about 85 %. For drug discovery applications, the construction of a generalized model for unknown mutations from as little antigen-binding information as possible is ideal, because the number of possible mutations in antigens is tremendously large. As shown by the F1 scores and AUCs in Figures <ref>(a) and (b), respectively, AbAgIntPre outperformed the other two models, but there was still room for improvement. Furthermore, the performance of AbAgIntPre was not significantly better than that of the simpler MLP. AVIDa-hIL6 differs significantly from the existing datasets used for training by AbAgIntPre and PIPR because it includes cases in which changes of a few amino acids enhance or inhibit antibody binding. Given this difference in properties, AbAgIntPre and PIPR may not have a clear performance advantage over the MLP. Hence, these results indicate the need for research on model architectures that are dedicated to predicting antibody binding to antigen mutants, and AVIDa-hIL6 will be a useful benchmark for evaluating such models. § DISCUSSION §.§ Binding Site Prediction Antibodies recognize specific regions of antigens, called epitopes, and the regions of antibodies that are directly involved in recognition are called paratopes. Because epitopes and paratopes are crucial for the affinity and specificity of antigen-antibody interactions, many studies have been devoted to predicting epitopes <cit.> and paratopes <cit.>. AVIDa-hIL6 has highly sensitive information on changes of a few amino acids in both the antigen and antibody that can significantly affect binding, which should be strongly associated with epitopes and paratopes. Thus, AVIDa-hIL6 may facilitate research on predicting epitopes and paratopes from amino acid sequences. §.§ Potential Risk In recent years, VHH technology has rapidly developed not only as a research and diagnostic tool but also as a therapeutic agent. VHHs are known to have low toxicity to humans, and several VHH drugs have been approved to date <cit.>. As our dataset is derived from alpacas, a low risk of autoimmune adverse events is guaranteed for alpacas but not for humans. Therefore, a phase I clinical trial cannot be omitted for each clone for the time being. §.§ Limitations and Future Works We introduce two potential limitations of AVIDa-hIL6 and describe future works to address them. The first limitation is that AVIDa-hIL6 uses artificial mutations. Such mutations offer the advantage of investigating binding to an arbitrary number of mutants; however, natural mutations are more complex, as different sites mutate simultaneously. The second limitation is the lack of antigen diversity: specifically, AVIDa-hIL6 only has the IL-6 protein as an antigen. Our experimental scenario is to predict antibody binding to unknown mutants of a known antigen. In drug discovery applications, there is also a need to find effective antibodies against new emerging antigens. These limitations lead to the narrow applicability of a model trained on AVIDa-hIL6.
An essential approach to overcome these limitations will be to accumulate labeled data for a wider variety of antigens and their mutants. Because our data generation method described in section <ref> is applicable to any target antigen, it can be a fundamental technology for establishing a more comprehensive database of antigen-antibody interactions. In fact, we used the same approach to generate a dataset for SARS-CoV-2 variants and successfully found effective antibodies <cit.>. In the future, we plan to generate and release datasets for various antigens, which should be more practical for building models to predict antigen-antibody interactions. § CONCLUSION In this paper, we have described AVIDa-hIL6, a large-scale dataset of IL-6 protein-VHH pairs containing amino acid sequence information and reliable labels for binding or non-binding pairs. By introducing artificial mutations into the IL-6 protein used as an antigen, we generated an interaction dataset for 30 types of mutants in addition to wild-type IL-6. This design enabled AVIDa-hIL6 to include many sensitive cases in which point mutations in the IL-6 protein enhance or inhibit antibody binding, thus providing researchers with valuable insights into the effects of antigen mutations on antibody binding. We envision that AVIDa-hIL6 will help democratize antibody discovery and serve as a valuable benchmark for machine learning research in the growing field of predicting antigen-antibody interactions. We thank Tomohisa Oda for developing the AVIDa-hIL6 website. splncs04 10 abanades2022ablooper Abanades, B., Georges, G., Bujotzek, A., Deane, C.M.: ABlooper: fast accurate antibody CDR loop structure prediction with accuracy estimation. Bioinformatics 38(7), 1877–1880 (2022) al2022therapeutic Al Ojaimi, Y., Blin, T., Lamamy, J., Gracia, M., Pitiot, A., Denevault-Sabourin, C., Joubert, N., Pouget, J.P., Gouilleux-Gruart, V., Heuzé-Vourc’h, N., et al.: Therapeutic antibodies–natural and pathological barriers and strategies to overcome them. Pharmacology & Therapeutics 233, 108022 (2022) amimeur2020designing Amimeur, T., Shaver, J.M., Ketchem, R.R., Taylor, J.A., Clark, R.H., Smith, J., Van Citters, D., Siska, C.C., Smidt, P., Sprague, M., et al.: Designing feature-controlled humanoid antibody discovery libraries using generative adversarial networks. bioRxiv (2020) arbabi2022camelid Arbabi-Ghahroudi, M.: Camelid single-domain antibodies: Promises and challenges as lifesaving treatments. International journal of molecular sciences 23(9),  5009 (2022) aronesty2013comparison Aronesty, E.: Comparison of sequencing utility programs. The open bioinformatics journal 7(1) (2013) baek2021accurate Baek, M., DiMaio, F., Anishchenko, I., Dauparas, J., Ovchinnikov, S., Lee, G.R., Wang, J., Cong, Q., Kinch, L.N., Schaeffer, R.D., et al.: Accurate prediction of protein structures and interactions using a three-track neural network. Science 373(6557), 871–876 (2021) berman2000protein Berman, H.M., Westbrook, J., Feng, Z., Gilliland, G., Bhat, T.N., Weissig, H., Shindyalov, I.N., Bourne, P.E.: The Protein Data Bank. Nucleic acids research 28(1), 235–242 (2000) bolger2014trimmomatic Bolger, A.M., Lohse, M., Usadel, B.: Trimmomatic: a flexible trimmer for illumina sequence data. Bioinformatics 30(15), 2114–2120 (2014) chen2019multifaceted Chen, M., Ju, C.J.T., Zhou, G., Chen, X., Zhang, T., Chang, K.W., Zaniolo, C., Wang, W.: Multifaceted protein–protein interaction prediction based on Siamese residual RCNN. 
Bioinformatics 35(14), i305–i314 (2019) chen2008prediction Chen, Y.Z., Tang, Y.R., Sheng, Z.Y., Zhang, Z.: Prediction of mucin-type O-glycosylation sites in mammalian proteins using the composition of k-spaced amino acid pairs. BMC Bioinformatics 9(1), 1–12 (2008) chinery2023paragraph Chinery, L., Wahome, N., Moal, I., Deane, C.M.: Paragraph—antibody paratope prediction using graph neural networks with minimal feature vectors. Bioinformatics 39(1), btac732 (2023) cock2009biopython Cock, P.J., Antao, T., Chang, J.T., Chapman, B.A., Cox, C.J., Dalke, A., Friedberg, I., Hamelryck, T., Kauff, F., Wilczynski, B., et al.: Biopython: freely available Python tools for computational molecular biology and bioinformatics. Bioinformatics 25(11), 1422–1423 (2009) corrie2018ireceptor Corrie, B.D., Marthandan, N., Zimonja, B., Jaglale, J., Zhou, Y., Barr, E., Knoetze, N., Breden, F.M., Christley, S., Scott, J.K., et al.: iReceptor: a platform for querying and analyzing antibody/B-cell and T-cell receptor repertoire data across federated repositories. Immunological reviews 284(1), 24–41 (2018) cunningham1989high Cunningham, B.C., Wells, J.A.: High-resolution epitope mapping of hGH-receptor interactions by alanine-scanning mutagenesis. Science 244(4908), 1081–1085 (1989) davydova2022protein Davydova, E.K.: Protein Engineering: Advances in phage display for basic science and medical research. Biochemistry (Moscow) 87(Suppl 1), S146–S167 (2022) dunbar2014sabdab Dunbar, J., Krawczyk, K., Leem, J., Baker, T., Fuchs, A., Georges, G., Shi, J., Deane, C.M.: SAbDab: the structural antibody database. Nucleic acids research 42(D1), D1140–D1146 (2014) edgar2010search Edgar, R.C.: Search and clustering orders of magnitude faster than BLAST. Bioinformatics 26(19), 2460–2461 (2010) huang2022abagintpre Huang, Y., Zhang, Z., Zhou, Y.: AbAgIntPre: A deep learning method for predicting antibody-antigen interactions based on sequence information. Frontiers in Immunology 13 (2022) jin2023nanobodies Jin, B.k., Odongo, S., Radwanska, M., Magez, S.: Nanobodies: A review of generation, diagnostics and therapeutics. International Journal of Molecular Sciences 24(6),  5994 (2023) jovvcevska2020therapeutic Jovčevska, I., Muyldermans, S.: The therapeutic potential of nanobodies. BioDrugs 34(1), 11–26 (2020) jumper2021highly Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., et al.: Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021) kandari2021antibody Kandari, D., Bhatnagar, R.: Antibody engineering and its therapeutic applications. International Reviews of Immunology pp. 156–183 (2021) kim2023computational Kim, J., McFee, M., Fang, Q., Abdin, O., Kim, P.M.: Computational and artificial intelligence-based methods for antibody development. Trends in Pharmacological Sciences 44(3), 175–189 (2023) leem2022deciphering Leem, J., Mitchell, L.S., Farmery, J.H., Barton, J., Galson, J.D.: Deciphering the language of antibodies using self-supervised learning. Patterns 3(7), 100513 (2022) liberis2018parapred Liberis, E., Veličković, P., Sormanni, P., Vendruscolo, M., Liò, P.: Parapred: antibody paratope prediction using convolutional and recurrent neural networks. Bioinformatics 34(17), 2944–2950 (2018) lim2022predicting Lim, Y.W., Adler, A.S., Johnson, D.S.: Predicting antibody binders and generating synthetic antibodies using deep learning. 
mAbs 14(1), 2069075 (2022) maeda2022panel Maeda, R., Fujita, J., Konishi, Y., Kazuma, Y., Yamazaki, H., Anzai, I., Watanabe, T., Yamaguchi, K., Kasai, K., Nagata, K., et al.: A panel of nanobodies recognizing conserved hidden clefts of all SARS-CoV-2 spike variants including Omicron. Communications Biology 5,  669 (2022) martin2011cutadapt Martin, M.: Cutadapt removes adapter sequences from high-throughput sequencing reads. EMBnet. journal 17(1), 10–12 (2011) olsen2022observed Olsen, T.H., Boyles, F., Deane, C.M.: Observed Antibody Space: A diverse database of cleaned, annotated, and translated unpaired and paired antibody sequences. Protein Science 31(1), 141–146 (2022) olsen2022ablang Olsen, T.H., Moal, I.H., Deane, C.M.: AbLang: an antibody language model for completing antibody sequences. Bioinformatics Advances 2(1), vbac046 (2022) paszke2019pytorch Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: Pytorch: An imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems 32 (NeurIPS 2019) (2019) raybould2021cov Raybould, M.I., Kovaltsuk, A., Marks, C., Deane, C.M.: CoV-AbDab: the coronavirus antibody database. Bioinformatics 37(5), 734–735 (2021) rice2000emboss Rice, P., Longden, I., Bleasby, A.: EMBOSS: the european molecular biology open software suite. Trends in genetics 16(6), 276–277 (2000) ruffolo2021deciphering Ruffolo, J.A., Gray, J.J., Sulam, J.: Deciphering antibody affinity maturation with language models and weakly supervised learning. arXiv preprint arXiv:2112.07782 (2021) ruffolo2022antibody Ruffolo, J.A., Sulam, J., Gray, J.J.: Antibody structure prediction using interpretable deep learning. Patterns 3(2), 100406 (2022) schneider2022dlab Schneider, C., Buchanan, A., Taddese, B., Deane, C.M.: DLAB: deep learning methods for structure-based virtual screening of antibodies. Bioinformatics 38(2), 377–383 (2022) schneider2022sabdab Schneider, C., Raybould, M.I., Deane, C.M.: SAbDab in the age of biotherapeutics: updates including SAbDab-nano, the nanobody structure tracker. Nucleic acids research 50(D1), D1368–D1372 (2022) shen2016seqkit Shen, W., Le, S., Li, Y., Hu, F.: SeqKit: a cross-platform and ultrafast toolkit for FASTA/Q file manipulation. PloS one 11(10), e0163962 (2016) da2022epitope3d da Silva, B.M., Myung, Y., Ascher, D.B., Pires, D.E.: epitope3D: a machine learning method for conformational B-cell epitope prediction. Briefings in Bioinformatics 23(1), bbab423 (2022) smith1985filamentous Smith, G.P.: Filamentous fusion phage: novel expression vectors that display cloned antigens on the virion surface. Science 228(4705), 1315–1317 (1985) szklarczyk2016string Szklarczyk, D., Morris, J.H., Cook, H., Kuhn, M., Wyder, S., Simonovic, M., Santos, A., Doncheva, N.T., Roth, A., Bork, P., et al.: The STRING database in 2017: quality-controlled protein–protein association networks, made broadly accessible. Nucleic acids research 45(D1), D362–D368 (2017) tubiana2022scannet Tubiana, J., Schneidman-Duhovny, D., Wolfson, H.J.: ScanNet: an interpretable geometric deep learning model for structure-based protein binding site prediction. 
Nature Methods 19, 730–739 (2022) wilman2022machine Wilman, W., Wróbel, S., Bielska, W., Deszynski, P., Dudzic, P., Jaszczyszyn, I., Kaniewski, J., Młokosiewicz, J., Rouyan, A., Satława, T., et al.: Machine-designed biotherapeutics: opportunities, feasibility and advantages of deep learning in computational antibody discovery. Briefings in Bioinformatics 23(4), bbac267 (2022) wilton2018sdab Wilton, E.E., Opyr, M.P., Kailasam, S., Kothe, R.F., Wieden, H.J.: sdAb-DB: the single domain antibody database. ACS Synthetic Biology 7(11), 2480–2484 (2018) § APPENDIX §.§ Ethics Statement for Animal Experiments All animal experiments on an alpaca were conducted in accordance with the KYODOKEN Institute for Animal Science Research and Development (Kyoto, Japan). Veterinarians performed breeding, health maintenance, and immunization by adhering to the published Guidelines for Proper Conduct of Animal Experiments by the Science Council of Japan. The KYODOKEN Institutional Animal Care and Use Committee approved the protocols for these studies (KYODOKEN protocol number 20190216). §.§ Dataset Generation Here, we describe the detailed experimental procedures and conditions for each step. Step 1. Immunization We immunized a single alpaca with purified recombinant human IL-6 protein and several single-amino-acid mutants. Specifically, the gene encoding the human IL-6 protein was codon-optimized, synthesized, and sub-cloned in the pcDNA3.1(+) vector (Thermo Fisher Scientific K.K., Tokyo, Japan). The amino acid sequence of the wild-type IL-6 protein with a C-terminal 6×His-tag was “MNSFSTSAFGPVAFSLGLLLVLPAAFPAPVPPGEDSKDVAAPHRQPLTSSERIDKQIRY- ILDGISALRKETCNKSNMCESSKEALAENNLNLPKMAEKDGCFQSGFNEETCLVKIITGL- LEFEVYLEYLQNRFESSEEQARAVQMSTKVLIQFLQKKAKNLDAITTPDPTTNASLLTKL- QAQNQWLQDMTTHLILRSFKEFLQSSLRALRQMHHHHHH.” We introduced a site-directed mutation with alanine at intervals of three to six amino acids, like the alanine scanning technique <cit.>, which is used in molecular biology to determine the contribution of a specific amino acid. A total of 30 single-amino-acid mutants was prepared: P42A, Q45A, T48A, E51A, D54A, I57A, I60A, G63A, K69A, C72A, C78A, S81A, E87A, L90A, P93A, D99A, F102A, G105A, E108A, T117A, L120A, L126A, L129A, S135A, E138A, Q144A, F153A, D162A, T165A, and D168A. Here, for example, P42A means that an amino acid in the wild type was substituted from proline to alanine at position 42. The antigen cocktail mixture was emulsified in Titermax (Funakoshi, Tokyo, Japan) adjuvant at a dose of 600 μg and subcutaneously injected into an alpaca four times at about two-week intervals. Lymph nodes and blood samples were each collected four times, resulting in a total of 12 libraries. Step 2. Phage Library Construction Peripheral blood mononuclear cells (PBMCs) were obtained from blood samples by sucrose density gradient centrifugation using Ficoll (Nacalai Tesque, Kyoto, Japan). The lymph nodes and PBMC samples were washed with phosphate-buffered saline (PBS, Nacalai Tesque) and suspended in an RNAlater solution (Thermo Fisher Scientific K.K., Tokyo, Japan). Total RNA was isolated from these samples by using Direct-Zol RNA MiniPrep (Zymo Research, Irvine, CA). Complementary DNA was synthesized from 1 μg of total RNA as a template by using random hexamer primers and SuperScript II reverse transcriptase (Thermo Fisher Scientific K.K.). 
The coding regions of the VHH domain were amplified using LA Taq polymerase (TAKARA Bio Inc., Shiga, Japan) with two PAGE-purified primers (CALL001, 5'-GTCCTGGCTGCTCTTCTACAAGG-3' and CALL002, 5'-GGTACGTGCTGTTGAACTGTTCC-3'), and they were separated on a 1.5 % low-melting-temperature agarose gel (Lonza Group AG, Basel, Switzerland). Approximately 700 base-pair bands were extracted using a QIAquick Gel Extraction Kit (Qiagen K.K., Tokyo, Japan). Nested PCR was performed to amplify the VHH genes by using two primers that contained flanking PstI (forward) and BstEII (reverse) restriction sites to enable cloning into the pMES4 phagemid vector with a C-terminal His-tag. Electroporation-competent Escherichia coli TG1 cells (Agilent Technologies Japan, Ltd., Tokyo, Japan) were transformed with the ligated plasmids under chilled conditions (Bio-Rad Laboratories, Inc., Hercules, CA). The library densities were monitored and maintained at >10^7 colony-forming units per microliter with limiting dilution. Colonies from 8 mL of cultured cells were harvested, pooled, and reserved in frozen glycerol stock as a mother library. Thus, the 12 phagemid libraries were designated as the mother libraries. Step 3. Affinity Selection One round of biopanning was performed using each target protein-coated magnet beads in 50-mM phosphate buffer (pH 7.4) containing 0.1 % Triton X-100 (Nacalai Tesque), 0.3 % (w/v) bovine serum albumin (BSA, Nacalai Tesque), and 500 mM of NaCl. Every IL-6 mutant was used at 1.2 mL bead slurry, which was saturated with 240 μg of protein, except for P93A (90 μg), E108A (190 μg), and L126A (180 μg). To distinguish nonspecific signals, a negative control sample that did not contain any IL-6 protein was also used. The wild-type IL-6 protein libraries were obtained in triplicate to confirm the reproducibility. After three washes with the same buffer, the remaining phages bound to the beads were eluted with a trypsin-ethylenediaminetetraacetic acid (EDTA, Nacalai Tesque) solution at room temperature for 30 minutes. The eluate was neutralized with a PBS-diluted protein inhibitor cocktail (cOmplete, EDTA-free, protease inhibitor cocktail tablets, Roche Diagnostics GmbH, Mannheim, Germany) and used to infect electroporation-competent cells. The infected cells were cultured in LB Miller broth containing 100 μg/mL of ampicillin (Nacalai Tesque) at 37 ^∘C overnight. The genes of the phagemids selected by biopanning were collected with a QIAprep Miniprep Kit (Qiagen), amplified by PCR, and purified using AMPure XP beads (Beckman Coulter, High Wycombe, UK). Then, dual-indexed libraries were prepared and sequenced on an Illumina MiSeq (Illumina, San Diego, CA) by using a MiSeq Reagent Kit v3 with paired-end 300-bp reads (Bioengineering Lab. Co., Ltd., Kanagawa, Japan). Step 4. Sequence Analysis Approximately 100,000 paired reads for each library were generated by NGS analysis. The raw read data were trimmed to remove the adaptor sequence by using cutadapt v1.18 <cit.> and to remove low-quality reads by using Trimmomatic v0.39 <cit.>. The remaining paired reads were merged using fastq-join <cit.>, and then the VHH coding sequences were extracted using seqkit v0.10.1 <cit.>. The DNA sequences were translated to amino acid sequences with EMBOSS v6.6.0.0 <cit.>, and the VHH sequences were cropped from start to stop codon. Finally, each phagemid library was converted to a FASTA file containing tens of thousands of VHH sequences. 
§.§ Label Reliability §.§.§ Experimental Procedures VHH Substantiation The gene sequences encoding each selected VHH clone, which were connected with a 4×(GGGGS) linker for expression as a tandem dimer, were codon-optimized and synthesized (Eurofins Genomics Inc., Tokyo, Japan). The synthesized genes were subcloned into the pMES4 vector to express N-terminal PelB signal peptide-conjugated and C-terminal 6×His-tagged VHHs. BL21 (DE3) E. coli cells transformed with the plasmids were plated on LB agar with ampicillin and incubated at 37 ^∘C overnight. Grown colonies were picked and cultured at 37 ^∘C to reach an OD of 0.6 AU, and the cells were then cultured at 37 ^∘C for three hours with 1 mM of IPTG (isopropyl-β-D-thiogalactopyranoside, Nacalai Tesque). Lastly, the cultured cells were pelleted by centrifugation and stored in a freezer until use. VHHs were eluted from the periplasm by soaking in TES buffer (200 mM Tris, 0.125 mM EDTA, 125 mM sucrose, and pH 8.0) at 4 ^∘C for one hour. They were further incubated with a 2× volume of 0.25× diluted TES buffer with a trace amount of benzonase nuclease (Merck) at 4 ^∘C for 45 minutes. The supernatants were centrifuged (20,000 ×g, 4 ^∘C for 10 minutes), sterilized by adding gentamicin (Thermo), and passed through a 0.22 μm filter (Sartorius AG, Gottingen, Germany). The filtered supernatants were then applied to a HisTrap HP nickel column (Cytiva) on an ÄKTA pure HPLC system, and the bound His-tagged VHHs were eluted with 300 mM of imidazole. The eluted fraction was collected and concentrated with a VIVAspin 3000-molecular-weight cutoff filter column (Sartorius) and applied to a Superdex75 10/300 GL gel-filtration column (Cytiva) on an ÄKTA pure HPLC system. Finally, the protein purity was measured via Coomassie brilliant blue (CBB) staining (Rapid Stain CBB Kit, Nacalai Tesque). Immunofluorescence Staining Analysis HEK293T cells were transiently transfected with a plasmid-encoding C-terminally HA-tagged wild-type IL-6 protein by using Lipofectamine 3000 (Thermo) according to the manufacturer's instructions. The next day, the cells were seeded on collagen type-I-coated culture plates (IWAKI, AGC TECHNO GLASS CO., LTD., Shizuoka, Japan) and cultured for 24 hours before being fixed with 2 % paraformaldehyde (PFA) at 4 ^∘C overnight. After three washes with PBST (PBS with 0.005 % Tween 20), the cells were blocked with PBST containing 2 % goat serum (blocking solution) at room temperature for one hour. Each well was soaked with 100 μL of the blocking solution containing 100 ng of purified VHH at 4 ^∘C overnight. After washing with PBST, 1:3000-diluted anti-His-tag rabbit antibodies and 1:100-diluted anti-HA 7C9 mouse monoclonal antibodies (ChromoTek GmbH, Planegg-Martinsried, Germany) in blocking buffer were added and reacted at room temperature for one hour. Finally, after washing, Alexa-Fluor-conjugated anti-rabbit IgG (594 nm emission) antibodies at 1:3000 dilution and Alexa-Fluor-conjugated anti-mouse IgG (488 nm emission) antibodies at 1:3000 dilution in blocking buffer were added to the wells, and the fixed cells were labeled at room temperature for one hour before washing three times with PBST. The cell nuclei were visualized with 4',6-diamidino-2-phenylindole (DAPI). 
The stained cells were imaged with an 8-ms exposure time (594 nm emission), a 40-ms exposure time (488 nm emission), or an automatically adjusted exposure time (DAPI) by using an IX71S1F-3 microscope (Olympus Corporation, Tokyo, Japan) with the cellSens Standard 1.11 application (Olympus). Each full observed field corresponding to a 165 μm × 220 μm square was photographed. Kinetic Assays via Biolayer Interferometry (BLI) Real-time binding experiments were performed using an Octet Red96 instrument (fortèBIO, Pall Life Science, Portsmouth, NH). Each purified VHH clone was biotinylated with EZ-Link Sulfo-NHS-LC-Biotin (Thermo) according to the manufacturer's protocol; uncoupled biotin was excluded with a size exclusion spin column (PD SpinTrap G-25, Cytiva) in PBS (pH 7.4). Assays were performed at 30 ^∘C with shaking at 1000 rpm. Biotin-conjugated clones at 10 μg/mL were captured on a streptavidin-coated sensor chip (SA, fortèBIO) to reach the signals at 1 nm. One unrelated VHH P17-coated sensor chip was monitored as a baseline. The loaded concentration of the wild-type IL-6 was 200 μg, corresponding to 0.625 μM. Assays were performed with PBS containing 0.005 % Tween 20 (Nacalai Tesque). After baseline equilibration for 180 s in the buffer, association and dissociation were each performed for 180 s. The data were then subtracted from the baseline data and analyzed with fortèBIO data analysis software 9.0. §.§.§ Additional Results As listed in Table <ref>, we selected 12 binder-labeled, six non-binder-labeled, and five noise-labeled clones for substantiation. Of the 12 binder-labeled clones, two could not be isolated by the E. coli protein synthesis system because of the limitations of the phage display method. Even if a protein of interest functioned as a fusion protein with the g3p protein on a phage, it was not always possible to express the protein alone in a soluble form with function <cit.>. However, if a sufficient signal was observed in the mother library, then the clones must have been truly present as heavy-chain antibodies, at least in the alpaca body. The remaining 10 clones all showed binding to the wild-type IL-6 protein, as shown in Figure <ref>(a). The immunofluorescence staining analysis showed strong to weak signals, which probably reflected avidity differences between the clones. Note that the calculated p-values did not correlate with the staining intensity by a simple inverse relationship. The biolayer interferometry (BLI) analysis revealed that the clones positively associated with the wild-type IL-6 protein with different association curves (K_on), dissociation curves (K_off), and KDs (K_off/K_on), as shown in Figure <ref>(a), although the sensitivity was relatively lower than that of the immunostaining analysis. All six non-binder-labeled clones showed negative results in both the immunostaining and BLI analyses, as shown in Figures <ref>(b) and <ref>(b), respectively. Of the five noise-labeled clones, one could not be isolated. The immunostaining analysis showed that the remaining four clones likely had nonspecific binding, as shown in Figure <ref>(c), but not to the wild-type IL-6 protein, as confirmed by the BLI analysis results shown in Figure <ref>(c). Accordingly, the sensitivity and specificity of our labeling method can be considered sufficiently high. §.§ Benchmarks §.§.§ Data Splitting For a test set, we randomly selected 15 mutants: P42A, T48A, E51A, I57A, I60A, K69A, C78A, S81A, E87A, L120A, L126A, L129A, Q144A, D162A, and T165A. 
The remaining 15 mutants and the wild type were used for model training. Table <ref> lists the numbers of samples in the training and test sets. First, we trained the models by using only the wild-type IL-6 protein and evaluated the model performance on the test set. Then, we randomly selected one mutant from the mutants not contained in the test set and added it to the training set to evaluate each model's predictive performance on the test set. By repeating this procedure, we tracked each model's predictive performance for unknown mutants contained only in the test set. Because the order of adding mutants to the training set affected the model performance, we ran the same experiment five times in shuffled order, and we report the averaged results in section <ref>. Table <ref> summarizes the order in which mutants were added and the number of samples in each set. §.§.§ Model Implementations The implementations of all the benchmark models are available at <https://github.com/cognano/AVIDa-hIL6>. * AbAgIntPre <cit.>. We used the implementation[AbAgIntPre: <https://github.com/emersON106/AbAgIntPre>] that is provided by AbAgIntPre's developers and released under Apache License 2.0 for the composition of k-spaced amino acid pairs (CKSAAP) <cit.> encoding. The calculation was performed using k = 0, 1, 2, 3, thus yielding a 1600-dimensional vector for each amino acid sequence. We also used the original PyTorch <cit.> implementation released under Apache License 2.0 for the AbAgIntPre model. We used the model parameters reported in the original paper <cit.>. * PIPR <cit.>. We reimplemented PIPR by using PyTorch with reference to the original implementation[PIPR: <https://github.com/muhaochen/seq_ppi>] released under Apache License 2.0. We changed the number of RCNN units from five to three, while the other parameters were the same as in the original paper <cit.>. Each RCNN unit had a one-dimensional max pooling with a kernel size of three, which reduced the sequence length to one-third. Our dataset's maximum sequence length is 218, and the application of five RCNN units would have resulted in a sequence length shorter than one; thus, we reduced the number of units. * Multi-Layer Perceptron (MLP). We implemented one-hot encoding and an MLP with one hidden layer of 512 neurons by using PyTorch. The one-hot vectors of the VHHs and IL-6 proteins were flattened and concatenated for input to the MLP. We used zero padding to match the dimensions, thus yielding 8000-dimensional vectors for each VHH and IL-6 protein pair.
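As an illustration of the MLP baseline described above, a possible PyTorch sketch of the one-hot encoding and network is shown below. The 20-letter amino acid alphabet, the padded length, and all variable names are our assumptions; the exact padding scheme behind the reported 8000-dimensional input is not specified here, so the sketch simply pads both sequences to a common length.

```python
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"                  # assumed 20-letter alphabet
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
MAX_LEN = 218                                         # length of the IL-6 sequences

def one_hot_encode(seq: str, max_len: int = MAX_LEN) -> torch.Tensor:
    """Flattened one-hot encoding with zero padding up to max_len."""
    x = torch.zeros(max_len, len(AMINO_ACIDS))
    for i, aa in enumerate(seq[:max_len]):
        if aa in AA_INDEX:
            x[i, AA_INDEX[aa]] = 1.0
    return x.flatten()

class MLPBaseline(nn.Module):
    """One hidden layer of 512 neurons; outputs a binding logit."""
    def __init__(self, in_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, vhh_onehot: torch.Tensor, antigen_onehot: torch.Tensor) -> torch.Tensor:
        x = torch.cat([vhh_onehot, antigen_onehot], dim=-1)
        return self.net(x)   # train with BCEWithLogitsLoss, Adam (lr 1e-4), batch size 256

# Example usage with hypothetical sequence fragments.
model = MLPBaseline(in_dim=2 * MAX_LEN * len(AMINO_ACIDS))
vhh = one_hot_encode("QVQLVESGGGLVQAGGSLRLSCAAS").unsqueeze(0)
antigen = one_hot_encode("MNSFSTSAFGPVAFSLGLLLVLPAAFP").unsqueeze(0)
logit = model(vhh, antigen)
```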
http://arxiv.org/abs/2306.01433v1
20230602104715
Zero-Shot Blind Audio Bandwidth Extension
[ "Eloi Moliner", "Filip Elvander", "Vesa Välimäki" ]
eess.AS
[ "eess.AS", "cs.LG", "cs.SD" ]
Zero-Shot Blind Audio Bandwidth Extension Eloi Moliner, Filip Elvander, Member, IEEE, and Vesa Välimäki, Fellow, IEEE Manuscript received June 1, 2023; revised XXX YY, 2023. This research is part of the activities of the Nordic Sound and Music Computing Network—NordicSMC, NordForsk project no. 86892. (Corresponding author: Eloi Moliner) E. Moliner, F. Elvander, and V. Välimäki are with the Department of Information and Communications Engineering, Aalto University, Espoo, Finland (e-mail: [email protected]). July 31, 2023 ================================================================================================ Audio bandwidth extension involves the realistic reconstruction of high-frequency spectra from bandlimited observations. In cases where the lowpass degradation is unknown, such as in restoring historical audio recordings, this becomes a blind problem. This paper introduces a novel method called BABE (Blind Audio Bandwidth Extension) that addresses the blind problem in a zero-shot setting, leveraging the generative priors of a pre-trained unconditional diffusion model. During the inference process, BABE utilizes a generalized version of diffusion posterior sampling, where the degradation operator is unknown but parametrized and inferred iteratively. The performance of the proposed method is evaluated using objective and subjective metrics, and the results show that BABE surpasses state-of-the-art blind bandwidth extension baselines and achieves competitive performance compared to non-blind filter-informed methods when tested with synthetic data. Moreover, BABE exhibits robust generalization capabilities when enhancing real historical recordings, effectively reconstructing the missing high-frequency content while maintaining coherence with the original recording. Subjective preference tests confirm that BABE significantly improves the audio quality of historical music recordings. Examples of historical recordings restored with the proposed method are available on the companion webpage: http://research.spa.aalto.fi/publications/papers/ieee-taslp-babe/ Audio recording, convolutional neural networks, machine learning, music, signal restoration. § INTRODUCTION AUDIO bandwidth extension refers to the reconstruction of the missing high-frequency information of a bandlimited sound signal <cit.>. The task is considered an ill-posed inverse problem, where the objective is to recover the original wideband signal from lowpass filtered observations <cit.>. A common application of this technology is audio upsampling or super-resolution, where the goal is to regenerate all frequency components that lie above the original Nyquist limit and increase the sampling rate of the signal <cit.>. Yet, this paper explores a less-researched but urgently needed application of bandwidth extension, namely the restoration of historical music recordings that suffer from limited bandwidth due to technological constraints.
The latter case represents a significant challenge, as the bandwidth extension system should be capable of adapting to real-world cases in which the exact characteristics of the lowpass degradation are unknown. To meet this challenge, this paper presents a novel approach, which we refer to as blind bandwidth extension. Recent works approach this and similar problems with generative models engineered for the specific task <cit.>. In these methods, the degradations are directly incorporated as data augmentation during training, and the model is expected to implicitly retrieve the degradation model from the input observations and generate a coherent signal in accordance with that. The success of these approaches relies on the design of the training data pipeline, which requires applying a well-engineered set of data augmentations. In the case of audio bandwidth extension, this may represent utilizing different kinds of lowpass filters and randomizing their parameters <cit.> as well as corrupting the input data with noise <cit.>. The result must be a robust model that is able to generalize to real-world scenarios. Despite the engineering effort that this approach represents, the applicability of a trained model is still limited to the cases considered during training and the models underperform when they encounter an out-of-distribution degradation, regardless of its underlying difficulty. In addition, we argue that relying upon problem-specialized models is impractical from a computational viewpoint, as training large-scale generative models requires a vast amount of computing, which does not pay off for all tasks. This work explores an alternative approach, where blind bandwidth extension is achieved in a zero-shot setting. The basis of this work is our audio restoration framework <cit.>, which utilizes the generative priors of an unconditional diffusion model, trained without knowledge of the restoration tasks to which it will be applied during inference. However, such a framework is not directly applicable for blind inverse problems, as knowledge of the true degradation operator, in this case, the lowpass filter response, is required. Therefore, a new approach is needed which generalizes to lowpass filters having a different cutoff frequency and magnitude response shape. This paper proposes a strategy where the parameters of a lowpass filter are jointly optimized during the iterative audio generation process in a coarse-to-fine manner. The optimization problem is solved using a diffusion model. We show how the proposed blind audio bandwidth extension (BABE) method can be applied to restore historical music recordings in a robust way. In addition, the proposed method allows for a larger degree of interpretability than previous techniques, as the degradation operator is explicitly estimated and the best guess for a lowpass cutoff frequency is obtained as output. The BABE method is compared with previous bandwidth extension methods in terms of objective and subjective quality measures. The listening test results verify the advantages of BABE, especially in enhancing the sound quality of real historical music recordings. The remainder of this paper is organized as follows. Sec. <ref> gives an overview of related research on bandwidth extension and blind inverse problems. Sec. <ref> recapitulates the basics of diffusion models and how to approximate posterior sampling with them. Sec. 
<ref> describes BABE, the new algorithm for zero-short blind bandwidth extension, which uses a parametric lowpass filter model. Sec. <ref> presents details of the deep neural network architecture, the datasets used, and the training. Sec. <ref> reports on our experiments to expand the bandwidth of both synthetic and real audio as well as evaluates the results using objective and subjective methods, including listening tests. Sec. <ref> concludes the paper. § RELATED WORK This section presents a brief overview of audio bandwidth extension methods. In addition, we compare our proposed method with recent and concurrent works that use diffusion models for solving blind inverse problems in different modalities, such as speech or image processing, in a zero-shot setting. §.§ Audio Bandwidth Extension and Super-Resolution Early works in audio bandwidth extension focused on speech signals and employed diverse signal processing methods, including source-filter models <cit.>, and codebook mapping <cit.>. The first attempts at music audio bandwidth extension used nonlinear devices <cit.> and spectral band replication <cit.>. Other approaches relied on data-driven techniques, such as Gaussian mixture models <cit.>, Hidden Markov Models <cit.>, and shallow <cit.> and deep neural networks <cit.>. Nevertheless, these methods often yielded suboptimal quality due to their limited modeling capabilities. Many recent works approach this task using deep generative models, which are suitable to address the ill-posedness of the problem. Until very recently, Generative Adversarial Networks (GANs) were the most popular choice and many works applied them for audio and speech bandwidth extension <cit.>. While GANs have a strong design versatility, they suffer from some limitations, such as training instabilities, suboptimal mode coverage, and a lack of explainability. For these reasons, there is a growing interest in using alternative generative approaches, such as flow-based models <cit.> or, as in the present study, diffusion models <cit.>. Other works also used diffusion-based audio super-resolution models within the context of text-to-audio generation, where their purpose was to separate the task of high-resolution audio generation into separate hierarchical steps <cit.>. All the generative approaches mentioned above are designed in a conditional problem-specified setting, requiring specialized model training, with all the inconveniences stated above. Only a few works so far have opted for a zero-shot approach <cit.> in which bandwidth extension is achieved only during the inference stage. However, while conditional approaches can adapt to unknown degradations (or lowpass filters), as long as sufficient data augmentation is used as regularization during training, the existing zero-shot methods require knowledge of the exact degradation, which is often unavailable. This paper proposes a strategy to infer the lowpass filter during sampling and allow for the first zero-shot Blind Audio Bandwidth Extension method, BABE. §.§ Diffusion Models for Blind Inverse Problems Several recent works have explored the use of diffusion models to solve blind inverse problems, where the degradation operator is unknown. This research area can be categorized into two groups: conditional methods, which require specialized training for specific problems, and zero-shot methods, which leverage priors from unconditional diffusion models. 
Within the category of conditional models, several works target speech enhancement <cit.>, image deblurring <cit.>, and JPEG reconstruction <cit.>, among others. It may be noted that these methods all require pairs of clean/degraded samples and a well-thought training data pipeline. Our primary interest lies in zero-shot methods, which involve a two-fold inference task: estimating the degradation operator and reconstructing the degraded signal. Chung et al. (BlindDPS) <cit.> propose a zero-shot method that utilizes a pre-trained diffusion model of the degradation parameters as a prior. During sampling, they simultaneously infer both the degradation and reconstructed image by exploiting the diffusion prior. This approach allows the inference of high-dimensional degradation parameters, making it applicable to blind image deblurring and imaging through turbulence. However, we consider this method impractical as it requires training a diffusion model for the degradation of interest. Murata et al. <cit.> formulate the problem as a partially collapsed Gibbs sampler (GibbsDDRM), enabling approximate posterior sampling of both the data and operator without necessitating a structured prior for the latter. The GibbsDDRM sampling algorithm iteratively updates both the data and operator throughout the process. While the operator parameters are updated using a gradient-based approximation <cit.>, the data is updated using the projection-based method <cit.>, which requires the computationally expensive singular value decomposition. This approach is often impractical for more complex forward models <cit.>. Murata et al. evaluate GibbsDDRM in tasks such as image deblurring and vocal dereverberation <cit.>. In comparison to the existing methods, our proposed approach is grounded in domain knowledge specific to the task of bandwidth extension. It employs a low-dimensional parametrization of the degradation operator, specifically a piecewise linear lowpass filter. This parametrization enables an interpretable and robust optimization process that benefits from the implicit inductive biases of diffusion models. § BACKGROUND This section presents an overview of diffusion models through the score-based formalism, as well as their application for solving inverse problems using posterior sampling. §.§ Diffusion Models Diffusion models generate data by reversing the diffusion process in which data x_0 ∼ p_data is progressively diffused into Gaussian noise x_τ_max∼𝒩(0,σ_max^2 𝐈) over time[The “diffusion time” τ must not be confused with the “audio time” t. ] τ. The diffusion process can be formally described by means of a stochastic differential equation <cit.> d𝐱=f(𝐱_τ, τ) dτ + g(τ)d𝐰, where the diffusion time τ flows from 0 (when the data is clean) to T (Gaussian noise), 𝐰 is the (multivariate) standard Wiener process, 𝐱_τ is the noise-perturbed data sample at time τ, and the drift f and diffusion g coefficients define the schedule of the diffusion process. The forward diffusion process and its reverse, where diffusion time flows backward from T to 0 and the data is gradually denoised, can be expressed in terms of a deterministic probability flow Ordinary Differential Equation (ODE). In this work, we adopt the parametrization from Karras et al. 
<cit.>, who use the ODE d𝐱= - τ∇_𝐱_τlog p_τ(𝐱_τ) dτ, where the diffusion time is equivalent to the Gaussian noise level τ=σ[The variable names τ and σ are used interchangeably in this paper where there is no risk of confusion.], and ∇_𝐱_τlog p_τ(𝐱_τ) is the (Stein) score <cit.>, which can be geometrically interpreted as a vector field pointing towards higher data density <cit.>. The score ∇_𝐱_τlog p_τ(𝐱_τ) is intractable, but, under Gaussian noise, it can be approximated using the proxy task of denoising <cit.>. Given a noise-level-dependent denoiser D_θ(𝐱_τ, τ) parametrized as a deep neural network with weights θ, the score is approximated as ∇_𝐱_τlog p_τ(𝐱_τ) ≈ (D_θ(𝐱_τ,τ)-𝐱_τ)/σ^2. The denoiser is usually trained with an L2 loss: 𝔼_𝐱_0 ∼ p_data, ϵ∼𝒩(0,𝐈) [ λ(τ) ‖ D_θ(𝐱_0+τϵ,τ) -𝐱_0 ‖_2^2 ], where λ(τ) is a weighting function. The choice of the loss weighting plays an important role in the model performance <cit.> and, depending on it, the objective in (<ref>) can also be understood as noise prediction <cit.> or score matching <cit.>. In this work, we follow the choices from <cit.>, which are well-motivated considering the standard practices of neural network training. Also note that a denoiser trained with a Euclidean objective yields the Minimum-Mean-Squared-Error estimate of 𝐱_0 given 𝐱_τ, or the expectation of the posterior, 𝐱̂_0=D_θ(𝐱_τ, τ) = 𝔼[𝐱_0 | 𝐱_τ]. This means, intuitively, that the denoised estimate 𝐱̂_0 at a given noise level σ is the best possible guess of the clean data given its noisy version, but still lacks some of the information that has been corrupted under the noise in 𝐱_τ. §.§ Posterior Sampling With Diffusion Models Recent works have proposed to leverage the rich data-driven priors of diffusion models for solving inverse problems by approximating posterior sampling <cit.>. Inverse problems are often formulated with the goal of retrieving a clean signal 𝐱_0 from a set of measurements or observations, produced as 𝐲 =𝒜(𝐱_0) + ϵ, where 𝒜 is a degradation operator and ϵ accounts for measurement noise. In the case of bandwidth extension, the operator 𝒜(·) is a lowpass filter, and the observations 𝐲 are a narrowband audio signal. Note that this inverse problem is ill-posed, as the lowpass filter cannot be trivially inverted due to the limits of numerical precision or the appearance of noise in historical recordings. To solve the inverse problem, one may want to sample from the posterior distribution given the observations p(𝐱 | 𝐲). In the context of a diffusion model, this would require estimating the posterior score ∇_𝐱_τlog p_τ(𝐱_τ|𝐲). Applying the Bayes rule, the posterior score factorizes as the sum of two terms: ∇_𝐱_τlog p_τ(𝐱_τ|𝐲)= ∇_𝐱_τlog p_τ(𝐱_τ)+ ∇_𝐱_τlog p_τ(𝐲|𝐱_τ), where we refer to ∇_𝐱_τlog p_τ(𝐲|𝐱_τ) as the likelihood score. Chung et al. <cit.> propose to approximate the likelihood with p_τ(𝐲|𝐱_τ) ≃ p(𝐲|𝐱̂_0), where 𝐱̂_0 is the denoised estimate at an intermediate noise level. The likelihood score can then be approximated as ∇_𝐱_τlog p_τ(𝐲|𝐱_τ) ≈ -ξ(τ) ∇_𝐱_τ C_audio(𝐲, ŷ) , where C_audio is a cost function that provides a distance between the observations 𝐲 and our estimation of them ŷ=𝒜(𝐱̂_0), which requires knowledge of the degradation operator 𝒜 and the denoised estimate 𝐱̂_0=D_θ(𝐱_τ, τ). We denote this strategy as reconstruction guidance. If we consider Gaussian measurement noise ϵ∼𝒩(0, σ_y I), a Euclidean norm is a sound choice for the cost function <cit.> C_audio(𝐲, 𝒜(𝐱̂_0))= ‖𝐲 - 𝒜(𝐱̂_0) ‖_2^2, and will be used throughout this work. 
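To make the reconstruction-guidance step concrete, the following is a minimal sketch (in PyTorch, written here for illustration rather than taken from the authors' implementation) of how the approximate posterior score can be assembled from a denoiser and a differentiable degradation operator; the names guided_score, denoiser, and degradation are placeholders introduced for this sketch, and the step size xi is left generic.

# Sketch only: assumes `denoiser(x, sigma)` is a trained noise-level-conditional
# denoiser D_theta and `degradation(x)` is a differentiable lowpass operator A.
import torch

def guided_score(denoiser, degradation, x, sigma, y, xi):
    # x: noisy audio at noise level sigma; y: bandlimited observations
    x = x.detach().requires_grad_(True)
    x_hat0 = denoiser(x, sigma)                      # denoised estimate E[x_0 | x_sigma]
    prior_score = (x_hat0 - x) / sigma**2            # score approximated from the denoiser
    cost = torch.sum((y - degradation(x_hat0))**2)   # C_audio, squared Euclidean norm
    grad = torch.autograd.grad(cost, x)[0]           # backpropagates through the denoiser
    return prior_score - xi * grad                   # prior score plus approximate likelihood score

When stepping from noise level σ_i down to σ_i+1, a plain Euler step of the probability-flow ODE would then move 𝐱 by (σ_i − σ_i+1) σ_i · guided_score(·); the sampler actually used later in the paper is a second-order stochastic variant of this update.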
Note that the gradient operator ∇_𝐱_τ requires differentiating through the degradation 𝒜, and through the denoiser D_θ, which is parametrized with a deep neural network. The term ξ(τ) refers to a scaling function or step size, which regulates the impact of the approximated likelihood on the sampling trajectories. We parameterize the step size in the following way <cit.>: ξ(τ)=ξ^'√(N)/σ‖∇_𝐱_τ C_audio(𝐲, 𝒜(𝐱̂_0)) ‖_2, which weights the gradients according to their Euclidean norm, the noise level σ, the length (in samples) of the audio signal N, and a scalar hyperparameter ξ^'. We empirically find that this parametrization yields robust and stable results, while allowing a more intuitive search for ξ^'. § METHOD This section details the proposed algorithm called BABE, targeted for zero-shot blind audio bandwidth extension. The presented approach consists of a generalization of the diffusion posterior sampling <cit.>, where the degradation operator does not need to be known but is parametrized and iteratively optimized during the sampling process. Algorithm 1 and Fig. <ref> provide a concise summary of the proposed method, while the subsequent sections explain each component. §.§ Warm Initialization One of the motivations for this work is the observation that diffusion models generate content in a coarse-to-fine manner. Music signals tend, on average, to have a frequency-dependent energy decay. As a consequence, given that the forward operator in a diffusion model is additive white Gaussian noise, high-frequency components tend to be generated at the later stages of the diffusion process, when the low-frequency range is already built. This property has been previously observed in the image domain <cit.>. In this work, we treat this observation as a feature, arguing that diffusion models have an implicit inductive bias for bandwidth extension by design. Motivated by this observation, instead of initializing the reverse diffusion process with pure Gaussian noise, we start from a warm initialization constructed by adding noise to the lowpass filtered observations, 𝐱_T∼𝒩(𝐲, σ_start^2 𝐈). The starting noise level σ_start should be chosen carefully so that the added noise does not completely destroy the low-frequency content that is already present in the observations, but still sufficiently floods out the high-frequency part of the spectra that needs to be regenerated. It is safe to assume that a sufficiently large value of σ_start could allow for a suitable solution without sacrificing generation quality (see <cit.> for a formalized reasoning). This strategy has been similarly used for image restoration <cit.> and speech enhancement <cit.> tasks. In this application, a warm initialization not only accelerates sampling but also plays a crucial role in stabilizing the convergence of the algorithm, as elaborated on in Sec. <ref>. §.§ Filter Parametrization Old gramophone recordings have a limited bandwidth primarily because the disc-cutting lathes used to transfer sound onto physical discs were not capable of capturing a wide range of frequencies <cit.>. The specific features of the equipment used to create a recording, such as the manufacturer, publication date, recording medium, and any adjustments made by recording engineers, can significantly affect its lowpass behavior. Due to the lack of uniform international standards, it is hard or impossible to know the frequency response of a particular recording.
In a previous work <cit.>, it was observed that, when compared to modern recordings of the same piece, some gramophone recordings showed a distinct logarithmic decay above a certain cut-off frequency, which would normally be about 3 kHz, depending on the severity of the degradations. This observation motivates us to design an elementary filter parametrization that would account for a wide range of lowpass magnitude responses with only a small set of optimizable parameters. We define an optimizable lowpass filter as a piecewise-linear function in the logarithmic frequency domain, as shown in Fig. <ref>, which can be expressed as H(f) [dB] = 0 for f < f_c1; A_1 log_2 (f/f_c1) for f_c1≤ f < f_c2; A_2 log_2 (f/f_c2) + A_1 log_2 (f_c2/f_c1) for f_c2≤ f < f_c3; ⋮ ; A_S log_2 (f/f_cS) + ∑_i=1^S-1 A_i log_2 (f_c,i+1/f_c,i) for f ≥ f_cS, where f_ci (Hz) are cutoff frequencies and A_i (dB) are the decay slopes; each segment accumulates the attenuation reached at the previous breakpoint, so the response is continuous. Note that (<ref>) is piecewise differentiable with respect to the cutoff and slope parameters. We define the set of optimizable parameters as ϕ={ f_ci , A_i | i=1,…, S }, where S is the number of breakpoints. §.§ Joint Posterior Sampling and Filter Inference Reconstruction-guidance-based posterior sampling <cit.> can be understood as a stochastic optimization process that uses the generative priors from a diffusion model to optimize an audio signal in the data space using a cost function C_audio(𝐲,ŷ_ϕ) that penalizes the reconstruction error <cit.>. In an analogous manner, one can also leverage the priors from a pre-trained diffusion model to obtain gradients that would allow us to optimize a set of filter parameters ϕ. Thus, after having initialized a filter ϕ^0, we apply a set of optimization steps: ϕ^j+1=ϕ^j-μ∇_ϕ^j C_filter (𝐲, 𝐲̂_ϕ^j ), where C_filter(·, ·) is a cost function, 𝐲̂_ϕ^j is the estimate of the observations at some step j, and μ is the step size. Unlike in traditional gradient descent optimization, we found it beneficial to use parameter-specific values for the step size μ to improve the optimization stability. In particular, we used a larger step size μ_f_c=1000 for optimizing the cutoff frequency parameters and a lower one μ_A=10 for the slopes. The signal 𝐲̂_ϕ^j is computed by filtering the denoised estimate 𝐱̂_0 with the filter ϕ^j in the frequency domain, as 𝐲̂_ϕ^j =ℱ^-1( H_ϕ^j⊙ℱ(𝐱̂_0) ), where ℱ and ℱ^-1 refer to the Fourier transform and its inverse operation, respectively, H_ϕ^j is a zero-phase frequency-domain filter computed through (<ref>) using the parameters in ϕ^j, and ⊙ is the Hadamard product, or element-wise multiplication. This operation is, in practice, realized in a frame-by-frame manner using a short-time Fourier transform, using a Hamming window length of 4096 samples and a hop size of 2048 samples. We furthermore constrain the parameters in ϕ^j to form a strictly decreasing function, as we observe that this improves the robustness of the algorithm. Thus, given f_c min < f_c 1 < f_c 2 <⋯ < f_c S < f_c max, we enforce A_max>A_1>A_2>⋯>A_S> A_min. This is achieved by projecting the filter parameters to the constraint set after every iteration. Starting from k=1, the cutoff frequencies f_c k are projected as follows: f_c k = f_c min if f_c k≤ f_c min; f_c k-1 + c_f if f_c k < f_c k-1; f_c k if f_c k-1≤ f_c k < f_c max; and f_c max if f_c k≥ f_c max. Then, the slopes A_k are projected according to A_k = A_max if A_k≥ A_max; A_k-1 - c_A if A_k > A_k-1; A_k if A_k-1≥ A_k > A_min; and A_min if A_k≤ A_min. The purpose of the constants c_f and c_A is to prevent different parameters from collapsing to the same values, and we use c_f=10 Hz and c_A=1 dB.
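The following short sketch (ours, added purely for illustration; not taken from the released BABE code) spells out the piecewise-linear magnitude response and the projection step in NumPy. The function and variable names are assumptions, and the default f_max corresponds to f_s/2 for the 22.05 kHz material used later.

# Sketch: piecewise-linear lowpass magnitude in dB, and the parameter projection.
import numpy as np

def lowpass_db(f, fc, A):
    # f: frequencies in Hz; fc: breakpoint frequencies (Hz); A: slopes (dB/octave)
    H = np.zeros_like(f, dtype=float)
    for i in range(len(fc)):
        f_hi = fc[i + 1] if i + 1 < len(fc) else np.inf
        in_seg = (f >= fc[i]) & (f < f_hi)
        H[in_seg] += A[i] * np.log2(f[in_seg] / fc[i])       # slope of the current segment
        if np.isfinite(f_hi):
            H[f >= f_hi] += A[i] * np.log2(f_hi / fc[i])     # attenuation accumulated below
    return H

def project(fc, A, f_min=20.0, f_max=11025.0, A_max=-1.0, A_min=-50.0, c_f=10.0, c_A=1.0):
    # keep the cutoffs strictly increasing and the slopes strictly decreasing
    fc = np.clip(np.asarray(fc, float), f_min, f_max)
    A = np.clip(np.asarray(A, float), A_min, A_max)
    for k in range(1, len(fc)):
        fc[k] = max(fc[k], fc[k - 1] + c_f)
        A[k] = min(A[k], A[k - 1] - c_A)
    return fc, A

For instance, lowpass_db(np.array([500.0, 2000.0, 8000.0]), fc=[1000.0, 4000.0], A=[-20.0, -40.0]) gives approximately 0, -20, and -80 dB: flat below the first cutoff, -20 dB one octave above it, and the accumulated decay of both segments at 8 kHz.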
In our experiments, we also use the boundary parameters f_c min=20 Hz, f_c max=f_s/2, A_max=-1 dB, and A_min=-50 dB. As formalized in Algorithm 1 and visualized in Fig. <ref>d, for each of the T diffusion sampling steps we perform M filter inference iterations. During each sampling step i, we seek to optimize the filter ϕ_i to a local minimum of C_filter (𝐲, 𝐲̂_ϕ^j) by applying M steps of (<ref>), considering that 𝐲̂_ϕ^j is obtained using 𝐱̂_0, an estimate of the unavailable ground truth 𝐱_0. If a convergence criterion is satisfied, such as a relative change (<5· 10^-3) in the parameter values, the filter inference is stopped, but resumed at the next iteration. Then, the audio signal 𝐱_i is updated through reconstruction guidance (<ref>) using the updated filter ϕ_i-1. Note that, while computing the gradient ∇_x_i C_audio(y, 𝐲̂_ϕ^j) is computationally expensive, as it requires differentiating through the deep neural network denoiser D_θ (see the blue dotted arrow in Fig. <ref>), computing ∇_ϕ^j C_filter (𝐲, 𝐲̂_ϕ^j) has a negligible computational cost (see the green dotted arrow in Fig. <ref>). We define the cost function C_filter as a weighted L2 norm between spectral magnitudes, C_filter (𝐲, ŷ) = ‖𝐖 (|ℱ(𝐲)| - |ℱ(ŷ)|) ‖_2^2, where the matrix 𝐖 applies a frequency-dependent weighting function, represented in Fig. <ref>. Using a phase-agnostic cost function is a natural choice for this particular task of estimating a zero-phase filter that is parametrized in the frequency domain. In our initial experiments, we observed how using a phase-aware cost function would have a detrimental effect on optimization stability without providing any clear improvement for the filter inference. The purpose of the frequency weighting is to counteract the frequency-decaying spectral energy of most music signals, as well as the attenuation factor of the lowpass filter. Without it, the error in high frequencies would only affect the cost function in a minimal way, and only a small amount of gradient would be propagated. We empirically found a square-root frequency-weighting function (see Fig. <ref>) to work well; it is defined as W=√(𝐟/(f_s/2))·I, where 𝐟 is a vector containing the frequency values in Hz, f_s is the sampling frequency, and I is the identity matrix. As elaborated in Sec. <ref>, the use of the frequency weighting is not critical for the performance of BABE, but it significantly helps in improving the filter estimation accuracy and in accelerating inference, as it allows for stable convergence with fewer optimization steps. Fig. <ref> shows a practical example that sheds light on how the optimization converges. To facilitate the visualization, the represented example considers a single-breakpoint filter with S=1 having only two optimizable parameters, the cutoff frequency f_c and the slope A. We plot the values of the cost function in the parameter space on the right-hand side of Fig. <ref>. It can be observed that in the earlier sampling steps, the cost function does not contain an informative gradient in the high-frequency region. Nevertheless, thanks to the warm initialization, the cost function has a steep slope at low frequencies and, as the reverse diffusion process proceeds, a local minimum starts to appear in the region around the cutoff frequency, getting progressively more pronounced. This observation motivates us to initialize the lowpass filter ϕ_T with a low cutoff frequency (around 300 Hz) and a steep slope.
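To summarize the filter-inference inner loop in code form, here is a schematic sketch (our own illustration under assumed names, not the released implementation). It presumes a differentiable PyTorch implementation response(f, fc, A) of the piecewise-linear magnitude above, and performs M gradient steps on the filter parameters at a fixed denoised estimate 𝐱̂_0.

# Sketch of the inner loop of Algorithm 1: M gradient steps on (fc, A).
import torch

def infer_filter(y, x_hat0, fc, A, response, fs, M=10, mu_fc=1000.0, mu_A=10.0):
    # fc, A: 1-D float tensors of breakpoint frequencies (Hz) and slopes (dB/oct)
    n = y.shape[-1]
    f = torch.fft.rfftfreq(n, d=1.0 / fs) + 1e-6      # avoid log2(0) at DC
    W = torch.sqrt(f / (0.5 * fs))                     # square-root frequency weighting
    Y = torch.abs(torch.fft.rfft(y))
    X = torch.abs(torch.fft.rfft(x_hat0))
    for _ in range(M):
        fc = fc.detach().requires_grad_(True)
        A = A.detach().requires_grad_(True)
        Yhat = X * response(f, fc, A)                  # zero-phase filtering of x_hat0
        cost = torch.sum((W * (Y - Yhat)) ** 2)        # C_filter, phase-agnostic
        g_fc, g_A = torch.autograd.grad(cost, (fc, A))
        fc, A = fc - mu_fc * g_fc, A - mu_A * g_A      # parameter-specific step sizes
        # ...followed by the projection onto the monotonicity constraints (previous sketch)
    return fc.detach(), A.detach()

In the full algorithm this routine runs at every diffusion sampling step, reusing the filter from the previous step as its starting point, so the estimate is refined in a coarse-to-fine fashion as 𝐱̂_0 gains high-frequency detail.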
§.§ Application to Historical Recordings One of the goals of this work is to develop a model that is applicable to the restoration of historical recordings. In order to minimize the distribution mismatch between the training data and the original historical recordings we are interested in restoring, we utilize a predictive denoiser to remove all additive structured disturbances from the original recording. In particular, we use a denoising model[The reader must not confuse the mentioned denoising model with the denoiser of the diffusion model.] based on a deep neural network which is specialized in separating the gramophone recording noises <cit.>. We then use the denoised recording as the observations for the warm initialization and for the guidance of the diffusion-based generation. A similar strategy was used for the purpose of speech enhancement in <cit.>. Fig. <ref> visualizes in a simplified way the process of restoring a gramophone recording with the proposed method. If the goal is to restore a long recording that may last several minutes, the restoration needs to be treated on a frame-by-frame basis. In this case, in order to ensure coherence between frames, we use the block-autoregressive extension method as used in <cit.>. This method consists of taking the last fragment of the previously generated frame and using it as a conditioning signal at the beginning of the next one. The conditioning can be applied through approximate posterior sampling, together with the lowpass-filtered observations. Intuitively, the subsequent samples will be “outpainted” in coherence with both the previous and the lowpass-filtered observations, allowing us to process recordings of arbitrary length. Also note that, if we assume time-invariant conditions on the degradation, the filter only needs to be estimated once at the beginning of the recording and can be reused for the rest of the frames, thus saving some computation. Another important detail that requires attention is loudness normalization, as the recordings need to be normalized to be in the same range as the training data. The solution we applied is normalizing the denoised recording to match the average standard deviation of the dataset, which we report in Sec. <ref>. We, however, acknowledge the limitations of this decision, as music dynamics have a nonlinear effect, e.g., a piano played loudly sounds different from one played softly, and the normalization could distort the originally intended sound. § IMPLEMENTATION DETAILS In this section, we provide important implementation specifications of the proposed method, BABE. These include our choice for the neural network architecture, the datasets we experimented with, and training and sampling details. §.§ Constant Q-Transform-Based Architecture As a consequence of their high sampling rates, audio signals, when seen as vectors, are high-dimensional, a property that makes the training of a diffusion model difficult. Recent successful diffusion models in audio circumvent this issue by designing the diffusion process in a compressed latent space <cit.> or by subdividing the task into a sequence of independent cascaded models <cit.>. However, utilizing reconstruction guidance without any further modifications requires designing a single-stage diffusion process in the raw audio domain, because relying on a decoder or a super-resolution model could potentially harm the quality of the gradients. Thus, such latent and cascaded strategies are not directly applicable to the setting of this work.
Seeking inductive biases that could facilitate training, previous work <cit.> proposed to use an invertible Constant-Q-Transform (CQT) <cit.> to precondition the backbone architecture. The CQT leverages a sparse time-frequency representation where pitch transpositions are equivalent to translations in the frequency axis, motivating the usage of a convolutional architecture. In a follow-up work <cit.>, a more efficient and scalable architecture was proposed. The improved architecture allowed for a smaller amount of signal redundancy without sacrificing invertibility. Here, we use a version of this architecture (without the self-attention blocks) consisting of 45× 10^6 trainable parameters. §.§ Datasets The proposed BABE method only requires collecting an audio dataset from the desired target domain to train an unconditional diffusion model. Thus, no labels or any kind of paired data are needed. However, a relatively large dataset is desired, as overfitting would severely affect the out-of-distribution performance on real recordings. Since, in this work, we are interested in restoring instrumental music signals, we experiment with two datasets: MAESTRO <cit.> and COCOChorales <cit.>. §.§.§ MAESTRO The MAESTRO dataset <cit.> contains about 200 h of classical solo piano recordings played by virtuoso pianists. We convert the stereo data to mono, resample it to f_s=22.05 kHz for experimental convenience, and feed it into the training loop without applying any kind of normalization. The calculated standard deviation of the dataset, necessary to compute the parametrization from <cit.>, is approximately σ_data=0.07. §.§.§ COCOChorales The COCOChorales dataset <cit.> is a large-scale corpus of chamber music recordings synthetically generated using a structured synthesis model <cit.>. The dataset contains mixtures of strings, woodwind, and brass instruments playing in the style of Bach's chorale music. The fact that the audio data is synthetic and sampled at f_s=16 kHz imposes an upper bound on the expected restoration quality. However, the examples from COCOChorales show a favorable audio quality compared with the historical recordings we are interested in restoring. In addition, we believe that the idea of transferring knowledge from more structured DSP-based models is an interesting solution to account for the data-intensive demands of diffusion models. For this dataset, we estimated a standard deviation of σ_data=0.15. §.§ Training Details We train separate models with the training set of MAESTRO (piano) and the three training subsets of COCOChorales (strings, woodwind, and brass). The models are trained with the preconditioned objective from <cit.>. We train using audio segments of 8.35 s at f_s=22.05 kHz for MAESTRO and 11.5 s at f_s=16 kHz for COCOChorales. We also experimented with training models at higher sample rates and obtained encouraging outcomes, but these were kept out of the evaluation for practical reasons. We trained the diffusion models using the Adam optimizer with a learning rate of 2× 10^-4 and a batch size of 4. For the MAESTRO experiment, the model was trained for 850k iterations, taking roughly 4 days using a single NVIDIA A100-80GB GPU. The COCOChorales models with strings, woodwind, and brass data were trained for 190k, 390k, and 480k iterations, respectively. We refer to the public code repository[https://github.com/eloimoliner/BABE] for further specifications.
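As a rough illustration of the training configuration just described, the following sketch condenses one optimization step. It omits the exact network preconditioning (c_skip, c_out, c_in) of the adopted objective and only keeps its loss weighting, so it should be read as an approximation with assumed names rather than the actual training code; P_mean and P_std, which control the log-normal distribution of training noise levels, are assumptions carried over from the referenced parametrization.

# Sketch of one training step with an EDM-style weighted denoising loss.
import torch

def training_step(denoiser, optimizer, x0, sigma_data=0.07, P_mean=-1.2, P_std=1.2):
    # x0: batch of clean audio segments, shape (batch, samples)
    sigma = torch.exp(P_mean + P_std * torch.randn(x0.shape[0], 1, device=x0.device))
    noisy = x0 + sigma * torch.randn_like(x0)
    weight = (sigma**2 + sigma_data**2) / (sigma * sigma_data)**2   # lambda(sigma)
    loss = (weight * (denoiser(noisy, sigma) - x0)**2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

With, e.g., optimizer = torch.optim.Adam(denoiser.parameters(), lr=2e-4) and a batch size of 4, this mirrors the settings quoted above.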
§.§ Sampling Details We use the second-order stochastic sampler from <cit.>. Note that the second-order corrections and the stochastic components that this sampler adds are not listed in Algorithm 1 for the sake of simplicity. We use the same noise schedule parametrization as in <cit.>, which discretizes the diffusion process as τ_i<T=(σ_start^1/ρ + (i/(T-1))( σ_min^1/ρ -σ_start^1/ρ))^ρ, where T is the number of discretized steps, σ_start is the starting noise level for warm initialization (see Sec. <ref>), σ_min is the minimum boundary noise level, and ρ controls the warping of the diffusion process. We use σ_start=0.2, σ_min=1× 10^-4, and ρ=8 for MAESTRO, and σ_start=0.6, σ_min=1× 10^-3, and ρ=9 for COCOChorales. The hyperparameter T defines a trade-off between sampling accuracy and speed; we find T=35 to work well. As a reference, sampling an 8.35-s segment at f_s=22.05 kHz with BABE takes approximately 1 min on an NVIDIA A100-80GB GPU. We elaborate more on some of these hyperparameters in Sec. <ref>. § EXPERIMENTS AND RESULTS §.§ Hyperparameter Search The proposed sampling method relies on a set of hyperparameters that need to be tuned. This section studies the effect of some of the most relevant hyperparameters that need to be specified in the inference algorithm. Our goal is to find a robust set of hyperparameters and elucidate some intuition on their role. We study the following hyperparameters: the number of lowpass filter breakpoints S, the starting noise level σ_start, the reconstruction guidance step size ξ, and the number of sampling steps T. To conduct a hyperparameter search, we define an experimental setup where we randomly extract a set of 32 examples from the MAESTRO validation set, each of 8.35 s. The validation set is kept relatively small to allow an extensive search, which would be unfeasible in a larger set due to computational constraints. We simulate the bandlimited observations by applying a lowpass filter designed with the piecewise-linear parametrization from Sec. <ref>, using a single stage with f_c=1 kHz and a slope of -20 dB/oct. We report the results of blind bandwidth extension in terms of Log-Spectral Distance (LSD), a standard reference-based metric. As the ground-truth filter magnitude response H_ref is known, we report the filter estimation error in terms of the Frequency-Response Error (FRE), FRE=20log_10∑_f |H_ref(f)-Ĥ_ϕ(f)|/H_ref(f), expressed in dB. We also report the percentage of catastrophic failures (% fail), which are cases where the inference process evidently does not converge to a reasonable solution. We observe that these catastrophic failures often happen in very soft or even silent music passages, when the power of the observations is low and there is not enough guidance for the optimization. In Fig. <ref>, we can see a correlation between the root-mean-squared (RMS) signal level of the degraded observations and the FRE, also showing that the failures happen at low RMS values. In this study, we prioritize finding a hyperparameter set that avoids catastrophic failures and minimizes the filter estimation error, while we consider LSD with skepticism as, being a reference-based metric, it is not always reliable for evaluating generative models. We search each of the hyperparameters sequentially, starting from a set of hyperparameters that was chosen by trial and error during the development stages.
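Before examining the individual hyperparameters, the following small sketch (illustrative code, using the MAESTRO values quoted in the sampling details above) makes the roles of T, σ_start, σ_min, and ρ in the discretized schedule concrete.

# Sketch: warm-initialized noise-level schedule tau_i for i = 0, ..., T-1.
import numpy as np

def noise_schedule(T=35, sigma_start=0.2, sigma_min=1e-4, rho=8.0):
    i = np.arange(T)
    return (sigma_start**(1 / rho)
            + i / (T - 1) * (sigma_min**(1 / rho) - sigma_start**(1 / rho)))**rho

The first element equals σ_start, the last equals σ_min, and a larger ρ concentrates more steps at low noise levels, where fine detail is generated.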
§.§.§ Number of filter breakpoints S First, we study how the parametrization of the lowpass filter affects the performance by varying the number S of piecewise-linear breakpoints. In the top part of Table <ref>, we find that a single breakpoint (S=1) works well in terms of LSD, but the limited degrees of freedom affect the filter estimation error. We also observe that increasing the number of breakpoints helps to improve the FRE and, more importantly, reduces the number of catastrophic failures, as more degrees of freedom usually lead to a more stable performance. However, there does not seem to be a benefit in using more than five breakpoints. We thus choose S=5. §.§.§ Starting noise level σ_start We study the optimal value for the starting noise level at which we initialize the diffusion process. In an informal qualitative analysis reported in Table <ref>, we identify that this parameter allows for tuning a trade-off between faithfulness and quality. Using a too-low value may limit the room for improvement by the diffusion-based generation, as was observed in <cit.>. However, we find that higher values for σ_start cause more catastrophic failures, and when the initialization is (almost) pure noise, σ_start=1, the algorithm fails more than half of the time. We choose σ_start=0.2, which strikes a good balance between realism and faithfulness, and produces a reliable performance. §.§.§ Step size ξ Next, we study the effect of the step size ξ, which controls the weight given to the cost function C_audio during sampling. This parameter plays a role similar to that of σ_start in controlling a trade-off between faithfulness and quality. On one hand, larger values of ξ encourage better consistency with the observations, but the strong guidance introduces error into the sampling, sacrificing quality. Our results in Table <ref> show that too large values for ξ lead to catastrophic failures. On the other hand, too low values of ξ represent a weaker guidance, sacrificing faithfulness to the observations. We choose ξ=0.2 as it strikes a balance between performance and reliability. §.§.§ Number of sampling steps T Finally, we study the effect of the number of sampling steps T in the diffusion process. As expected, increasing T leads to a better filter estimation error, but we see diminishing returns in Table <ref> when T=50. As a consequence, we choose T=35. §.§.§ Other hyperparameters At this point, we also ablate the frequency weighting in C_filter and observe that, without it, the algorithm still works but the FRE increases up to -0.60 dB. The step size μ, used for the filter optimization, is also an important hyperparameter. Considering that the filter optimization iterations are relatively cheap, we choose a conservatively low value for μ, which allows for a stable convergence, although requiring a higher number of iterations. We also found it beneficial to use separate step sizes for the cutoff (μ_f_c=1000) and slope (μ_A=10) parameters. §.§ Objective Evaluation of Lowpass Filtered Signals In this study, we evaluate the proposed blind bandwidth extension method on a subset of the MAESTRO test set, which consists of 52 complete recordings, resulting in approximately 6 h of audio data. We use two different lowpass filters with cutoff frequencies of 1 kHz and 3 kHz, both designed as finite impulse response filters with a Kaiser window and order 500. Table <ref> reports the results with two different objective metrics: LSD and Fréchet Distance (FD).
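For reproducibility, the two test degradations just described can be constructed, for instance, as sketched below; the Kaiser β value is an assumption, since the text only specifies the window type and the filter order of 500.

# Sketch: order-500 FIR lowpass filters with a Kaiser window, as used for evaluation.
import numpy as np
from scipy.signal import firwin, lfilter

def make_lowpass(cutoff_hz, fs=22050, order=500, beta=8.0):
    taps = firwin(order + 1, cutoff_hz, window=("kaiser", beta), fs=fs)
    return lambda x: lfilter(taps, [1.0], x)

lowpass_1k = make_lowpass(1000.0)   # f_c = 1 kHz condition
lowpass_3k = make_lowpass(3000.0)   # f_c = 3 kHz condition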
LSD is a classic reference-based metric commonly used to evaluate audio bandwidth extension methods. This metric provides information about the similarity of the reconstructed signal with the ground truth target. However, when it comes to evaluating an audio bandwidth extension system based on a generative model, LSD may not be adequate, as the generated audio may have different spectral content from the reference. FD <cit.> uses embeddings from PANNS <cit.>, a pre-trained audio classifier, to compare the distributions of sets of original and reconstructed audio signals. This metric only provides information about the general audio quality of the reconstructed outputs, and it should be considered with skepticism as there are no guarantees about its reliability. We compare the performance of the proposed BABE method against several baselines and ablations. The first of them is the most directly comparable baseline, BEHM-GAN, which is a GAN-based model designed for bandwidth extension of historical music <cit.>. During the training, BEHM-GAN was regularized so that it generalizes to a wide range of lowpass filters. In comparison to BABE, BEHM-GAN requires specialized training and, thus, it is not zero-shot. We only evaluate it at f_c=3 kHz because the method was not designed to work in the range of f_c=1 kHz. Table <ref> shows that BABE outperforms BEHM-GAN in terms of FD, but BEHM-GAN wins on LSD. This is not a surprise, as BEHM-GAN was optimized with a reconstruction loss that encouraged it to “copy” the low-frequency (correct) part of the spectrum, whereas BABE does this with fewer constraints. The second compared method, AERO^∗, is based on the super-resolution model proposed by Mandel et al. <cit.>. The original method (AERO) consists of a spectral domain model trained with a mixture of reconstruction and adversarial losses with paired low-high resolution examples. However, since AERO was originally designed specifically for audio super-resolution, we were obliged to modify the training pipeline to incorporate the method into our evaluation setup, hence the differentiation ∗ in the acronym. Instead of applying the spectral upsampling proposed in <cit.>, we used the lowpass filtered signals as inputs. We trained two models with the MAESTRO dataset using the same lowpass filters as used for evaluation, without applying any kind of filter regularization <cit.> so as not to bias the results. As a consequence, the trained models are overfitted to the training filters and are unable to generalize to different unseen lowpass filters. For this reason, we refer to this method as Oracle, given that it has an advantageous and unrealistic position with respect to the other, blind, compared methods. Probably for reasons similar to BEHM-GAN's, AERO^∗ obtained a smaller LSD than BABE, but a larger FD. The next test condition, CQT-Diff+, is identical to the proposed method, but it uses knowledge of the true lowpass filter instead of blindly estimating it. This condition corresponds to the same method as proposed in <cit.>, but with the improved architecture from <cit.>, and using the same implementation details as reported for BABE in this paper (only those that apply to the non-blind setting). Therefore, this is also an Oracle baseline, but one more directly comparable to BABE. We interpret this condition as an upper bound on the expected performance of BABE.
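For completeness, a standard way of computing the LSD values reported in this section is sketched below; the STFT settings are assumptions, as they are not specified in the text.

# Sketch: log-spectral distance between a reference and a reconstructed signal.
import numpy as np
from scipy.signal import stft

def lsd(reference, estimate, fs=22050, nperseg=2048):
    _, _, R = stft(reference, fs=fs, nperseg=nperseg)
    _, _, E = stft(estimate, fs=fs, nperseg=nperseg)
    d = np.log10(np.abs(R)**2 + 1e-10) - np.log10(np.abs(E)**2 + 1e-10)
    return float(np.mean(np.sqrt(np.mean(d**2, axis=0))))   # RMS over frequency, mean over frames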
As reported in Table <ref>, the CQT-Diff+ condition and BABE obtain very similar values of LSD and FD, meaning that the blind enhancement performance is almost equal to that of its non-blind, filter-informed counterpart. The last condition we include is what we consider a naive strategy for blind restoration using a diffusion model. This consists of applying an unconditional non-guided diffusion model using warm initialization as the only conditioning method. The diffusion model is then allowed to move freely, starting from the noisy lowpass-filtered observations, until it hits the data manifold. This strategy is referred to in other works as style transfer <cit.>. This condition obtained a relatively low FD score but a significantly worse LSD. The generated examples were also qualitatively inferior to the proposed method, as there was no constraint applied to ensure consistency with the observations. §.§ Subjective Evaluation of Lowpass Filtered Signals We acknowledge that the available objective metrics often do not correlate with perceptual audio quality and, thus, there is a need to carry out additional subjective experiments to more properly evaluate the performance of bandwidth extension systems. To do so, we design a listening test based on the MUSHRA recommendation <cit.>. The test includes four different 8.35 s audio excerpts extracted from the MAESTRO test set, which we process with the same lowpass filters as used in the objective evaluation (f_c=1 kHz and f_c=3 kHz), making a total of eight test pages. On each page, we included the same bandwidth extension baselines as in the objective evaluation, alongside the hidden reference and the lowpass filtered recording, which functions as a low-range anchor. The listeners were asked to rate the individual audio excerpts between 0 and 100 in terms of overall audio quality. This test question differs slightly from the MUSHRA recommendation, which is based on pairwise similarity to the reference. The audio examples included in the test are available on the companion webpage [http://research.spa.aalto.fi/publications/papers/ieee-taslp-babe/]. After completing the experiment, some participants reported that some of the examples had better quality than the reference. This explains the confidence intervals and outliers of the reference condition. We attribute this phenomenon to the fact that the diffusion model is unable to generate the noises and impurities that the original recording may contain and, thus, it additionally serves as a denoiser. The test results are shown as boxplots in Fig. <ref>. As can be seen in Fig. <ref>b, BABE widely outperformed the baseline BEHM-GAN at f_c=3 kHz (p-value of 3×10^-9 in a paired t-test). As expected, both non-blind oracle conditions obtained high scores, but the proposed BABE method also obtained similarly high ratings. As examined through a paired t-test, the results do not show strong statistically significant differences between BABE and AERO^∗ in either the f_c=1 kHz or the f_c=3 kHz condition (p-values of 0.98 and 0.08, respectively). When compared against CQT-Diff+, the distribution of the scores given to BABE is significantly inferior at f_c=1 kHz (p-value 2×10^-3), but not at f_c=3 kHz (p-value 0.16). These results indicate that the blind filter estimation hardly affects the perceived audio quality when compared to the oracle baselines, demonstrating the effectiveness of the proposed method.
§.§ Subjective Evaluation of Processed Historical Recordings We experiment with applying BABE on real historical piano recordings, in particular 1920s gramophone recordings. We apply the pipeline specified in Sec. <ref>, where the original recordings are firstly denoised using <cit.> and then bandwidth-extended with the proposed method. We observe that, in this context, BABE removes some residual artifacts that are still present in the denoised signal. This phenomenon happens because the recording conditions the diffusion model through reconstruction guidance in a non-invasive manner and, since the diffusion model only contains a prior on piano music, it is unable to regenerate the residual noises. We evaluate the performance of BABE in this context in a separate subjective test, where we aim to compare it against the baseline BEHM-GAN <cit.> and the original recordings denoised using <cit.>. To do so, we designed a MUSHRA-style test but, since a reference was unavailable, the reference presented to the listeners was a modern recording of the same piano piece, played on a different piano, in a different recording environment, and, for obvious reasons, by a different performer. The purpose of this reference was to set an upper bound on the expected audio quality for the processed audio excerpts, but not to serve as a pairwise comparable example; thus, it was not included as a hidden condition. We also included an easy-to-recognize low-quality anchor, which consisted of the original denoised recording lowpass filtered at 1 kHz. The test included four different stimuli, all of them classical piano recordings extracted from the Internet Archive[https://archive.org/]. The results of the experiment are shown in Fig. <ref>. It can be noted that the proposed method obtained higher scores than the compared baselines. BABE obtained a median score of 64, which corresponds to the “Good” quality range, while BEHM-GAN was more often rated as “Fair”, and the original denoised recording as “Poor”. The results of a paired t-test, with small p-values (<1× 10^-5), indicate a statistically significant improvement of BABE over both the denoised recording and BEHM-GAN. We refer the reader to the companion webpage[http://research.spa.aalto.fi/publications/papers/ieee-taslp-babe/] for the audio examples included in the listening test, as well as other full-length audio restoration demos. §.§ Application to Other Musical Instruments The BABE method is also applicable to other music recordings, not only piano music. The requirement is a sufficiently large dataset for training the model. As specified in Sec. <ref>, we trained string, brass, and woodwind instrument models using the different subsets of COCOchorales <cit.>. We then processed real historical gramophone recordings containing these instrument sounds. Figs. <ref>(b)-(d) present examples of pairs of denoised string, woodwind, and brass music excerpts, produced with the denoising model from <cit.>, and their enhanced versions produced using BABE. While the denoised signals have little content above about 3 or 4 kHz, the bandwidth-extended signals generally show spectral lines at all frequencies up to the highest frequency displayed, 8 kHz.
To demonstrate the perceptual improvement offered by BABE when restoring recordings containing other musical instruments, we designed another subjective experiment. The question, in this case, was whether the proposed method produces a significant quality improvement with respect to the denoised-only version. We designed a two-way forced-choice listening test, or preference test, where listeners were asked to decide which of the two presented stimuli had a better sound quality. On each page of the test, three stimuli were presented to the listener: one was the “original” item, which was an unprocessed digitized gramophone recording, and the others were two of its restored versions, one denoised with the method in <cit.> and the other one also denoised and additionally processed with the proposed BABE method. The results of the preference test are reported in Fig. <ref>, where it can be seen that the bandwidth-extended version produced by BABE was preferred almost unanimously for the strings and woodwind classes. For the brass examples, the responses were divergent, and no advantage could be indicated. A potential explanation for these results is that the string and woodwind instrument sounds are brighter and benefit more from bandwidth extension than brass instrument tones, which do not contain as much energy above the cutoff frequency. These positive results indicate that the evaluated diffusion models have strong out-of-domain generalization by default, as we applied no specific regularization to account for the train-test distribution mismatch. We remark that the training data of COCOChorales is synthetic and only contains music compositions in the style of Bach chorales. Nevertheless, the models can generalize well to different real-world recordings, as long as they are relatively similar to the training data content-wise. § CONCLUSION A novel method for blind audio bandwidth extension was presented. The proposed method, called BABE, is capable of extending the high-frequency bandwidth of music signals while blindly estimating the lowpass filter degradation. BABE only requires training an unconditional diffusion model with data from the target domain of broadband high-quality music, and can be applied to perform blind bandwidth extension in a zero-shot setting. As evaluated with synthetic lowpass filtered signals using objective and subjective metrics, the proposed method outperforms existing blind bandwidth extension methods and delivers competitive performance against non-blind oracle baselines, which had knowledge of the true test lowpass filter. The proposed BABE method is applicable to restoring real historical music recordings, which suffer from an unknown lowpass degradation. According to the results of subjective listening tests, the BABE method delivers “Good” audio quality and is, in most cases, preferred over the original (only denoised) recordings. However, the quality of the historical music restoration is still limited by the distribution shift between training and test data. Luckily, the proposed diffusion model shows robustness in adapting to out-of-domain cases, but more efforts to minimize the distribution mismatch could be beneficial for improving its performance. This work assumes that, apart from additive noise, the bandwidth limitation is the only degradation to account for. However, in practice, historical recordings suffer from other degradations that are here overlooked, such as coloration, distortion, or pitch variation <cit.>.
The task of jointly restoring different degradations using deep learning is a potential direction for future work. § ACKNOWLEDGMENTS We thank the participants of the listening tests. We acknowledge the computational resources provided by the Aalto Science-IT project. Eloi Moliner received his B.Sc. degree in Telecommunications Technologies and Services Engineering from the Polytechnic University of Catalonia, Spain, in 2018 and his M.Sc. degree in Telecommunications Engineering from the same university in 2021. He is currently a doctoral candidate in the Acoustics Lab of Aalto University in Espoo, Finland. His research interests include digital audio restoration and audio applications of machine learning. Filip Elvander (Member, IEEE) received the M.Sc. in Industrial Engineering and Management and the Ph.D. in Mathematical Statistics from Lund University, Sweden, in 2015 and 2020, respectively. He has been a postdoctoral research fellow at the Stadius Center for Dynamical Systems, Signal Processing and Data Analytics, KU Leuven, Belgium, and with the Research Foundation – Flanders (FWO). He is currently an Assistant Professor of Signal Processing at the Department of Information and Communications Engineering, Aalto University, Finland. His research interests include inverse problems, robust estimation, and convex modeling and approximation techniques in statistical signal processing and spectral analysis. Prof. Elvander is a member of the EURASIP Technical Area Committee on Signal and Data Analytics for Machine Learning. Vesa Välimäki (Fellow, IEEE) received his M.Sc. and D.Sc. degrees in electrical engineering from the Helsinki University of Technology (TKK), Espoo, Finland, in 1992 and 1995, respectively. He was a Postdoctoral Research Fellow at the University of Westminster, London, UK, in 1996. In 1997–2001, he was a Senior Assistant (cf. Assistant Professor) at TKK. In 2001–2002, he was a Professor of signal processing at the Pori unit of the Tampere University of Technology. In 2008–2009, he was a Visiting Scholar at Stanford University. He is currently a Full Professor of audio signal processing and the Vice Dean for Research in electrical engineering at Aalto University, Espoo, Finland. His research interests are in audio and musical applications of machine learning and signal processing. Prof. Välimäki is a Fellow of the IEEE and a Fellow of the Audio Engineering Society. In 2007–2013, he was a Member of the Audio and Acoustic Signal Processing Technical Committee of the IEEE Signal Processing Society and is currently an Associate Member. In 2005–2009, he served as an Associate Editor of the IEEE Signal Processing Letters and in 2007–2011, as an Associate Editor of the IEEE Transactions on Audio, Speech and Language Processing. In 2015–2020, he was a Senior Area Editor of the IEEE/ACM Transactions on Audio, Speech and Language Processing. In 2007, 2015, and 2019, he was a Guest Editor of special issues of the IEEE Signal Processing Magazine, and in 2010, of a special issue of the IEEE Transactions on Audio, Speech and Language Processing. Currently, he is the Editor-in-Chief of the Journal of the Audio Engineering Society.
http://arxiv.org/abs/2306.11672v1
20230620165008
Critical percolation in the ordering kinetics of twisted nematic phases
[ "Renan A. L. Almeida" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "cond-mat.soft" ]
Instituto de Física, Universidade Federal do Rio Grande do Sul, CP 15051, 91501-970, Porto Alegre RS, Brazil I report on the experimental confirmation that critical percolation statistics underlie the ordering kinetics of twisted nematic phases in the Allen-Cahn universality class. Soon after the ordering starts from a homogeneous disordered phase and proceeds towards a broken ℤ_2-symmetry phase, the system is attracted to the random percolation fixed point at a special timescale t_p. At this time, exact formulae for conformally invariant crossing probabilities in percolation theory agree with the corresponding probabilities in the experimental data. The ensuing evolution of the number density of hull-enclosed areas is described by an exact expression derived from a percolation model endowed with curvature-driven interface motion. The scaling relation for hull-enclosed areas versus perimeters reveals that the fractal percolation geometry is progressively morphed into a regular geometry up to the order of the classical coarsening length. Phase-ordering kinetics via domain coarsening is a ubiquitous phenomenon that has been studied over many decades <cit.>. The iconic example is the ordering of ferromagnetic phases in the bidimensional kinetic Ising model after a quench from above to below the critical temperature. Evolving with single flip kinetics, the mosaic of spin domains acquires a morphology statistically equivalent to that of critical percolation <cit.> before developing the coarsening length, R(t) ∼ t^1/2, that the dynamical scaling hypothesis in this case relies upon <cit.>; t is the time elapsed from the quench. At the continuum scaling limit, such a phenomenology is recast as a nonconserved scalar field evolved by the time-dependent Ginzburg-Landau equation (model A) with a symmetric double well potential with minima at ±ϕ_0 <cit.>. The dynamics is concentrated at the motion of interfaces (i.e., the zeros of the scalar field), whose curvatures are reduced according to the Allen-Cahn (AC) equation <cit.>: v = -Dκ, where v is the normal velocity of an infinitesimal segment of the interface, κ is the local curvature, and D is a parameter. Starting from a homogeneous disordered initial condition, the low-temperature dynamics of both discrete and continuum finite-size models quickly visit configurations characterized by the existence of giant percolating clusters, whose sizes, occupying a large fraction of the system, are at variance with the typical domain size kept at the microscopic level <cit.>. Because of the interplay between energy-conserving and energy-decreasing kinetic moves <cit.>, however, these initial percolating clusters are broken and rebuilt multiple times until the dynamics converges to the random percolation fixed point at a special timescale t_p <cit.>. The underlying percolating structure is then permanently sealed at this time <cit.>, leaving the role of smoothing boundaries and coarsening domain areas for the asymptotic AC dynamics. Interestingly, from t_p onwards, the crossing probabilities for spin domains in the Ising-Glauber model, lying in a rectangle of aspect ratio r, numerically follow <cit.> the conformally invariant probabilities exactly derived for critical percolation <cit.>.
As I shall show below, such a numerical result is here experimentally confirmed, along with the first observation of t_p in real systems, in addition to the experimental confirmation of Cardy's celebrated formula <cit.>. For free boundary conditions, a domain that crosses over a rectangle by means of a vertical spanning component, without having a horizontal spanning component, occurs with probability <cit.> ℱ_h̄v(r) = [η(r)/(Γ(1/3)Γ(2/3))] _3F_2(1, 1, 4/3; 2, 5/3; η), where the overbar denotes the absence of the corresponding spanning component, Γ(·) and _mF_n(a_1, ..., a_m; b_1, ..., b_n; η) are the Gamma and the generalized hypergeometric functions; η(r) is defined, and implicitly related to r, as η = [(1-k)/(1+k)]^2, r ∈ℝ^+, with r = 2K(k^2)/K(1-k^2) <cit.>, where K(u) is the complete elliptic integral of the first kind. A π/2 rotation of the rectangle maps the horizontal direction onto the vertical direction, and vice-versa. Then, ℱ_hv̄(r) = ℱ_h̄v(1/r). The probability for a dual-spanning configuration, ℱ_hv, is obtained from the normalization requirement, 2ℱ_hv(r) = 1 - ℱ_h̄v(r) - ℱ_hv̄(r), where the factor 2 enters because a configuration with no spanning cluster in critical percolation shall be, by the up-down symmetry, counted as a mosaic of dual-spanning type for the ordering problem <cit.>. The crossing probability for domains containing at least a vertical spanning component is Cardy's formula. It reads <cit.>: ℱ_h̄v(r) + ℱ_hv(r) = [3Γ(2/3)/Γ(1/3)^2] η^1/3 _2F_1(1/3, 2/3; 4/3; η). By letting a critical percolation configuration, at the continuum scaling limit, evolve with the curvature-driven interface motion at zero temperature, one can also derive an exact expression for the number density of hulls with enclosed area between A and A+dA, n_h(A,t)dA <cit.>, n_h(A,t) = 2c_h/(A + λ_ht)^2, c_h = 1/(8π√(3)), with c_h being a universal constant <cit.>. Equation (<ref>) is valid for A_0 ≪ A ≪ L^2 and t ≥ t_p, where A_0 and L^2 denote a microscopic area and the system area, respectively; λ_h is a parameter. Despite being derived from a continuum model, Eq. (<ref>) describes the evolution of hull-enclosed areas in the kinetic Ising model with nonconserved dynamics evolving at low temperatures <cit.>. Based on fitting, the power-law decay of Eq. (<ref>) was glimpsed in an experiment <cit.>, however, without clear confirmation for its pivotal time dependence or universal constant c_h. In this Letter, I report on the experimental evidence that critical percolation statistics underlie the ordering kinetics of twisted nematic phases in the Allen-Cahn universality class <cit.>. I do so by confirming all the exact formulae, Eqs. (<ref>) to (<ref>), based on percolation theory, including additional formulae and scaling relations derived from the interplay between ordering, percolation, and the coarsening regime. Below, I describe the experimental methods before presenting the results and discussions. For more detail on the experimental setup, see Ref. <cit.>. A twisted nematic liquid crystal (TNLC) cell was prepared by injecting a solution of N-4-methoxybenzylidene-4-butylaniline (purity > 98%) doped with 0.01 wt% of tetrabutylammonium bromide in a rectangular region, 12 μm × 16 mm × 16 mm, enclosed by parallel glass plates and polyester spacers. Inner surfaces of the plates, coated with indium tin oxide and polyvinyl alcohol, were mechanically rubbed to set an orientation for the nematic field right on them. These orientations were made orthogonal between the plates to induce left- and right-hand twisted nematic conformations along the bulk.
For optical observations, I inserted the cell on the stage of an IX73 Olympus microscope before illuminating it with circularly polarized green-filtered light. Images formed by the light transmitted through the TNLC layer were recorded by a B1620 Imperx camera. Each image comprises an area L^2 = 1208a × 1608a with pixel size a = 1.82 μm. The temperature of the TNLC layer was kept at 25^∘ C with fluctuations of at most 10 mK. To induce ordering kinetics in the material, a sinusoidal voltage (70 V; 100 Hz) was applied through the cell to generate the high density of string-like topological defects that characterizes the Dynamical Scattering Mode 2 (DSM2) <cit.>. DSM2 provides a nematic-disordered initial condition because the nematic order becomes chaotic with short correlations in space and time, nearly 1 μm and 10 ms, respectively, for a 50 μm thick nematic layer under an a.c. electric field (60 V; 150 Hz) <cit.>. After keeping the cell for 2 min in DSM2, the field was suddenly removed – defining the time t = 0 s – and the stochastic ordering of twisted nematic phases was then tracked. The ordering is quantified by a binary scalar nonconserved order parameter endowed with nearly-symmetric, AC dynamics <cit.>. By measuring the shrinking rate of circular domains, the timescale of the curvature-driven motion can be quantified by D = λ_h/2π = 122(4) μm^2 s^-1 <cit.>, from which one reads λ_h = 767(25) μm^2 s^-1. Using this setup, I focus on geometrical aspects of the domain morphology to test exact predictions based on percolation. To this aim, 1000 independent ordering histories lasting 30 s each were collected. Images were acquired at a 5 s^-1 frame rate. In the analysis, a domain is defined as a connected path of the same phase. Each domain contains an external contour defined as its hull. Domains and hull-enclosed areas were detected by a labelling <cit.> and a biased-walker algorithm <cit.>. The hull perimeter is defined as the number of broken bonds of each pixel at the hull times the pixel size. By following the evolution of the domain morphology in Fig. <ref>, we observe that the macroscopic shapes of the largest domains in the panel at 1.4 s are preserved by the dynamics up to, at least, the latter panel at 30 s. The main changes during the evolution of these domains take place at their interfaces and areas: the former become smoother because of the curvature-driven motion, and the latter increase as a result of the shrinking and disappearance of inner domains. For t ≥ 1.4 s, the largest domain in Fig. <ref> crosses over the opposite sides of the image by spanning it along both of its horizontal and vertical directions. Crossing events like this are the rule, since all the 10^3 mosaics at t > 1 s have at least a domain that crosses over the image. Both the crossing event and the crossing type, however, vary with the geometry considered. To quantify these events over a simple geometry, I consider a rectangle of aspect ratio r = l_x / 1208a, r = 0.2, 0.3, ..., 1, located over the original image. The rectangle is oriented such that all sides are parallel to sides of the image, while its upper left corner is fixed at that same corner of the panels. Since images were taken in a region far from the borders of the sample, it was checked that different positions of the rectangle do not alter the results. Figure <ref>(a) shows the experimental crossing probabilities for each one of the three crossing types, as a function of r, computed from configurations at 1.4 s.
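For reference, the exact curves entering this comparison can be evaluated numerically; the following sketch (our own illustration, using the mpmath library) transcribes the formulae exactly as written above and inverts the aspect-ratio relation by a simple bisection.

# Sketch: numerical evaluation of the critical-percolation crossing formulae.
import mpmath as mp

def eta_of_r(r, iters=80):
    # invert r = 2 K(k^2) / K(1 - k^2) for k by bisection, then eta = [(1-k)/(1+k)]^2
    g = lambda k: 2 * mp.ellipk(k**2) / mp.ellipk(1 - k**2) - r
    lo, hi = mp.mpf('1e-12'), mp.mpf(1) - mp.mpf('1e-12')
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = (lo + hi) / 2
    return ((1 - k) / (1 + k))**2

def F_single(r):   # vertical-only crossing probability, F_h-bar-v(r)
    eta = eta_of_r(r)
    pref = eta / (mp.gamma(mp.mpf(1)/3) * mp.gamma(mp.mpf(2)/3))
    return pref * mp.hyp3f2(1, 1, mp.mpf(4)/3, 2, mp.mpf(5)/3, eta)

def F_cardy(r):    # at least one vertical crossing (Cardy's formula)
    eta = eta_of_r(r)
    c = 3 * mp.gamma(mp.mpf(2)/3) / mp.gamma(mp.mpf(1)/3)**2
    return c * eta**(mp.mpf(1)/3) * mp.hyp2f1(mp.mpf(1)/3, mp.mpf(2)/3, mp.mpf(4)/3, eta)

print(F_single(1))   # ~0.177, the single-crossing value quoted for the square geometry
print(F_cardy(1))    # ~0.5, as required by symmetry at r = 1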
Ordering times within 1.4s≤ t ≤4s yield statistically similar results. The outcomes are to be compared with the exact results for percolation theory, Eqs. (<ref>), (<ref>), (<ref>), shown as dashed or solid lines in the plot. Notice that the three exact curves have monotonic behaviors easily distinguishable, one from another. While ℱ_hv quickly decreases from 1 (for the thin slab geometry at r < 0.2) to ≈ 0.18 (for the square geometry at r = 1), 2ℱ_hv varies in an opposite trend, augmenting from 0 to ≈ 0.64 along the increasing scale for r. In its turn, being smaller or at most equal than their counterparts, ℱ_hv has only a moderate lift with r from 0 until the meeting point ℱ_hv(1) = ℱ_hv(1). By noting the exact result ℱ_hv(1) = 1/4 + (√(3)/4π)ln(27/16) = 0.322... <cit.>, we can read 2ℱ_hv(1) = 0.644.... Using this value in Eq. (<ref>), we find ℱ_hv(1) = 1/2 - ℱ_hv(1) = 0.177... <cit.>. Over the whole range of r, the experimental results are well described by the exact formulae for critical percolation – for all the three crossing types. The most likely values for the probabilities hv and hv are in excellent agreement with Eqs. (<ref>) and (<ref>), correspondingly. Their uncertainties are relatively small. Given that the measures are realized on a partial region of the sample, and that the rectangle defining a crossing event is considerably smaller than the image, such an agreement is yet more impressive. The data for the hv crossing type is right on the top of ℱ_hv(r). For the special square geometry, probabilities for the hv and hv types are statistically equal to 0.169(37), a value that encompasses ℱ_hv(1) = 0.177.... By consistence, the dual crossing probability in the liquid crystal setup is 0.66(4) at r = 1, again in agreement with the prediction 2ℱ_hv(1)=0.644.... Finally, for the meeting point ℱ_hv = 2ℱ_hv≈ 0.48 at r ≈ 0.63, the closest experimental data available gives 0.49(4) at r = 0.6. Having seen the accord with percolation solutions for the fundamental triad of crossing probabilities, the measurements for hv and hv crossing types can also be combined to confirm Eq. (<ref>), Cardy's formula, shown in Fig. <ref>(b). All of this accord, however, is only reached for configurations from 1.4s, a fact that unveils t_p = 1.4(1)s for a square of side l_x ∼ 10^3a. Now we turn to quantify the evolution of the domain morphology. Figure <ref>(a) shows results for n_h(A,t) after exclusion of domains that touch a border of the image. At fixed t, n_h(A,t) is formed by three parts along the A axis. In the smallest area part, 10μ m^2 < A < 50μ m^2≈ 27a, n_h(A,t) probes tiny bubble-like clusters in addition to thermal domains that are not related to the coarsening dynamics <cit.>. Because of thermal domains, n_h manifests a temperature-dependent decreasing with A that numerically can be accounted for by equilibrium distributions <cit.>. After such an initial decreasing with A, n_h has a plateau region. The extension of this plateau is delimited by the time-dependent coarsening area, R^2(t) ∼ t. Because of the curvature-driven motion, small domains at this regime shrink and disappear first than those having unusually large sizes. As a result, n_h(A,t) plateau shifts down as time elapses. Unlike in <cit.>, this temporal dependence is here clearly observed. Residing on large areas, on the other hand, are the structures similar to critical percolation clusters <cit.>. 
After t_p, relaxation of these large structures becomes much slower than that of typical domains, so that n_h(A,t) power-law decay is essentially time-independent. In Fig. <ref>(a), I also plot the exact predictions from Eq. (<ref>) using the most likely value for the experimental settings (assumed hereafter), λ_h = 767μ m^2 s^-1. Remarkable agreement with theory is seen for both the plateau and the power-law regimes, 27a ≪ A ≪ L^2 ∼ 10^6, covering nearly one decade of variation in time. Minor deviations at small and large areas are due to the finite thermal length and system size, respectively <cit.>. In its master and universal form, Fig. <ref>(b), the results respect the dynamical scaling hypothesis over the full spatial and temporal ranges. Noteworthy, the plateau's level is compatible with 2c_h = 0.0459..., thus passing through the stringent test of Eq. (<ref>) – see <cit.>. The typical area, A ∼λ_ht, demarcates the crossover to the power law inherited from the universal percolation statistics, n_h(A,t) ∼ A^-2 at λ_ht ≪ A ≪ L^2. The morphing of clusters into regular (i.e., non-fractal) structures due to the ensuing ordering can be observed through a simple relation for hull-enclosed areas and associated perimeters <cit.>, A /λ_h t≃ b (p/√(λ_h t))^α. The typical length, √(λ_h t), is used as a normalization factor; b is a parameter, while α = 2 for regular hull geometry, but α < 2 for fractal hull geometry. Figure <ref>(a) shows the experimental outcomes for the pairs of hull-enclosed area versus associated perimeter, after bin average, in their dynamical scaling form, Eq. (<ref>). Domains that touch a border of the image are excluded from the statistics. The data indeed collapse onto a master function made of two power laws, y ∼ x^α, with y = A /λ_h t and x = p/√(λ_h t): one power law below, and the other above, the crossover scale x_c≈ 7. To quantify them, the local slopes α_loc(x) = d(ln y)/d(ln x) from the master curves are averaged over the following regions to find: α = 2.04(20) over 0.3 < x < 5; and α = 1.16(10) over 20 < x < 200 (uncertainties are taken as twice the standard deviation of the local slopes). Amplitudes are b ≈ 0.046 and 0.25, respectively. Note that α = 1.16(10) agrees with the exact value for percolation hulls, α = 8/7 = 1.142..., obtained from α = 2/d_f <cit.> with the hull fractal dimension d_f = 7/4 <cit.>. Therefore, the fractal geometry of hulls in the data is progressively morphed into a regular geometry, α = 2. This occurs because of the spreading of correlations set up to the order of the coarsening length R(t) ∼√(λ_h t): interfaces are smooth up to such length, while larger boundaries, keeping the memory of the critical percolation state, are largely rough. To conclude the analysis, we also studied the number density of hulls with perimeters between p and p+dp, n_h(p,t)dp. Using n_h(A,t) from Eq. (<ref>), and A(p,t) from Eq. (<ref>), one can derive the exact expression proposed in <cit.>: (λ_h t)^3/2n_h(x) ≃ 2α b c_h x(p,t)^α-1/ (1 + b x^α)^2 , for x(p,t) far from x_c. Equation (<ref>) describes hull perimeters arising in the kinetic Ising model after a quench from infinite to zero temperature <cit.>. Figure <ref>(b) shows n_h(p,t) computed in the twisted nematic setup. The plot displays the data in the collapsed, dynamical scaling form f(x) = (λ_h t)^3/2n_h(x). 
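Both number densities entering this analysis are elementary to evaluate once λ_h is fixed. The short sketch below, a plain illustration using the experimental estimate λ_h = 767 μm² s⁻¹ and the fitted amplitudes and exponents reported above, makes the plateau level 2c_h and the two branches of f(x) explicit.

```python
import numpy as np

C_H = 1.0 / (8 * np.pi * np.sqrt(3))   # universal constant c_h = 1/(8*pi*sqrt(3))
LAMBDA_H = 767.0                        # mu m^2 / s, measured shrinking-rate parameter

def hull_area_density(A, t, lam=LAMBDA_H):
    """Number density of hull-enclosed areas, n_h(A, t) = 2 c_h / (A + lambda_h t)^2."""
    return 2 * C_H / (A + lam * t) ** 2

def perimeter_scaling_function(x, b, alpha):
    """Universal form f(x) = (lambda_h t)^{3/2} n_h(x) for hull perimeters,
    f(x) = 2 alpha b c_h x^(alpha - 1) / (1 + b x^alpha)^2, valid away from x_c."""
    return 2 * alpha * b * C_H * x ** (alpha - 1) / (1 + b * x ** alpha) ** 2

print(2 * C_H)  # plateau level of (lambda_h t)^2 n_h(A, t): 0.0459...
x = np.logspace(-0.5, 2.3, 8)
print(perimeter_scaling_function(x, b=0.046, alpha=2.0))    # regular hulls, small x
print(perimeter_scaling_function(x, b=0.25, alpha=8 / 7))   # percolation-like hulls, large x
# The small-x branch peaks at x* = 1/sqrt(3 b) ~ 2.7 with f(x*) ~ 0.006, as seen in the data.
```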
Aside from a temperature-dependent region related to thermal domains at 0.03 ≤ x ≤ 0.3 (not shown), the universal scaling function f(x) comprises two parts along the x axis, from x = 0.3 onwards: a smooth increase that ends at a local maximum f ≈ 0.006 for x^* ≈ 3, and a power-law decay, f(x) ∼ x^-(α+1), in the tail x ≫ x_c. Both regimes are described by Eq. (<ref>). The formulae are consistently generated with values extracted from the analysis of Fig. <ref>(a). Explicitly, we use b = 0.046 and α = 2 for small scales, 0.3 < x < 5; while b = 0.25 and α = 1.25 (best result) for large scales, 20 < x < 200. Theoretical curves, shown as solid lines in Fig. <ref>(b), agree with the experimental results over their full range of validity, which lies far from x_c ≈ 7.
In conclusion, the ordering kinetics of twisted nematic phases in the Allen-Cahn universality class, starting from a homogeneous disordered initial condition, acquires a domain morphology statistically equivalent to that of the critical percolation model soon after the ordering begins. On theoretical grounds, this connection has allowed theoretical physicists to propose a set of exact formulae for the class of bidimensional nonequilibrium systems with a nonconserved scalar field. As we have seen, many of these formulae are here experimentally confirmed: (i) the crossing-probability formulae for rectangular geometries <cit.>; (ii) the evolution of the number density of hull-enclosed areas <cit.>; and (iii) the evolution of the number density of hull perimeters <cit.>. In addition, I have also observed (iv) the existence of t_p in a real system, besides measuring that (v) the fractal percolation geometry is progressively morphed into a regular geometry along with the spreading of correlations – the crossover between regular and irregular shapes occurring at the order of the coarsening length. Other important solvable aspects of percolation <cit.>, including that the scaling limit of hulls is described by stochastic Loewner evolution with diffusivity k = 6 (SLE_6) <cit.>, are rich directions for assessments in nonequilibrium systems. Additionally, the emergence of critical percolation statistics implies a new exponent t_p ∼ L^z_p <cit.> to appear in the dynamical scaling hypothesis. A measurement of the exponent z_p in continuum models or real systems is a key piece required to complete the picture. The early fluctuating formation and reshaping of percolating clusters, as well as the universal behavior of the dynamical cluster size heterogeneity in this regime <cit.>, also constitute an important path for future research.
I am grateful to J. J. Arenzon for the motivation and inspiring scientific discussions along this study. I thank K. A. Takeuchi for advice on the experiments and initial discussions on this work. This research was partially funded by KAKENHI from JSPS Grant No. JP16J06923, by the Brazilian National Council for Scientific and Technological Development – CNPq, and by the Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul – FAPERGS, Grant No. 23/2551-0000154-3. Data are available upon reasonable request.
References:
[1] A. J. Bray, Theory of phase-ordering kinetics, Adv. Phys. 51, 481 (2002). https://doi.org/10.1080/00018730110117433
[2] L. F. Cugliandolo, Coarsening phenomena, Comptes Rendus Physique 16, 257 (2015). https://doi.org/10.1016/j.crhy.2015.02.005
[3] J. J. Arenzon, A. J. Bray, L. F. Cugliandolo, and A. Sicilia, Exact Results for Curvature-Driven Coarsening in Two Dimensions, Phys. Rev. Lett. 98, 145701 (2007). https://doi.org/10.1103/PhysRevLett.98.145701
[4] A. Sicilia, J. J. Arenzon, A. J. Bray, and L. F. Cugliandolo, Domain growth morphology in curvature-driven two-dimensional coarsening, Phys. Rev. E 76, 061116 (2007). https://doi.org/10.1103/PhysRevE.76.061116
[5] K. Barros, P. L. Krapivsky, and S. Redner, Freezing into stripe states in two-dimensional ferromagnets and crossing probabilities in critical percolation, Phys. Rev. E 80, 040101(R) (2009). https://doi.org/10.1103/PhysRevE.80.040101
[6] J. Olejarz, P. L. Krapivsky, and S. Redner, Fate of 2D Kinetic Ferromagnets and Critical Percolation Crossing Probabilities, Phys. Rev. Lett. 109, 195702 (2012). https://doi.org/10.1103/PhysRevLett.109.195702
[7] S. M. Allen and J. W. Cahn, A microscopic theory for antiphase boundary motion and its application to antiphase domain coarsening, Acta Metallurgica 27, 1085 (1979). https://doi.org/10.1016/0001-6160(79)90196-2
[8] T. Blanchard, F. Corberi, L. F. Cugliandolo, and M. Picco, How soon after a zero-temperature quench is the fate of the Ising model sealed?, EPL (Europhysics Letters) 106, 66001 (2014). https://doi.org/10.1209/0295-5075/106/66001
[9] T. Blanchard, L. F. Cugliandolo, M. Picco, and A. Tartaglia, Critical percolation in the dynamics of the 2D ferromagnetic Ising model, J. Stat. Mech.: Theory Exp. 2017, 113201 (2017). https://doi.org/10.1088/1742-5468/aa9348
[10] A. de Azevedo-Lopes, R. A. L. Almeida, P. M. C. de Oliveira, and J. J. Arenzon, Energy-lowering and constant-energy spin flips: Emergence of the percolating cluster in the kinetic Ising model, Phys. Rev. E 106, 044105 (2022). https://doi.org/10.1103/PhysRevE.106.044105
[11] G. M. T. Watts, A crossing probability for critical percolation in two dimensions, J. Phys. A: Math. Gen. 29, L363 (1996). https://doi.org/10.1088/0305-4470/29/14/002
[12] J. Dubédat, Excursion decompositions for SLE and Watts' crossing formula, Probab. Theory Relat. Fields 134, 453 (2006). https://doi.org/10.1007/s00440-005-0446-3
[13] J. L. Cardy, Critical percolation in finite geometries, J. Phys. A: Math. Gen. 25, L201 (1992). https://doi.org/10.1088/0305-4470/25/4/009
[14] S. Smirnov, Critical percolation in the plane: conformal invariance, Cardy's formula, scaling limits, C. R. Acad. Sci. Paris, Ser. I Math. 333, 239 (2001). https://doi.org/10.1016/S0764-4442(01)01991-7
[15] J. Cardy and R. M. Ziff, Exact Results for the Universal Area Distribution of Clusters in Percolation, Ising, and Potts Models, J. Stat. Phys. 110, 1 (2003). https://doi.org/10.1023/A:1021069209656
[16] A. Sicilia et al., Experimental Test of Curvature-Driven Dynamics in the Phase Ordering of a Two Dimensional Liquid Crystal, Phys. Rev. Lett. 101, 197801 (2008). https://doi.org/10.1103/PhysRevLett.101.197801
[17] R. A. L. Almeida and K. A. Takeuchi, Phase-ordering kinetics in the Allen-Cahn (Model A) class: Universal aspects elucidated by electrically induced transition in liquid crystals, Phys. Rev. E 104, 054103 (2021). https://doi.org/10.1103/PhysRevE.104.054103
[18] A. Joets and R. Ribotta, Hydrodynamic transitions to chaos in the convection of an anisotropic fluid, J. Phys. France 47, 595 (1986). https://doi.org/10.1051/jphys:01986004704059500
[19] S. Kai and W. Zimmermann, Pattern Dynamics in the Electrohydrodynamics of Nematic Liquid Crystals, Prog. Theor. Phys. Suppl. 99, 458 (1989). https://doi.org/10.1143/PTPS.99.458
[20] J. Hoshen and R. Kopelman, Percolation and cluster distribution. I. Cluster multiple labeling technique and critical concentration algorithm, Phys. Rev. B 14, 3438 (1976). https://doi.org/10.1103/PhysRevB.14.3438
[21] R. S. Maier, On Crossing Event Formulas in Critical Two-Dimensional Percolation, J. Stat. Phys. 111, 1027 (2003). https://doi.org/10.1023/A:1023006413433
[22] Note: the relation in Eq. (6) of <cit.>, with the identification ζ - 1 = α by Eqs. (62) and (63) in <cit.>, where ζ is the exponent setting the decay of the number density of hull perimeters. Consult also R. M. Ziff, Phys. Rev. Lett. 56, 545-548 (1986).
[23] H. Saleur and B. Duplantier, Exact Determination of the Percolation Hull Exponent in Two Dimensions, Phys. Rev. Lett. 58, 2325 (1987). https://doi.org/10.1103/PhysRevLett.58.2325
[24] J. Cardy, The number of incipient spanning clusters in two-dimensional percolation, J. Phys. A: Math. Gen. 31, L105 (1998). https://doi.org/10.1088/0305-4470/31/5/003
[25] O. Schramm, A Percolation Formula, Electron. Commun. Probab. 6, 115 (2001). https://doi.org/10.1214/ECP.v6-1041
[26] O. Schramm, Scaling limits of loop-erased random walks and uniform spanning trees, Israel J. Math. 118, 221 (2000). https://doi.org/10.1007/BF02803524
[27] A. de Azevedo-Lopes, A. R. de la Rocha, P. M. C. de Oliveira, and J. J. Arenzon, Dynamical cluster size heterogeneity, Phys. Rev. E 101, 012108 (2020). https://doi.org/10.1103/PhysRevE.101.012108
[28] O. Mazzarisi, A. de Azevedo-Lopes, J. J. Arenzon, and F. Corberi, Maximal diversity and Zipf's law, Phys. Rev. Lett. 127, 128301 (2021). https://doi.org/10.1103/PhysRevLett.127.128301
http://arxiv.org/abs/2306.04123v1
20230607033803
Retrosynthesis Prediction with Local Template Retrieval
[ "Shufang Xie", "Rui Yan", "Junliang Guo", "Yingce Xia", "Lijun Wu", "Tao Qin" ]
cs.AI
[ "cs.AI", "cs.LG" ]
Retrosynthesis, which predicts the reactants of a given target molecule, is an essential task for drug discovery. In recent years, machine learning based retrosynthesis methods have achieved promising results. In this work, we introduce RetroKNN, a local reaction template retrieval method to further boost the performance of template-based systems with non-parametric retrieval. We first build an atom-template store and a bond-template store that contain the local templates in the training data, then retrieve from these stores with a k-nearest-neighbor (KNN) search during inference. The retrieved templates are combined with neural network predictions as the final output. Furthermore, we propose a lightweight adapter to adjust the weights when combining neural network and KNN predictions, conditioned on the hidden representation and the retrieved templates. We conduct comprehensive experiments on two widely used benchmarks, the USPTO-50K and USPTO-MIT. For the top-1 accuracy in particular, we improve it by a relative 7.1% on the USPTO-50K dataset and 12.0% on the USPTO-MIT dataset. These results demonstrate the effectiveness of our method. § INTRODUCTION Retrosynthesis, which predicts the reactants for a given product molecule, is a fundamental task for drug discovery. The conventional methods heavily rely on the expertise and heuristics of chemists <cit.>. Recently, machine learning based approaches have been proposed to assist chemists and have shown promising results <cit.>. The typical approaches include the template-free methods that predict the reactants directly and the template-based methods that first predict reaction templates and then obtain reactants based on templates. For these different approaches, a shared research challenge is effectively modeling this task's particular property. As shown in Figure <ref>, a key property of a chemical reaction is that it is strongly related to modifying the local structure of the target molecule, such as replacing a functional group or breaking a bond. Therefore, much recent research focuses on better modeling the local structure of molecules <cit.>. Despite their promising results, we notice that it is still challenging to learn all reaction patterns with neural networks alone, especially for the rare templates. Therefore, we introduce a non-parametric retrieval-based method to provide concrete guidance in prediction. Specifically, we use a local template retrieval method, the k-nearest-neighbor (KNN) method, to provide additional predictions that improve the prediction accuracy. Following LocalRetro <cit.>, we first take a trained graph neural network (GNN) for the retrosynthesis task and offline build an atom-template and a bond-template store that contain reaction templates (Section <ref>). During this store construction phase, we iterate over all target molecules in the training data and add the templates of each atom and each bond to the corresponding store. The templates are indexed by the hidden representations extracted by the GNN. During inference, for a given new target molecule, we first use the original GNN to extract the hidden representations as well as the original GNN predicted templates. Then, we use the hidden representations to search the two stores to retrieve local templates similar to the query.
The GNN predicted templates and the KNN retrieved templates are merged with different weights to build the final output. Combining the GNN and KNN predictions is one key design factor in the above processes. The conventional way is to use fixed parameters to aggregate these predictions for all reactions, which may be sub-optimal and hurt the model's generalization <cit.>. Because each prediction may have a different confidence level, it would be beneficial to assign the weights adaptively for each reaction across different instances (Section <ref>). Therefore, we employ a lightweight adapter to predict these values conditioned on the GNN representations and the retrieved results. The adapter network has a simple structure and is trained with a few samples. Although the adapter has a little extra cost, it can help improve the model performance effectively. To sum up, our contribution is two fold: * We propose RetroKNN, a novel method to improve the retrosynthesis prediction performance with local template retrieval by the non-parametric KNN method. * We propose a lightweight meta-network to adaptively control the weights when combining the GNN and KNN predictions. We conduct experiments on two widely used benchmarks: the USPTO-50K and USPTO-MIT. These datasets contain organic reactions extracted from the United States Patent and Trademark Office (USPTO) literature. On the USPTO-50K dataset, we improve the top-1 accuracy from 53.4 points to 57.2 points (7.1% relative gain) and achieved new state-of-the-art. Meanwhile, on USPTO-MIT, we improve the top-1 accuracy from 54.1 points to 60.6 points (12.0% relative gain). Moreover, our method shows promising results on the zero-shot and few-shot datasets, which are challenging settings for conventional template-based methods yet essential for this research field. These results demonstrate the effectiveness of our method. § METHOD §.§ Preliminaries We denote a molecule as a graph G(V, E) where the V is the node set and the E is the bond set. Given a target molecule M as input, the retrosynthesis prediction task is to generate molecules set R that are reactants of M. Instead of directly predicting R, we follow LocalRetro <cit.> that predict a local reaction template t at reaction center c and apply (t, c) to molecule M. More specifically, the t is classified into two types: atom-template t ∈T_a and bond-template t∈T_b, depending whether c is an atom or a bond. We also assume that there are a training set D_train, a validation set D_val, and a test set D_test available. Each data split contains the target and corresponding reactants, which is formulated as D = { (M_i, t_i, c_i, R_i)}_i=1^|D| where c_i is the reaction center of M_i to apply the template t_i and |D| is the data size of D. Meanwhile, we assume a GNN model trained on D_train exist. Without loss of generality, we split the GNN into two parts: a feature extractor 𝐟 and a prediction head 𝐡. The feature extractor 𝐟 takes a molecule graph G(V, E) as input and output hidden representations h_v for each node v ∈V and h_e for each edge e ∈E. The h_v and h_e are processed by prediction head 𝐡 to predict the probability distribution over the template set T_a and T_b, respectively. §.§ Store Construction [tb] store construction algorithm Training data D_train. Feature extractor 𝐟. Atom store S_A and bond store S_B. Let S_A ∅, S_B∅ *[l]Initialize. (M, t, c, R) ∈D_train Let V denotes the node set of M Let E denotes the edge set of M v ∈V *[l]Loop each node. 
Let h_v 𝐟(v | M) v == c Let S_A S_A ∪{ (h_v, t) } Let S_A S_A ∪{ (h_v, 0) } e ∈E*[l]Loop each edge. Let h_e 𝐟(e | M) e == c Let S_B S_B ∪{ (h_e, t) } Let S_B S_B ∪{ (h_e, 0) } return S_A, S_B Our method uses two data store S_A and S_B that contain the information of atoms and bonds. Both of the store are constructed offline before inference. Inside the store are key-value pairs that are computed from D_train and the construction procedure details are in Algorithm <ref>. In this algorithm, the first step is to initialize the atom store S_A and bond store S_B as an empty set. Next, for each reaction in the training data D_train, we iterate all nodes v ∈V and all edges e ∈E of the target molecule M in line 5 to 13 and line 14 to 22, respectively. For each node v, if it is the reaction center, we add template t that indexed by the hidden representation h_v to the S_A. Otherwise, we add a special token 0 to indicate that template is not applied here. Similarly, for each edge e, we add either (h_e, t) or (h_e, 0) to the bond store S_B. Finally, we get the atom store S_A and the bond store S_B used during inference. §.§ Inference Method The overview of inference procedure is available in Figure <ref>. At inference time, given a new target molecule M, we first compute the hidden representation h_v, h_e and template probability P_GNN(t_a | M, a), P_GNN(t_b|M, b) for each atom a and bond b, respectively[Whenever possible, we omit the subscript of node and edge id to simplify the notations.]. Next, we retrieve the templates for each node and edge, which can be written as P_KNN(t_a|M, a) ∝∑_ (h_i, t_i) ∈N_a𝕀_t_a = t_iexp( -d(h_a, h_i) /T_A), P_KNN(t_b|M, b) ∝∑_ (h_i, t_i) ∈N_b𝕀_t_b = t_iexp( -d(h_b, h_i) /T_B). In Equations (<ref>, <ref>), the N_a, N_b are candidates sets that retrieved from S_A, S_B, the 𝕀 is the indicator function that only outputs 1 when the condition (i.e., t_a = t_i or t_b = t_i) is satisfied, and the T_A, T_B are the softmax temperate. Meanwhile, the d(·, ·) is the distance function to measure the similarity between h_i with h_v or h_e. In another words, the P_KNN(t_a|M, a) is proportional to the sum of the weights of the neighbours whose template is t_a. Finally, we combine the GNN output and KNN output with interpolation factors λ, which is P(t_a|M,a) = λ_a P_GNN(t_a | M,a) + (1 -λ_a)P_KNN(t_a|M,a), P(t_b|M,b) = λ_b P_GNN(t_b | M,b) + (1-λ_b)P_KNN(t_b|M,b). In the Equation (<ref>)-(<ref>), the temperature T_A, T_B ∈ℝ^+ and the interpolation factors λ_a, λ_b ∈ [0, 1] are predicted by the adaptor network and details are introduced in Section <ref>. In Figure <ref>, we only illustrate one node and one bond retrieval as examples, but in practice, we conduct such a process for all atoms and bonds. Following LocalRetro <cit.>, after we get the P(t_a|M,a) and P(t_b|M,b) for each atom a and bond b, we will rank all non-zero predictions by their probability. The atom template and bonds templates are ranked together, and the top 50 predictions are our system's final output. §.§ Adaptor Network To adaptively choose the T_A, T_B, λ_a, and λ_b for each atom and bond, we design a lightweight network to predict these values. The input to adapter are hidden representation h_v, h_e from GNN side and distance list d(h_v, h_i), d(h_e, h_i) from the KNN side. We use a one-layer GNN followed by a few fully connected (FC) layers for the network architecture. 
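Before specifying that architecture in detail, the store-construction and retrieval steps above can be condensed into a short sketch. This is a simplified illustration rather than the released implementation: the data-loader format, the feature_extractor call, and the other helper names are assumptions made for exposition, and the interpolation weight λ and temperature T are taken as given here (they are produced by the adapter described next).

```python
import faiss
import numpy as np
import torch

def build_stores(model, train_loader, dim):
    """Offline pass over the training data: one flat L2 index per store, plus a
    parallel array holding the template id (the special token 0 meaning "no
    reaction here") of every stored atom or bond representation."""
    atom_index, bond_index = faiss.IndexFlatL2(dim), faiss.IndexFlatL2(dim)
    atom_tpl, bond_tpl = [], []
    with torch.no_grad():
        for mol_graph, atom_labels, bond_labels in train_loader:   # assumed loader format
            h_atoms, h_bonds = model.feature_extractor(mol_graph)  # assumed API of the extractor f
            atom_index.add(h_atoms.cpu().numpy().astype("float32"))
            bond_index.add(h_bonds.cpu().numpy().astype("float32"))
            atom_tpl += atom_labels.tolist()
            bond_tpl += bond_labels.tolist()
    return (atom_index, np.asarray(atom_tpl)), (bond_index, np.asarray(bond_tpl))

def knn_template_probs(h, store, n_templates, k=32, T=10.0):
    """Each of the k neighbours votes for its template with weight exp(-d / T);
    faiss returns squared L2 distances, which play the role of d(., .) here."""
    index, templates = store
    dist, idx = index.search(h.cpu().numpy().astype("float32")[None, :], k)
    weights = np.exp(-dist[0] / T)
    probs = np.zeros(n_templates)
    np.add.at(probs, templates[idx[0]], weights)
    return probs / probs.sum()

def combine(p_gnn, p_knn, lam):
    """Final template distribution: lam * GNN prediction + (1 - lam) * KNN prediction."""
    return lam * p_gnn + (1.0 - lam) * p_knn
```

At inference time, knn_template_probs would be evaluated once per atom against the atom store and once per bond against the bond store, and the resulting non-zero predictions ranked exactly as described above.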
We use the the graph isomorphism network (GIN) with edge features  <cit.> layer to capture both node feature h_v and edge feature h_e, which is formulated as: h_v^(g) = W_vg((1+ϵ)h_v + ∑_e∈E(v)ReLU(h_v + h_e)) + b_vg, where the h_v^(g) is the output, ϵ and W are learnable parameters of GIN, and the E(v) is the set of edges around v. Meanwhile, we use the FC layer to project the KNN distances to extract the features that can be formulated as h_v^(k) = W_vk({d(h_v, h_i)}_i=1^K) + b_vk, h_e^(k) = W_ek({d(h_e, h_i)}_i=1^K) + b_ek, where the brackets {·}_i=1^K means building a K-dimensional vector. Finally, the feature from GNN and KNN are combined to a mixed representation, which are h_v^(o) = ReLU( W_voReLU(h_v^(g)‖ h_v^(k)) + b_vo), h_e^(o) = ReLU( W_eoReLU(h_es^(g)‖ h_et^(g)‖ h_e^(k)) + b_eo), where the ‖ denotes tensor concatenation and es and et are start and end node of edge e. The T_A, λ_a are predicted by h_v^(o) and the T_B, λ_b are predicated by h_e^(o) by another FC layer. We also use sigmoid function σ to guarantee the λ_a, λ_b ∈ (0, 1) and clamp the T_A, T_B into range [1, 100]. Formally, we have T_A = max(1, min(100, W_ta h_v^(o) + b_ta)), λ_a = σ (W_la h_v^(o) + b_la), T_B = max(1, min(100, W_tb h_e^(o) + b_tb, 1, 100)), λ_b = σ(W_lb h_e^(o) + b_lb). Because all the formulas used here are differentiable, we optimize the adapter parameters W with gradient decent to minimize the template classification loss L_M = - 1/|V|∑_a ∈Vlog P(t̂_a|M, a) - 1/|E|∑_b ∈Elog P(t̂_b|M, b), for each target molecule M with node set V and edge set E. The P(t̂_a|M), P(t̂_b|M) are computed by Equation (<ref>) and Equation (<ref>). The t̂_a, t̂_b are the ground truth template. § EXPERIMENTS §.§ Experimental Settings Data. Our experiments are based on the chemical reactions extracted from the United States Patent and Trademark Office (USPTO) literature. We use two versions of the USPTO benchmark: the USPTO-50K <cit.> and USPTO-MIT <cit.>. The USPTO-50K contains 50k chemical reactions, split into 40k/5k/5k reactions as training, validation, and test, respectively. Meanwhile, the USPTO-MIT consists of about 479k reactions, and the split is 409k/40k/30k. All the partitions are the same as in previous works <cit.> to make fair comparisons. We also use the preprocess scripts by <cit.> to extract the reaction templates from these reactions, which leads to 658 and 20,221 reaction templates in USPTO-50K and USPTO-MIT. Implementation details. We follow the same model configuration as LocalRetro <cit.> to build the backbone GNN model. The feature extractor 𝐟 is a 6-layer  <cit.> followed by a single layer <cit.> with 8 heads. We use the hidden dimension 320 and dropout 0.2. The atoms' and bonds' input feature is extracted by DGL-LifeSci <cit.>. The prediction head h consists two dense layers with ReLU activation. The backbone model is optimized by Adam optimizer with a learning rate of 0.001 for 50 epochs. We also early stop the training when there is no improvement in the validation loss for five epochs. The configurations for backbone are all same as <cit.>. The implementation of KNN is based on the faiss <cit.> library with index for fast embedding searching, and the K of KNN is set to 32. For the adapter network, we use the same hidden dimension as the backbone GNN. The adapter is also trained with Adam optimizer with a learning rate of 0.001. Considering the data size difference, we train the adapter for ten epochs and two epochs on the validation set of the USPTO-50K and USPTO-MIT datasets, respectively. 
The adapter with the best validation loss is used for test. Evaluation and baselines Following previous works, our system will predict top-50 results for each target molecule and report the top-K accuracy where K=1,3,5,10, and 50 by the script from <cit.>. We also use representative baseline systems in recent years, include: ∙ Template-based methods: retrosim <cit.>, neuralsym <cit.>, GLN <cit.>, Hopfield <cit.>, and LocalRetro <cit.>; ∙ Semi-template based methods: G2Gs <cit.>, RetroXpert <cit.>, and GraphRtro <cit.>; ∙ Tempate-free methods: Transformer <cit.>, MEGAN <cit.>, Chemformer <cit.>, GTA <cit.>, and DualTF <cit.>. §.§ Main Results The experimental results of the USPTO-50K benchmark are shown in Table <ref> when the reaction type is unknown and in Table <ref> when the reaction type is given. Meanwhile, the results on the USPTO-MIT benchmark are in Table <ref>. In these tables, we sort all systems by their top-1 accuracy and mark their type by filling the cycle symbols. Our method (RetroKNN) is in the last row and highlighted in bold. Comparing these accuracy numbers, we can find that our method outperforms the baseline systems with a large margin. When the reaction type is unknown, we achieved 57.2 points top-1 accuracy and improved the backbone result from LocalRetro by 3.8 points, which is a 7.1% relative gain. When the reaction type is given, we also improve the top-1 accuracy by 2.8 points from 63.9 to 66.7. Meanwhile, on USPTO-MIT, our method shows 60.6 points top-1 accuracy with a 6.5 points improvement or 12% relative gain. More importantly, these top-1 accuracies are also better than other strong baselines and state-of-the-art, demonstrating the effectiveness of our method. At the same time, we achieved 78.9 points top-3 accuracy and 86.4 points accuracy in USPTO-50K when the reaction type is unknown, which are also much higher than baselines. For the top-10 and top-50 accuracy, we get 92.7 and 98.1 points accuracy. Considering that the accuracy is already very high, the improvement is still significant. To sum up, the local template retrial method efficiently improves the retrosynthesis prediction accuracy. § STUDY AND ANALYSIS §.§ Case Study Retrieval case study. To better understand if we can retrieve useful reactions by the hidden representations, we conducted case studies on the USPTO-50K datasets, and the results are shown in Figure <ref>. We fist select an atom-template reaction and the first bond-template reaction from the data. Next, we query the atom and bond store by the corresponding atom and bond. Finally, for each retrieved template, we show the original target molecule in the training data, where the reaction atom/bond is highlighted by green background. The bond-template and atom-template reactions are available in the figure's first and second rows. In each row, we first show the target molecule M of the reaction and then five neighbors of M. From these cases, we can find that the neighborhoods retrieved by hidden representations can effetely capture the local structure of molecules. For example, the carbon-nitrogen bond retrieves all neighbors in the edge-template reaction. Moreover, all carbon atoms are surrounded by oxygen in a double bond () and a trifluorocarbon (), and all nitrogen atoms are connected to an aromatic ring. Meanwhile, for the node-template reaction, all retrieved atoms are the oxygen atoms that are connected to a phenyl. 
In conclusion, retrieving molecules with hidden representations is efficient because it can capture the local structure well. Therefore, we can improve the prediction accuracy by using the retrieved templates. Adapter case study. We show three representative cases for the effect of adapter in Table <ref>. In each row, we show the target molecule and ground truth template id, then the λ and T output by the adapter, and finally the GNN prediction and KNN retrieved neighbors. When the GNN prediction is accurate in the first row, the adapter will generate a high λ value (e.g., 0.96) so that the GNN output has a higher weight. However, when that is not the case (the second and third row), the λ tends to be lower (e.g., 0.14), which gives more weight to KNN prediction. Meanwhile, when only the N1 has the correct prediction (the second row), the adapter tends to output a small T (e.g., 7.89) to make the sharp distribution that gives more weight to N1's prediction. On the contrary (the third row), the adapter tends to output a larger value (e.g., 19.36) so that more neighbors can contribute to the final output. Moreover, our statistics show that when λ < 0.5, the GNN and KNN accuracy are 46.9% and 69.2%, showing that KNN is complementary to GNN prediction. §.§ Zero-shot and Few-shot Study We modify the USPTO-50K dataset to zero-shot and few-shot versions to study the domain adaptation ability of our method. Specifically, in the USPTO-50K data, each reaction has its reaction class available in class 1 to 10. To build the zero-shot data, we filter the train and validation data by removing all reactions with reaction class 6 to 10 and only keeping those with reaction class 1 to 5. Similarly, to build the few-shot data, we only keep 10% of reactions that have class 6 to 10. Finally, we evaluate the performance of these new data with the LocalRetro baseline and our RetroKNN method. The results are summarized in Figure <ref>. From these plots, we notice that zero-shot is a challenging setting for conventional template-based methods, which is a known shortcoming of this kind of methods. However, when combined with KNN, our system can generate meaningful results. For example, in reaction class 8, the RetroKNN haves 6.1 points top-5 accuracy and 9.8 points top-10 accuracy in the zero-shot data. The few-shot setting is easier than the zero-shot because a few examples are available during training. Nevertheless, the RetroKNN also outperforms baseline on all reaction types. On average, the RetroKNN improved 8.56 points top-5 accuracy and 5.64 points top-10 accuracy. These results show that our method is can also improve the performance on zero/few-shot data, which are important scenarios in this field. §.§ Ablation Study We conducted an ablation study on the USPTO-50K dataset to study the contributions of different components, and the results are shown in Table <ref>. We show the top-1 accuracy in the table by comparing different systems. The system 1 is the LocalRetro baseline without using KNN, which achieved 53.4 points accuracy. In system 2, we add the KNN without using the adapter. To find the optimal paramters, we conduct comprehensive grid search on by T ∈{1, 5, 25, 50} and λ∈{0.1, 0.3, 0.5, 0.7, 0.9}, which leads to total 20 combinations. We select the parameters by the validation loss and finally get the 56.3 points accuracy. Furthermore, in system 3, we add the adapter only for T and keep the λ same as system 2. Similarly, we only add the adapter only for λ in system 4. 
The system 5 is the full RetroKNN model. Comparing the system 1 with others that using KNN, we can find that introducing KNN to this task can effectively improve the model performance. These numbers show that the local template retrieval is vital for the system. Meanwhile, comparing system 34 to system 2, we notice that adding both T and λ adapter is helpful. Finally, when both parameters are adaptively predicted in system 5, the accuracy can be boosted to 57.2, showing that they can work together effectively. Therefore, all components are necessary for this system. §.§ Retrieved Templates Size In Table <ref>, we show how the number of retrieved reactions (i.e., K of KNN) affects the model performance. More specifically, in the KNN search, we set the K ∈ [1, 4, 8, 16, 32], then train adapters for each of them. Finally, we report the top-1 accuracy in the table. From these results, we first observe that only adding one retrieved template (K=1) can improve the accuracy from 53.4 to 55.6. When K is ≥ than 4, the accuracy can be further improved to around 57 points. There will be no further significant improvement when more reactions are retrieved, nor will more received templates hurt the performance. We suppose it is because there is already enough information to improve the accuracy as the templates far from the query will contribute less to the prediction. §.§ Inference Latency In Table <ref>, we study the datastore size and the inference latency. The last two rows present the latency with or without retrieval during inference, which are measured on a machine with a single NVIDIA A100 GPU. Each latency value, which is the average run time per reaction, is measured with ten independent runs. In the USPTO-50K dataset, we observe that the average latency increased from 2.71 ms to 3.31 ms, which is about 0.6 ms for each reaction. The extra latency is a little more prominent for the USPTO-MIT dataset because it is about ten times larger than the USPTO-50K. However, considering the hours or even days that a more accurate system can save for chemists, the extra ten-millisecond cost is not a real obstacle to the practical use of this method. Finally, some work <cit.> show that the KNN speed can be further accelerated, and we would like to add these techniques in future work. § RELATED WORK §.§ Retrosynthesis Prediction Retrosynthesis prediction is an essential task for scientific discovery and have achieved promising results in recent years <cit.>. A few research also use retrieval mechanisms for this task. For example, <cit.> use Hopfield networks to select templates, and <cit.> use retrieval method to fetch molecules from a database. Being differently, we are the first to combine deep learning and KNN retrieval in this task. §.§ Retrieval Methods Retrieving from data store or memory to improve the machine learning model's performance is an important research topic. SVM-KNN <cit.> first combines the SVM and KNN for recognition tasks. Furthermore, the KNN-LM <cit.> and KNN-MT <cit.> have shown promising results when combining KNN with Transformer networks. Meanwhile, <cit.> study the speed of retrival methods and <cit.> study the adaptation problem. However, we are the first to combine the strong capability of KNN with GNN and use them on the retrosynthesis task. § CONCLUSION Retrosynthesis prediction is essential for scientific discovery, especially drug discovery and healthcare. In this work, we propose a novel method to improve prediction accuracy using local template retrieval. 
We first build the atom and bond stores with the training data and a trained GNN and retrieve templates from these stores during inference. The retrieved templates are combined with the original GNN predictions to make the final output. We further leverage a lightweight adapter to adaptively predict the weights to integrate the GNN predictions and retrieved templates. We greatly advanced the prediction performance on two widely used benchmarks, the USPTO-50K and USPTO-MIT, reaching 57.2 and 60.6 points for top-1 accuracy. These results demonstrate the effectiveness of our methods. § ACKNOWLEDGEMENTS We would like to thank the anonymous reviewers for their insightful comments. This work was supported by National Natural Science Foundation of China (NSFC Grant No. 62122089 and No. 61876196), Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098, and Intelligent Social Governance Platform, Major Innovation & Planning Interdisciplinary Platform for the “Double-First Class” Initiative, Renmin University of China. We also wish to acknowledge the support provided and contribution made by Public Policy and Decision-making Research Lab of RUC. Rui Yan is supported by Beijing Academy of Artificial Intelligence (BAAI).
http://arxiv.org/abs/2306.08175v1
20230613234253
DCTX-Conformer: Dynamic context carry-over for low latency unified streaming and non-streaming Conformer
[ "Goeric Huybrechts", "Srikanth Ronanki", "Xilai Li", "Hadis Nosrati", "Sravan Bodapati", "Katrin Kirchhoff" ]
eess.AS
[ "eess.AS", "cs.AI", "cs.LG", "cs.SD" ]
Conformer-based end-to-end models have become ubiquitous these days and are commonly used in both streaming and non-streaming automatic speech recognition (ASR). Techniques like dual-mode and dynamic chunk training helped unify streaming and non-streaming systems. However, there remains a performance gap between streaming with a full and limited past context. To address this issue, we propose the integration of a novel dynamic contextual carry-over mechanism in a state-of-the-art (SOTA) unified ASR system. Our proposed dynamic context Conformer (DCTX-Conformer) utilizes a non-overlapping contextual carry-over mechanism that takes into account both the left context of a chunk and one or more preceding context embeddings. We outperform the SOTA by a relative 25.0% word error rate reduction, with a negligible latency impact due to the additional context embeddings. Index Terms: end-to-end speech recognition, unified streaming, Conformer, low latency, dynamic context carry-over § INTRODUCTION Recently, end-to-end automatic speech recognition (ASR) systems such as attention-based encoder-decoder <cit.>, CTC <cit.> and Transducer <cit.> have become popular due to their simplicity in combining pronunciation, language and acoustic models into one neural network. Although these state-of-the-art (SOTA) models perform well in full-contextual (i.e. non-streaming) situations, they experience a decline in performance when used in real-time streaming scenarios due to the lack of future context <cit.>. Likewise, performance degrades to an even greater extent when streaming with a limited instead of a full past context <cit.>. These two sources of streaming performance degradation hold both for purely streaming <cit.> and the more recent unified ASR systems <cit.>. The latter systems unify streaming and non-streaming into a single model, which helps reduce development, training and deployment cost. While some studies like <cit.> have explored solutions to mitigate the lack of future context, we focus on the challenge of limited past context as we are particularly interested in low latency systems. Similarly, we constrain ourselves to unified ASR systems for their advantages highlighted earlier. A commonly explored solution to overcome the limitation of a restricted past context in block-processing models <cit.> is to store and propagate the history context. Wu et al. <cit.> introduce a context-aware inheritance mechanism in the self-attention layers of a Transformer <cit.>.
A context embedding is appended as extra frame to each chunk before the self-attention and is handed over from one chunk/layer to another to help encode not only local acoustic information but also global linguistic, channel and speaker attributes. Related works consider multiple history embeddings with the introduction of memory banks. The self-attention unit of the Augmented Memory Transformer <cit.> attends on a short segment of the input sequence and a bank of memories that stores the embedding information from all previous processed segments. In Emformer <cit.>, the long-range history context is distilled into an augmented memory bank in between the self-attention and feed-forward layers. While all these works made some great progress, there still exists a gap between streaming with a full and limited past context. Plus, none of these works have considered the unified streaming and non-streaming setting. In this work, we tackle the streaming performance degradation for unified ASR systems when using a limited chunk's left context. We propose the dynamic context Conformer (DCTX-Conformer) that builds on and enhances the SOTA with next 5 contributions: (1) We incorporate the contextual carry-over (CCO) mechanism of <cit.> in a SOTA unified ASR Conformer system <cit.>; (2) We improve upon the CCO mechanism by integrating a dynamic dependency on a chunk's left context; (3) We improve upon the CCO mechanism by adding a dynamic dependency on the number of context embeddings; (4) We conduct experiments using the dynamic CCO mechanism in the lower latency, non-overlapping streaming mode without any look-ahead frames; (5) We conduct an exhaustive experimental study of our model on different chunk sizes, various chunk's left contexts and multiple context embeddings. The results on numerous datasets and many different settings demonstrate the effectiveness and robustness of our proposed model. § APPROACH AND RELATED WORK In this work, we improve upon the CCO mechanism of <cit.> and integrate it in a SOTA unified ASR Conformer system <cit.>. We consider a joint CTC<cit.>-attention framework <cit.> for training our unified models. §.§ End-to-end unified ASR For unified ASR models to perform well in both streaming and non-streaming settings, they must be exposed to both limited (i.e. streaming) and full (i.e. non-streaming) contexts during training. To accomplish this, <cit.> propose a dynamic chunk training (DCT) for self-attention layers which involves varying the chunk size dynamically at training time. As in <cit.>, we randomly sample a chunk size between 8 (= 320ms) and 32 (= 1280ms) down-sampled self-attention frames 60% of the time and run a full-contextual training the remaining 40%. Moreover, we use the same dynamic left context mechanism that allows to vary the left context between zero and all preceding chunks so that the model becomes robust to numerous left context sizes at inference time. The downside of <cit.> and other (unified) streamable systems is that there remains a non-negligible gap between streaming with a full and limited past context. To overcome this limitation, we integrate our proposed dynamic CCO mechanism that we discuss in next subsection. §.§ Dynamic contextual carry-over mechanism The authors of <cit.> extend the streaming Transformer model with a context-aware inheritance mechanism (Fig. <ref>). Context embeddings are passed on from one layer/chunk to the next. 
An experimental study in <cit.> shows that taking the average of each chunk as context embedding in the first layer provides the best results. As opposed to the original work <cit.>, we integrate the CCO mechanism in a unified Conformer trained with dynamic chunk sizes. The Conformer is generally considered to be a better alternative than the Transformer for (unified) streaming ASR <cit.>, while the DCT approach makes the model more robust to different chunk sizes at inference time. Lastly, unlike <cit.> that perform block processing with overlapping chunks, we stream in the lower latency non-overlapping manner. Besides these differences in architecture and applications, we make 2 contributions to the mechanism. Firstly, we propose to keep a dynamic dependency on preceding chunks despite the presence of context embeddings. We demonstrate that this combination leads to significant performance gains for insignificant latency drops. The dynamic dependency consists in varying the chunk's left context size at training time. The context embedding is in that case handed over from the chunk that precedes the left context chunks and no longer from the one directly preceding the current chunk. This dynamic training trick allows to vary the amount of left context needed at inference time depending on the application's latency requirements. Fig. <ref> shows the design of the self-attention mask in the use-case of 4 non-overlapping chunks of size 4 with the left context size set to 1 chunk. We formalise the novel dynamic CCO process next. Let U_i denote the down-sampled chunks that are passed to the first self-attention layer, and c_i denote the corresponding context embeddings. The attention computation takes the embedding dimension d, queries Q, keys K, values V and Mask of Fig. <ref> as input: Attention(Q, K, V) = Softmax(Mask(QK^T)/√(d))V, with the Mask adapting Q, K and V as follows: For layer 1: Q^1_b = [U_b, c^0_b], K^1_b = V^1_b = [U_b-LC:b, c^0_b], where b denotes the chunk (or block) number, LC denotes the number of left context chunks and U_b-LC:b denotes the concatenation of chunks U_b-LC to U_b (included). For layer n > 1: Q^n_b = [Z^n-1_b, c^n-1_b], K^n_b = V^n_b = [c^n-1_b-LC-1, Z^n-1_b-LC:b, c^n-1_b], where Z^n_b is the output of the n^th encoder layer of chunk (or block) b. Secondly, inspired by the idea of memory banks <cit.>, we show further improvements by relying on more than one preceding context embedding at inference time. For instance, in Fig. <ref>, output chunk #4 would depend on all frames of chunks #3 and #4 and context embeddings #1 (dashed orange squares), #2 (orange squares) and #4 (dark gray squares). This additional dependency on the preceding context embedding #1 is set at inference time only, as opposed to <cit.>. The self-attention for chunk b>LC+1 and layer n>1 is still trained relying on one preceding context embedding c^n-1_b-LC-1 only (i.e. the dashed orange squares are set to 0 at training time). If we assume to depend on N_ctx preceding (successive) context embeddings, the calculation of the keys and values becomes: K^n_b = V^n_b = [c^n-1_b-LC-N_ctx:b-LC-1, Z^n-1_b-LC:b, c^n-1_b] Unlike <cit.>, our queries only depend on the actual chunk and not on any patched left or right context. Plus, our keys and values do not contain any redundant history information as we only include the context embeddings that summarize the context before the chunk's patched left context. Similarly to the keys, they do not contain any patched right context. 
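To make the masking pattern of Fig. <ref> and the key/value composition of Eqs. (<ref>)-(<ref>) concrete, the sketch below builds the boolean self-attention mask for a single utterance, with the acoustic frames laid out chunk by chunk and one context-embedding slot per chunk appended at the end of the sequence. It is an illustration of the mechanism rather than the actual training code, and the function and argument names are illustrative only.

```python
import numpy as np

def dctx_attention_mask(n_chunks, chunk_size, left_chunks, n_ctx=1):
    """Boolean mask (True = may attend). Positions 0 .. n_chunks*chunk_size - 1 are
    acoustic frames in chunk order; the last n_chunks positions are the per-chunk
    context embeddings. Each chunk sees its own frames, left_chunks preceding chunks,
    its own context embedding, and the n_ctx context embeddings that summarize the
    history before the patched left context (n_ctx = 0 reproduces the first layer,
    which has no inherited context yet)."""
    n_frames = n_chunks * chunk_size
    size = n_frames + n_chunks
    mask = np.zeros((size, size), dtype=bool)
    for b in range(n_chunks):
        # queries of block b: its own frames plus its context embedding c_b
        rows = list(range(b * chunk_size, (b + 1) * chunk_size)) + [n_frames + b]
        lo = max(0, b - left_chunks)
        cols = list(range(lo * chunk_size, (b + 1) * chunk_size))            # Z_{b-LC..b}
        cols.append(n_frames + b)                                            # c_b
        first_ctx = max(0, b - left_chunks - n_ctx)
        cols += [n_frames + j for j in range(first_ctx, b - left_chunks)]    # c_{b-LC-n_ctx..b-LC-1}
        mask[np.ix_(rows, cols)] = True
    return mask

# The 4-chunk, chunk-size-4, one-left-chunk configuration of Fig. 2:
print(dctx_attention_mask(4, 4, left_chunks=1, n_ctx=1).astype(int))
```

Setting left_chunks = 0 removes the patched left context entirely, while increasing n_ctx exposes several preceding context embeddings, which is the inference-time variant studied later.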
More importantly, in <cit.> memory bank entries are recomputed at every layer as an average projection of the chunk. In our CCO mechanism, context embeddings are only initialised as a chunk's average in the first layer. This gives the model more freedom to learn superior contextual representations in subsequent layers. Every intermediate contextual embedding c^n_b also explicitly depends on context embeddings c^n-1_b-LC-1 and c^n-1_b, allowing to better model interactions between them than the memory banks in <cit.> where this explicit interaction does not exist. § EXPERIMENTAL SETTINGS §.§ Datasets Training We consider 3 different speech corpora varying in size for training our models: (1) The open-source LibriSpeech <cit.> corpus, for which we combine train-clean-100, train-clean-360 and train-other-500 to have 960 hours of training data; (2) A small-scale 1k hour English corpus and (3) a large-scale 10k hour superset, sampled from in-house paired audio and text data. Both corpora include audio files with a good mix of accents, speakers, sampling rates and background noise. These 3 data regimes are representative of a wide range of end-to-end ASR systems for various speech applications. Evaluation For the LibriSpeech experiments, we evaluate our models on test-clean and test-other, whose average utterance length is 7s. For the small- and large-scale experiments we use the following diverse public test sets: (1) MTDialogue[https://github.com/Phylliida/Dialogue-Datasets]: Collection of movie and Twitter data. The dataset is 1.2h long and the average utterance length is 3s; (2) Wall Street Journal (WSJ): We use WSJ's eval_test92 <cit.>, prepared using Kaldi's <cit.> WSJ recipe. The dataset is 0.7h long and the average utterance length is 8s; (3) Voxpopuli <cit.>: We use the English test partition. The dataset is 4.9h long and the average utterance length is 10s. §.§ Setup Training We use a Conformer as the encoder and a shallow single-layer Transformer as the attention-based decoder. The inputs are 80 dimensional log-mel features extracted with 25ms FFT windows and 10ms frame shifts. For the LibriSpeech experiments, we use a Conformer-12x512x8, which consists of 12 encoder layers with 512 feature dimensions and 8 self-attention heads. We use ESPnet's pre-trained 24-layered Transformer-based neural LM on the LibriSpeech-train dataset for rescoring. For the small-scale experiments, we use a Conformer-16x256x4 and a BPE embedding of size 1024. For the large-scale experiments, we use a Conformer-16x512x8 and a BPE embedding of size 2048. We train a 4-gram LM on the training text for shallow fusion. The kernel size of our convolution modules is 31. We optimise our model via the hybrid CTC and attention losses. All of our models are trained for 60 epochs with the Adam optimizer <cit.> and a warm-up learning rate scheduler. The unified ASR models are fine-tuned with DCT for 30 epochs from a full-contextual model trained for 30 epochs, as is done in <cit.>. We make use of ESPnet <cit.> and p4de.24xlarge Amazon EC2 instances that consist of 8 NVIDIA A100 Tensor Core GPUs. Evaluation We discard the attention-based decoder and use the CTC decoder to generate outputs with a CTC prefix beam search and beam size of 20 for the LibriSpeech experiments and of 50 for the small- and large-scale experiments. A CTC decoder optimises the real-time factor compared to the attention-based decoder, as the latter is non-autoregressive and needs triggered attention <cit.> for streaming inference. 
All of our streaming results are obtained via non-overlapping streaming without any look-ahead context, unless otherwise stated. § RESULTS §.§ Performance impact of chunk size In Table <ref>, we analyze the impact of the chunk size on the word error rate (WER) by comparing 3 types of models on 5 test sets: (A) A purely non-streamable model trained in a full-contextual setting (i.e. no DCT); (B) A unified SOTA model <cit.> without CCO and with full (B.1) and no (B.2) left context at inference time; (C) Our unified model with dynamic CCO and no left context at inference time. All models share the same architecture and were trained on the small-scale, large-scale and LibriSpeech datasets. The results show that our model (C) roughly maintains the full-contextual performance of models (A) and (B), except for the LibriSpeech test-other dataset, while significantly reducing the streaming performance gap between (B.1) and (B.2). We observe that the relative improvements increase as the chunk size decreases because context embeddings are increasingly impactful for smaller and therefore inferior acoustic representations. For the large-scale model with inference chunk size 320ms, we even exceed the gap. Our hypothesis is that context embeddings provide a more effective way of incorporating past context than utilizing all frames from a full past context input. Moreover, we believe that the larger training set leads to a better modelling of those extra embeddings. Their use also significantly reduces the computational memory over non-contextual models that stream with full past context as the number of keys and values in the self-attention calculations decreases substantially. Overall, we notice an average gap reduction of 24.3%, 49.5% and 109.2% across all datasets and models between no and full past context when streaming with a chunk size of 1280ms, 640ms and 320ms respectively. §.§ Performance impact of left context size As can be observed in Table <ref>, the streaming gap between the model without CCO and with full left context (B.1) and our model with dynamic CCO and with no left context (C) can still be further reduced in most settings. In Fig. <ref>, we analyze the impact of adding a chunk's left context on top of the CCO mechanism for the LibriSpeech and large-scale models when running inference with a chunk size of 640ms. The graphs indicate that the aforementioned gap not only further decreases as we add more left context, but also that with only 1 left context chunk we now outperform the large-scale model (B.1) with full left context (as we already did in Table <ref> with our large-scale model for a chunk size of 320ms and left context size 0). Taking a closer look at WSJ in particular, we observe that taking 3 left context chunks instead of none while still carrying over context leads to a WERR of 35.2% and 49.0% over the model with and without context carry-over respectively. As our models are trained with a dynamic left context, it gives the user the flexibility to easily adjust the left context size at inference time depending on the latency and memory requirements. §.§ Performance impact of context embeddings In Fig. <ref>, we showcase that relying on more than 1 preceding context embedding substantially improves the WER. We observe those improvements despite the fact that the model was trained with 1 preceding context embedding only. 
The more context embeddings we provide each chunk with, the better the model performs, except for the LibriSpeech test sets, where a minor degradation of 0.9% is observed across the two depicted left context settings when using 16 context embeddings instead of 1. For WSJ and VoxPopuli on the other hand, we observe an average WERR improvement of 36.1% and 19.5% respectively. Compared to the baseline Conformer model without CCO, those numbers even increase to 55.8% and 27.8%. For MTDialogue, we observe faster saturation and only slight improvements beyond 4 context embeddings, because the average utterance length in that dataset is small (= 3s). Therefore, for most utterances the model can only rely on a few past context embeddings. Overall, we demonstrate that our approach leads to an average 25.0% WERR improvement over the non-contextual baseline model across the datasets and the two depicted left context settings when using 16 context embeddings. §.§ LibriSpeech comparison with SOTA In Table <ref>, we compare our model (with 2 context embeddings) to the Augmented Memory Transformer <cit.> (with infinite memory bank entries and WAS <cit.>), the Emformer hybrid system <cit.> (with 4 memory bank entries and SMBR <cit.>) and the baseline Conformer model without CCO <cit.> on the LibriSpeech test sets. We provide the numbers given in the respective papers in similar settings; those numbers were however obtained in the higher-latency overlapping streaming mode with look-ahead/right context (RC). As demonstrated in <cit.>, while removing the look-ahead context improves latency, it significantly degrades the WER on the LibriSpeech test sets. Despite this, and despite our smaller overall segment (= LC+CC+RC) and our model being trained in a unified fashion, our lower-latency model performs equally well on test-clean and is competitive on test-other with respect to the SOTA. When we do use look-ahead context, we even outperform every SOTA model on test-clean and are just behind Emformer on test-other, even though our model did not see any look-ahead context during training and uses only half of the Emformer center context (CC) size. §.§ Latency study The added dependency on the chunk's left context and on the context embeddings comes with a certain latency trade-off, with the latter dependency being much smaller and independent of the chunk size. In Table <ref> we provide latency results for different non-overlapping streaming settings using the small-scale model, a 640ms chunk and the VoxPopuli dataset. The numbers demonstrate the minor impact of context embeddings on the latency. In Table <ref>, we provide latency results as a function of the number of context embeddings using the small-scale models, a 640ms chunk, a 1280ms left context and the VoxPopuli dataset. The measurements indicate the negligible latency impact of the context embeddings. § CONCLUSIONS In this work, we incorporate an improved version of the contextual carry-over mechanism into a state-of-the-art unified ASR system. We modify the contextual carry-over mechanism by integrating a dynamic dependency on both the chunk's left context size and preceding context embeddings. With an exhaustive experimental study on many datasets, we show the efficacy and robustness of our proposed approach. The results demonstrate that our DCTX-Conformer model more effectively captures the full past context with reduced latency and computational memory usage in streaming scenarios, without compromising its non-streaming performance.
http://arxiv.org/abs/2306.05805v1
20230609104232
DynaBench: A benchmark dataset for learning dynamical systems from low-resolution data
[ "Andrzej Dulny", "Andreas Hotho", "Anna Krause" ]
cs.LG
[ "cs.LG" ]
DynaBench A. Dulny et al. University of Würzburg, Germany {dulny,andreas.hotho,anna.krause}@uni-wuerzburg.de DynaBench: A benchmark dataset for learning dynamical systems from low-resolution data. Andrzej Dulny Andreas Hotho Anna Krause ======================================================================================= Previous work on learning physical systems from data has focused on high-resolution grid-structured measurements. However, real-world knowledge of such systems (e.g. weather data) relies on sparsely scattered measuring stations. In this paper, we introduce a novel simulated benchmark dataset, DynaBench, for learning dynamical systems directly from sparsely scattered data without prior knowledge of the equations. The dataset focuses on predicting the evolution of a dynamical system from low-resolution, unstructured measurements. We simulate six different partial differential equations covering a variety of physical systems commonly used in the literature and evaluate several machine learning models, including traditional graph neural networks and point cloud processing models, with the task of predicting the evolution of the system. The proposed benchmark dataset is expected to advance the state of art as an out-of-the-box easy-to-use tool for evaluating models in a setting where only unstructured low-resolution observations are available. The benchmark is available at <>. § INTRODUCTION Dynamical systems, which are systems described by partial differential equations (PDEs), are ubiquitous in the natural world and play a crucial role in many areas of science and engineering. They are used in a variety of applications, including weather prediction <cit.>, climate modeling <cit.>, fluid dynamics <cit.>, electromagnetic field simulations <cit.> and many more. Traditionally, these systems are simulated by numerically solving a set of PDEs that are theorized to describe the behavior of the system based on physical knowledge. An accurate modelling technique is crucial for ensuring accurate predictions and simulations in these applications. However, the equations used are often just an approximation of a much more complex reality, either due to the sheer complexity of a more accurate model which would be computationally infeasible or because the true equations are not known <cit.>. In recent years, several models have been proposed in the deep learning community, which address the problem of simulating physical systems by learning to predict dynamical systems directly from data, without knowing the equations a priori <cit.>. These types of approaches have a distinct advantage over classical numerical simulations, as they do not require estimating the parameters of the equations, such as the permeability of a medium or the propagation speed of a wave. To ensure that the proposed models and architectures perform and generalize well and to be able to draw a fair comparison between them, it is necessary to compare them in a common experimental setting. As there are very few real-world datasets readily available for this purpose, it is common practice to employ simulated data as a simplified but easy-to-use and available alternative to evaluate novel machine learning methods <cit.>. 
While some progress has been made towards creating a standardized benchmark <cit.> dataset of physical simulations, the previous work in this area mainly focuses on the task of reconstructing the forward operator of the numerical solver, for which the full computed solution on a high-resolution grid of the differential equation is needed as training data. This makes it difficult to assess the applicability of any approach evaluated this way on real data, where measurements are typically neither high resolution nor grid-based, but instead rely on a sparse network of measuring stations (cf. <Ref>). To achieve greater fidelity to real-world conditions, we propose a novel benchmark dataset, DynaBench, that focuses on the challenging task of predicting the evolution of a dynamical system using a limited number of measurements that are arbitrarily distributed within the simulation domain. This more closely resembles a real-world setting and allows for a more accurate assessment of the applicability of different models to real-world data. The benchmark consists of simulations of six physical systems with different properties that are commonly used as synthetic data for learning dynamical systems. The simulations have been generated using a numerical solver. Our aim is not to cover all possible physical systems, parameters, and equations but rather to provide a good starting point to develop and compare machine learning models suited for this task. The selection we propose is a combination of typical equations used to evaluate deep learning models and equations with different properties (such as order of derivatives and number of variables) that complement them. In addition, we present a detailed evaluation of various comparison models capable of learning functions on arbitrary geometries, including graph neural networks <cit.>, point cloud neural networks <cit.>, and continuous convolution models <cit.>. Our objective is to provide a set of strong baselines for further research, and thus facilitate the development and testing of new machine learning methods for predicting physical systems from unstructured low-resolution data. Our results show that the selected models are capable of providing accurate short-term predictions, but long-term forecasting remains an open challenge. With the release of DynaBench, we hope to provide a valuable resource for the machine learning community, which will facilitate research and thus advance the state-of-the-art in learning dynamical systems from data on unstructured low-resolution observations. The main contributions of our work can be summarized as follows. * We propose a new benchmark dataset for learning dynamical systems from data under the assumption that measurements are sparse and not structured on a grid. * We generate the dataset by simulating several differential equation systems typically used for the task of learning dynamical systems. * We thoroughly evaluate several models capable of learning functions on arbitrary geometries on the DynaBench dataset, including both graph neural networks and point-cloud processing models. * We release both the dataset and the code for evaluating all models, to facilitate further research in this field [The code is available at <>]. § RELATED WORK Several approaches for learning dynamical systems from grid data have been proposed in recent years, but they lack comparability as different sets of equations and simulation parameters are used. Ayed et al. 
<cit.> propose a hidden-state neural solver-based model and use a system of shallow water equations and an Euler fluid simulation to evaluate it. Long et al. <cit.> evaluate their numeric-symbolic hybrid model on the Burgers' equation, the diffusion equation and a convection-diffusion equation with a reactive source. Dulny et al. <cit.> evaluate their NeuralPDE model based on neural solvers on several PDE systems, including the advection-diffusion, Burgers' and wave equations. Li et al. <cit.> propose a resolution-invariant method based on the Fourier transform and test it on the Burgers' equation, a simplified Navier-Stokes system and steady-state Darcy flow. Similarly, authors proposing models for unstructured data (i.e. measurements not on a grid) also do not evaluate their models on a common set of systems. Karlbauer et al. <cit.> propose a graph-based recurrent model (Distana) to learn spatio-temporal processes and evaluate it on the wave propagation equation. Iakovlev et al. <cit.> use an advection-diffusion problem, as well as the heat equation and the Burgers' equation, to evaluate their graph message passing approach. Another approach proposed by Li et al. <cit.>, the multipole graph neural operator, is evaluated on steady-state Darcy flow, as well as on the viscous variant of the Burgers' equation. Recently, some progress has been made towards creating a standardized benchmark for learning PDEs from data. Huang et al. <cit.> proposed a dataset containing simulations of the incompressible Navier-Stokes equations for fluid dynamics. While the main audience of the dataset is not the machine learning community, as its central purpose is to compare different discretization and solving schemes, the data could in theory still be used to train different models for learning the solutions from data. However, it remains limited in the choice of equations, as it only uses the Navier-Stokes equations, and furthermore is not suited for evaluating models in a low-resolution regime. Otness et al. <cit.> propose a benchmark specifically aimed at learning to simulate physical systems from data. However, the simulations are discrete systems (spring systems) rather than continuous spatiotemporal processes defined by partial differential equations. For this reason they cannot be used for the intended purpose of learning continuous systems from low-resolution measurements. Takamoto et al. <cit.> propose a very extensive benchmark of eleven different equation systems called PDEBench, including fluid simulations, advection and diffusion equations, Burgers' equation and more. The authors also provide extensive experiments and evaluations for a variety of models. The benchmark is well suited for learning in a high-resolution framework, where the whole discretized grid used during numerical solving is also used for training the models. However, the selection of equations, consisting mainly of fluid simulations, is unsuitable for low-resolution predictions, as such systems show turbulent and chaotic behavior <cit.> and therefore require a high-resolution discretization. As such, PDEBench is neither suited nor easily usable in a low-resolution regime, where only a limited number of scattered observations is available. § DATASET In this section we describe the overall structure of the datasets, which equations were included in the benchmark, how the simulations were executed, and what postprocessing steps were performed. 
§.§ Setting A PDE is an equation in which an unknown function is to be found, based on the relations between the function itself and its partial derivatives in time and space. It can be summarized in the form: F(u, ∂ u/∂ t, ∂ u/∂ x, ∂ u/∂ y, ...)=0 As mentioned in <Ref>, such equations can be used to model a variety of physical systems by solving a previously known equation system using a measured initial state. In the context of scientific machine learning, a typically researched task is to reconstruct the parameters of the equation (i.e. the function F) from data obtained from a mixture of exact measurements and simulations. Reconstructing the differential equations requires high-resolution data (both in time and space), which is unavailable in a real-world setting <cit.>. Our benchmark is focused on a different task, namely learning to predict the evolution of a dynamical system from data, under the assumption that only low-resolution measurements are available. Formally, a PDE solver seeks to approximate the true solution u: Ω× T⟶ℝ by some approximation û_h: Ω̂_h×T̂_h⟶ℝ, where Ω̂_h is a high-resolution discretization of the solution domain Ω⊆ℝ^n (typically a grid) and T̂_h is a high-resolution time discretization of T⊆ℝ (typically T̂_h = {t_k^(h), k∈ℕ} for t_k^(h) := t_0 + kΔ_h t and some small Δ_h t > 0). For our task we assume that only low-resolution observations û_l at measurement locations Ω̂_l of the physical process u are available (i.e. |Ω̂_l|≪|Ω̂_h|), and that the temporal resolution T̂_l = {t_k^(l), k∈ℕ} for t_k^(l) := t_0 + kΔ_l t of the measurements is also low (Δ_h t ≪Δ_l t). The task is then to predict the evolution of the system û_l(Ω̂_l, t_k+1^(l)), û_l(Ω̂_l, t_k+2^(l)), …, û_l(Ω̂_l, t_k+R^(l)) from the past observations û_l(Ω̂_l, t_k-H^(l)), …, û_l(Ω̂_l, t_k-1^(l)), û_l(Ω̂_l, t_k^(l)). §.§ Equations Overall we curated a set of six different PDE systems, typically used in the context of learning dynamical systems from data, with various properties as summarized in <Ref>. In the following we briefly describe each equation in more detail. Advection The advection equation ∂ u/∂ t = - ∇· (𝐜u) describes the displacement of a quantity described by a scalar field u in a medium moving with the constant velocity 𝐜. It is a widely used benchmark equation due to its simplicity and straightforward dynamics <cit.>. Burgers' Equation The Burgers' equation ∂𝐮/∂ t = R(ν∇ ^2 𝐮 - 𝐮·∇𝐮) is a non-linear PDE of second order with respect to the spatial derivatives. The equation describes the speed u of a fluid in space and time, with ν representing the fluid's viscosity and R describing the rate of the simulation. It is one of the most often used equations in the context of deep learning for dynamical systems <cit.>. Gas Dynamics In gas dynamics, the system of coupled non-linear PDEs ∂ρ/∂ t = -𝐯·∇ρ - ρ∇·𝐯 ∂ T/∂ t = -𝐯·∇ T - γ T∇·𝐯 + γMk/ρ∇^2 T ∂𝐯/∂ t = - 𝐯·∇𝐯 - ∇ P/ρ + μ/ρ∇(∇𝐯) describes the evolution of temperature T, density ρ, pressure P and velocity 𝐯 in a gaseous medium. The equations are derived from the physical laws of mass conservation, conservation of energy, and Newton's second law <cit.>. The parameters specify the physical properties of the system, γ being the heat capacity ratio, M the mass of a molecule of gas, and μ the coefficient of viscosity. This system can be seen as a simplified weather model. Kuramoto-Sivashinsky The Kuramoto-Sivashinsky equation ∂ u/∂ t = - 1/2|∇ u|^2 - ∇^2u - ∇^4u describes a model of the diffusive–thermal instabilities in a laminar flame front. 
Solutions of the Kuramoto–Sivashinsky equation possess rich dynamical characteristics <cit.>, potentially including equilibria, relative equilibria, chaotic oscillations and travelling waves. Reaction-Diffusion The Reaction-Diffusion system ∂ u/∂ t = D_u∇^2 u + a_u(u - u^3 - k - v) ∂ v/∂ t = D_v∇^2 v + a_v(u-v) describes the joint concentration distribution of a two-component chemical reaction, where one of the components stimulates the reaction and the other inhibits it. The parameters D_u and D_v describe the diffusion speed of the activator and inhibitor respectively, k is the activation threshold, while a_u and a_v describe the reaction speed of the two components. The system is applicable to describing biological pattern formation and forms rich and chaotic dynamics <cit.>. Wave The wave equation ∂^2 u/∂ t^2 = ω^2 ∇ ^2 u describes the propagation of a wave in a homogeneous medium (e.g. a water surface), where u describes the displacement from equilibrium and ω represents the material-dependent speed of propagation. It is a linear, second-order PDE that has been widely used in scientific machine learning <cit.>. §.§ Simulation Parameters The machine learning task for which our benchmark has been designed is to learn predictions from observations of a physical system. The system is assumed to evolve according to a set of fixed physical laws that have constant parameters such as thermal conductivity, diffusion coefficients etc. To create simulations of such systems, we specify the constant parameters with which the selected equations are solved, as shown in <Ref>. The parameters have been chosen to ensure a good balance between the complexity of the system and the numerical stability of the simulations. The spatial domain of the simulation is set to Ω=[0, 1]×[0, 1] and the temporal domain to T=[0, 200]. We initialize the state of each system using zeros, uniform (u) or normally (n) distributed noise, or a sum of Gaussian curves, individually for each field, similar to what has been used in related work <cit.>. The exact specification of which initial condition is used for each individual variable is summarized in <Ref>. The sum of Gaussian curves has been calculated in the following manner: I(x, y) = ∑^K_i=1A_ie^-((x-μ_ix)^2+(y-μ_iy)^2)/σ^2 The positions (μ_ix, μ_iy) of each component i are sampled uniformly from the simulation domain Ω, while their contributions A_i are sampled uniformly from the interval [-1, 1]. The fixed parameters K and σ are set to 5 and 0.15 respectively. To run the simulations, the domain Ω is discretized as a 64× 64 grid, which yields a cell size of Δ x = Δ y = 0.0156. The equations are solved using the method of lines as the numerical scheme <cit.>. We use the explicit Runge-Kutta method of order 5(4) <cit.> as the numerical integrator. §.§ Postprocessing The simulation is saved with a temporal resolution of Δ t = 1, producing exactly 201 observations per simulation. As some of the equations produce non-stationary physical processes, we normalize the data to ensure that the range of values remains similar across different equations, simulations and times. Finally, we sample measurements to form the non-grid observation domain by uniformly selecting K points from the simulation domain Ω and bilinearly interpolating the values from the grid measurements. §.§ Data availability In total we generate 7000 different simulations for each equation, divided into 5000 training simulations and 1000 validation and test simulations each. 
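As an illustration, the Gaussian-sum initialisation defined in the Simulation Parameters subsection above can be sketched in a few lines of numpy; this is a simplified example of ours and not the released generation code.

import numpy as np

def gaussian_sum_ic(rng, nx=64, ny=64, K=5, sigma=0.15):
    # One initial field on the 64x64 grid over [0, 1] x [0, 1].
    x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny), indexing="ij")
    mu = rng.uniform(0.0, 1.0, size=(K, 2))   # component positions (mu_ix, mu_iy)
    A = rng.uniform(-1.0, 1.0, size=K)        # component contributions A_i
    field = np.zeros((nx, ny))
    for a, (mx, my) in zip(A, mu):
        field += a * np.exp(-((x - mx) ** 2 + (y - my) ** 2) / sigma ** 2)
    return field

u0 = gaussian_sum_ic(np.random.default_rng(seed=0))   # the seed determines the field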
For each simulation, we use a different initial seed to sample the initial condition. The benchmark is available in three different resolutions, where either K=225, K=484, or K=900 measurements are recorded. Additionally, we provide a low-resolution variant of the simulation measured on a grid with the same number of points in total: 15× 15, 22× 22, and 30× 30. The full dataset (including the original high-resolution simulations) can be downloaded from <https://zenodo.org/>[To ensure anonymity, the link to the data will be published upon acceptance. The data used for our experiments can be accessed from the following link: <https://drive.google.com/drive/folders/1IOgHdQxRxGn41mIHM3tM4pSssQjbStk9?usp=sharing>]. Alternatively, the same data can be generated from scratch using the provided source code and predefined seeds[The code is available at <>]. § EXPERIMENTS In this section we describe a selection of experiments that we performed on the DynaBench dataset. §.§ Models In the following, we briefly describe the models used during the experiments. We select several graph neural network and point cloud network baselines as a comparison for the available state-of-the-art architectures proposed for learning dynamical systems from scattered measurements: graph kernel networks and graph PDE networks. We do not include Distana <cit.> and the Multipole Graph Operator <cit.> (cf. <Ref>), as there is no code available for the former and the latter requires measurements obtained at different resolution levels and is unsuitable for our setting. Additionally, to better understand how the change of structure affects the accuracy of the predictions, we evaluate three models that work on grid data, trained on a version of the dataset using the same number of measurements but aligned on a grid, as described in <Ref>. These include two variants of a simple convolutional neural network <cit.>, with and without residual connections <cit.>, and NeuralPDE, a model specifically designed to learn dynamical systems from gridded data <cit.>. Finally, we use the persistence baseline as a reference point for all deep learning models. PointGNN is a graph neural network proposed by <cit.> to solve the task of object detection in a LiDAR point cloud. It uses MLP-based feature aggregation within a local neighborhood with an additional perturbation mechanism to offset the coordinates of the neighboring points. This increases the translation invariance of the calculated filters with respect to the center vertex coordinates. Point Transformer (Point TF) is a model originally proposed by Zhao et al. <cit.> for object classification and segmentation on 3D point clouds. It uses self-attention, similar to transformer networks, to process features within a spatially local neighborhood. We modify the original segmentation architecture to use the 2D point coordinates where the physical system has been measured. Feature-Steered Graph Convolutions (FeaStNet) is a graph convolution operator developed by Verma et al. <cit.> for 3D object analysis. It uses the node features from the preceding layer to determine the correspondence between filter weights and nodes in a local neighborhood. Thus it is able to adjust the filters dynamically based on the final prediction task. Graph Convolutional Network (GCN) proposed by Kipf et al. <cit.> is a simple generalization of convolutions to graph structures where no ordering of the neighbors exists. It uses a first-order approximation of spectral graph convolutions to aggregate features from neighboring nodes. 
Graph Attention Network (GAT) proposed by Veličković et al. <cit.> incorporates an attention mechanism into convolutions on graphs, where the attention scores are used as weights for aggregating the features from neighboring nodes in each layer. The attention mechanism is able to (implicitly) assign different weights to different nodes in a neighborhood. Graph Kernel Network (KernelNN) is a deep learning approach proposed by Anandkumar et al. <cit.> for learning a mapping between two infinite-dimensional spaces. It uses kernel integration with a learnable Nyström kernel as an approximation of the true neural operator. In the original experiments Anandkumar et al. use a high-resolution grid on which the simulation is computed, but the model itself can be applied to non-grid measurements. Graph PDE Networks (GraphPDE) proposed by Iakovlev et al. <cit.> use the neural network to parameterize the dynamics (rate of change) of the system rather than making predictions directly. Similar approaches have been proposed for grid data <cit.>, outperforming classical architectures for this type of task. All of these approaches, including graph PDE networks, use the parameterization learned by message passing graph neural networks together with a differentiable ODE solver to obtain predictions. CNN originally developed by LeCun et al. <cit.> uses learnable convolutional filters to enforce translation invariance of the learned mapping with respect to the input position. While it was originally proposed for computer vision tasks, it has since been used in the context of learning to predict dynamical systems from data. In our experiments we include a simple architecture with several stacked CNN layers, as well as a ResNet variant with residual connections <cit.>. NeuralPDE is a model proposed by Dulny et al. <cit.> combining a convolutional neural network used to parametrize the dynamics (rate of change) of a physical system with differentiable ODE solvers to calculate predictions. The authors use convolutional layers to approximate partial differential operators, as they directly translate into a discretization using finite differences. This type of architecture has been shown to perform exceptionally well on a variety of physical data. Persistence describes the baseline obtained by applying the rule “today's weather is tomorrow's weather”. It suggests the last known input as the prediction of the next state. Any forecasting model should be able to outperform this baseline to be considered useful. The persistence baseline is a common reference method in machine learning for time series forecasting tasks. §.§ Setup We trained and evaluated all selected models on the DynaBench dataset using the 5000 training simulations for each equation, and 1000 simulations each for validation and testing. The input for the models is an H-step lookback of the system state (the previous H states) measured at K locations, which we merge along the feature dimension. Specifically, for a physical system describing D variables, the resulting input at each measurement location has dimension H× D. We train all models on predicting the next step of the simulation by minimizing the mean squared error (MSE): min_ϕ 𝔼[m_ϕ(X_t‖ X_t-1‖…‖ X_t-H+1) - X_t+1]^2 where X_t+1, X_t, X_t-1,… describe the state of the physical system at times t+1, t, t-1, …; m_ϕ is the neural network model with learnable parameters ϕ; H is the lookback history; and ‖ denotes the concatenation operator. 
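For concreteness, a minimal PyTorch-style sketch of this one-step objective is given below; the tensor layout (time, measurement points, variables) and all names are our own simplifications rather than the reference training code.

import torch

def one_step_pair(u, t, H):
    # u: simulation tensor of shape (T, K, D); returns the H-step lookback input
    # of shape (K, H * D) and the next state of shape (K, D).
    history = u[t - H + 1 : t + 1]                         # (H, K, D)
    x = history.permute(1, 0, 2).reshape(u.shape[1], -1)   # (K, H * D)
    return x, u[t + 1]

def train_step(model, optimizer, u, t, H=8):
    # One gradient step on the MSE between the predicted and the true next state.
    x, y = one_step_pair(u, t, H)
    loss = torch.mean((model(x) - y) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()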
For evaluating the models we roll out R prediction steps in a closed-loop setting where the predictions of previous states are used as input for predicting the new state. Specifically: X̂_t+1 = m_ϕ(X_t‖ X_t-1‖…‖ X_t-H+1) X̂_t+2 = m_ϕ(X̂_t+1‖ X_t‖…‖ X_t-H+2) X̂_t+3 = m_ϕ(X̂_t+2‖X̂_t+1‖…‖ X_t-H+3) ⋮ X̂_t+R = m_ϕ(X̂_t+R-1‖X̂_t+R-2‖…‖X̂_t-H+R) In our experiments we use H=8, K=900 and R = 16. §.§ Results <Ref> shows the results of our experiments for single-step predictions on the test simulations. Our results show that non-grid models, such as kernel-based neural networks and graph-based neural networks, can perform similarly to grid-based models for short-term (1-step) predictions. Among the models trained on unstructured data, PointGNN and Point Transformer show the best performance. However, for longer-term predictions, the grid-based models outperform the non-grid models, as shown in <Ref>. For the grid-based models the underlying spatial structure is fixed, so they do not need to additionally learn the dependencies between neighboring measurements. We hypothesize that because of the simpler spatial dependencies, grid-based models are able to generalize better and thus capture the long-term evolution of the system more accurately. Interestingly, we found that the models specifically designed to learn to solve PDEs, such as KernelNN and GraphPDE, performed worse than the other models on our low-resolution data, as opposed to the high-resolution data on which they were originally evaluated. This suggests that their underlying assumptions may be too strong to handle such data effectively. Additionally, our study brings to light that long-term predictions are still an unsolved challenge for all models. The divergence in predictions, as illustrated in <Ref>, occurs rapidly and is particularly prominent in systems such as the gas dynamics and Kuramoto-Sivashinsky equations, where the prediction error exceeds 0.5 after only 16 prediction steps. This level of error, which is half of the standard deviation of the data (as explained in <Ref>), renders these long-term predictions unusable. Thus, our findings emphasize the need for further research and development in this field to address this issue. § CONCLUSION We have proposed a new benchmark dataset for learning dynamical systems from data under the assumption that measurements are sparse and not structured on a grid. This is closer to real-world data than other available resources, as measurements are typically obtained from monitoring stations scattered within the observation domain. The DynaBench dataset covers a wide range of physical systems with different properties such as the number of connected variables, the degree of the differential operators, etc. We have thoroughly evaluated several models capable of learning functions on arbitrary geometries on the DynaBench dataset, including graph neural networks, point-cloud processing models and several state-of-the-art approaches. Our results show that the selected models are on par with state-of-the-art grid models in providing accurate short-term predictions, but long-term forecasting remains an open challenge. We hope that the release of DynaBench will facilitate and encourage research in this area, leading to advancements in the state of the art and, as a consequence, to more accurate models for the kind of real-world data that our benchmark mirrors. 
§ ETHICAL STATEMENT This research paper proposes a benchmark dataset and evaluates several machine learning models for learning dynamical systems from data. The use of benchmarking is a common practice in the machine learning community to compare different models in a standardized setting. Synthetic datasets are used because they allow for a controlled environment and can be generated easily. However, it should be noted that synthetic data can never perfectly represent real-world data, and as such, every model should also be evaluated on real-world data before being used in critical applications. Potential risks associated with incorrect predictions of important systems such as weather and climate simulations or electromagnetic field simulations for safety assessment should be discussed thoroughly. Synthetic datasets can provide a useful starting point for model evaluation and the development of new approaches, but models need to be assessed on domain-specific data before real-world deployment, particularly for safety-critical applications. While our proposed benchmark dataset and evaluated machine learning models provide useful insights into learning dynamical systems, they should not be used as the sole basis for making important political decisions, particularly concerning weather or climate data. While data-driven approaches have repeatedly shown their superiority over classical methods in a variety of applications, they are also prone to overfitting and adversarial attacks if not carefully designed and validated. The risks and benefits of replacing existing numerical simulations or expert knowledge with deep learning approaches should always be taken into account and thoroughly discussed when developing and applying new models. Any decision based on machine learning models should be made after considering the potential sources of error the models introduce, as well as the lack of explainability of black-box approaches.
http://arxiv.org/abs/2306.04880v1
20230608022101
Low-Scaling Algorithm for the Random Phase Approximation using Tensor Hypercontraction with k-point Sampling
[ "Chia-Nan Yeh", "Miguel A. Morales" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "physics.chem-ph", "physics.comp-ph" ]
We present a low-scaling algorithm for the random phase approximation (RPA) with k-point sampling in the framework of tensor hypercontraction (THC) for electron repulsion integrals (ERIs). The THC factorization is obtained via a revised interpolative separable density fitting (ISDF) procedure with a momentum-dependent auxiliary basis for generic single-particle Bloch orbitals. Our formulation requires neither pre-optimized interpolating points nor auxiliary bases, and the accuracy is systematically controlled by the number of interpolating points. The resulting RPA algorithm scales linearly with the number of k-points and cubically with the system size without any assumption on sparsity or locality of orbitals. The errors of the ERIs and the RPA energy show rapid convergence with respect to the size of the THC auxiliary basis, suggesting a promising and robust direction for constructing efficient algorithms of higher-order many-body perturbation theories for large-scale systems. § INTRODUCTION Kohn-Sham density functional theory (KS-DFT)<cit.> has become the standard tool in the study of ground-state properties of molecules and solids due to its capability of efficiently treating large-scale systems with reasonable accuracy. Nevertheless, there are many well-known cases in which DFT fails to provide even qualitatively correct results, especially when local and semilocal functionals are used. Despite intense theoretical focus over the years and given the inherent difficulties in developing universally accurate approximations to the unknown exchange-correlation functional for correlated systems, a systematically improvable framework purely within the context of DFT has not yet emerged<cit.>. In contrast, many-body perturbation theories (MBPTs), as a promising alternative, provide a systematic framework to include electron correlations for ground-state as well as excited-state properties<cit.>. Among different MBPTs, the random phase approximation (RPA)<cit.> is one of the simplest and most popular choices for calculating correlation energies beyond DFT. While many formulations and variants of RPA exist<cit.>, the framework based on the adiabatic connection fluctuation-dissipation theorem (ACFDT) is typically used in connection with advanced exchange-correlation functionals for ground-state properties<cit.>. In addition, the RPA approach is connected to MBPT through the Klein functional<cit.>, evaluated at the level of the GW approximation<cit.>. The infinite sum of bubble diagrams in RPA provides the screening effects which are important for non-local correlation effects and van der Waals interactions. As a result, RPA is applicable to small-gap and metallic systems, unlike second-order Møller-Plesset perturbation theory (MP2), whose correlation energy diverges for systems with vanishing gaps<cit.>. The conventional RPA energy for solids in a plane-wave basis requires a number of operations that scales quartically with the system size (N) and quadratically with the number of k-points (N_k), which makes its application to large-scale systems rather expensive compared to DFT. Numerical techniques and optimizations have been introduced to reduce the formal scalings<cit.> and prefactors<cit.>. In particular, the space-time approach<cit.> achieves a cubic scaling in terms of the system size and a linear scaling in terms of the number of k-points. This is achieved by evaluating the polarizability on a real-space grid and on the imaginary-time axis. 
Despite its appealing scaling, the space-time approach has only recently become competitive with other formulations as a result of new developments on efficient Fourier transforms on the imaginary axis<cit.>. Nevertheless, due to the large dimension of a real-space grid, the memory load is rather high, and the prefactors of the scaling laws are large compared to the quartic-scaling algorithm formulated in a canonical basis. A quartic-scaling algorithm for RPA can also be formulated in a localized single-particle basis with decomposition schemes of electron repulsion integrals (ERIs) such as Cholesky decomposition (CD)<cit.> and the resolution-of-the-identity (RI) (also known as density fitting (DF)) technique<cit.>. Conceptually, both of these decomposition schemes factorize a rank-4 ERI tensor into a product of two rank-3 tensors by introducing an auxiliary basis whose size grows linearly with the system size. These types of decompositions result in substantial savings both in storage requirements and in the number of operations, reducing the scaling of RPA in a localized basis from O(N^6) to O(N^4). One advantage of localized bases is their relatively compact size compared to plane-waves, so that the prefactors are significantly smaller compared to the space-time approach. For molecules and Γ-point supercells with small or intermediate sizes, the quartic-scaling algorithm in a localized basis could be more efficient than the cubic-scaling algorithm from the space-time approach, especially in the presence of core electrons. Further complexity reduction can be achieved by exploiting the sparsity of the fitting coefficients from DF with the overlap or the Coulomb-attenuated metric<cit.>, and the locality of atomic orbitals<cit.>. Nevertheless, the assumptions of sparsity and locality of orbitals are valid only in the limit of large systems or for particular electronic properties, which restricts their general applicability. In contrast, the O(N^4) algorithm for RPA based on DF/CD for ERIs is less appealing for solid-state systems due to the quadratic scaling with the number of k-points, which originates from the fact that the k-point indices in a rank-3 DF/CD tensor are not fully separable. Unlike the quartic scaling with the system size, the O(N_k^2) complexity cannot be straightforwardly alleviated by exploiting sparsity or locality of orbitals. Furthermore, the lack of customized atomic orbitals for solids hinders convergence to the complete basis set limit. Standard Gaussian-type orbitals (GTOs) optimized in an atomic environment cannot be directly transferred to periodic systems due to linear dependency problems in the presence of diffuse orbitals<cit.>. The problem becomes even more severe for DF, whose accuracy relies on the existence of a customized auxiliary basis set for a solid environment. An alternative decomposition of ERIs is tensor hypercontraction (THC), proposed by Hohenstein and co-workers<cit.>. THC expresses an ERI tensor as a product of five matrices such that full separation of the four orbital indices in an ERI is obtained. There are different approaches to achieve the THC factorization such as PARAFAC (PF) THC<cit.>, least-squares (LS) THC<cit.>, and interpolative separable density fitting (ISDF)<cit.>. Due to the full separation of the four orbital indices, THC is able to further reduce the memory loads and the number of operations compared to DF and CD approaches. 
THC has been extensively applied to molecules and Γ-point supercells in the context of hybrid functionals<cit.>, Hartree-Fock (HF) theory<cit.>, coupled-cluster (CC) theory<cit.>, MP2 and MP3<cit.>, RPA<cit.>, GW<cit.>, and auxiliary-field quantum Monte Carlo (AFQMC)<cit.>. In contrast, for periodic calculations with k-point sampling, THC has only been used to accelerate the computation of hybrid functionals<cit.>. In this paper, we present an efficient algorithm for RPA with k-point sampling in the framework of THC. The formulation is based on a revised ISDF procedure for Bloch orbitals with a momentum-dependent THC auxiliary basis, resulting in full separation of both the orbital and the k-point indices. Both the preparation steps for the ERIs and the evaluation steps of the RPA energy can be performed at a cost of O(N_kN^3) in the number of operations and O(N_kN^2) in memory load, without assumptions on sparsity or locality of orbitals. In the evaluation of the RPA energy, the largest dimension N corresponds to the size of the THC auxiliary basis rather than the size of a real-space grid, which makes the prefactors much smaller compared to the standard space-time approach. We analyze the error convergence of the ERIs and the RPA energy with respect to the size of the THC auxiliary basis for different numbers of virtual orbitals, different numbers of k-points, and different sizes of unit cells. The paper is organized as follows. Sec. <ref> introduces ERIs for periodic calculations, and Sec. <ref> presents k-point THC via our revised ISDF procedure. In Sec. <ref>, we discuss the formulation of RPA in the framework of THC with k-point sampling. We then summarize the computational details in Sec. <ref>, and report results of our implementations of THC and RPA in Sec. <ref>. Lastly, our conclusion is presented in Sec. <ref>. § ELECTRON REPULSION INTEGRALS In the presence of translational symmetry, a suitable single-particle basis for the electronic Hamiltonian of a crystalline system is the Bloch orbital: ϕ^k_i(r) = u^k_i(r)e^ikr where the superscripts {k} denote crystal momenta, the subscripts are referred to as orbital indices, and u^k_i(r) are periodic functions with respect to lattice translations. In practice, the Bloch orbitals could be “downfolded” KS orbitals from a plane-wave basis, periodic Gaussian basis functions, or any other properly symmetry-adapted set of basis functions. The ERIs in this basis are defined as V^k_ik_jk_kk_l_ i j k l = ∫ dr∫ dr' ϕ^k_i*_i(r)ϕ^k_j_j(r)1/|r-r'|ϕ^k_k*_k(r')ϕ^k_l_l(r') where crystal momenta live in the first Brillouin zone with the assumption of momentum conservation, i.e. k_i - k_j = k_l - k_k + G, where G is a reciprocal lattice vector. In first-principles calculations, the electronic Hamiltonian is constructed by discretizing the first Brillouin zone with a finite number of k-points (N_k) and truncating the Hilbert space using a fixed number of orbitals per unit cell (N_orb). The size of the ERI tensor thus grows cubically with N_k and quartically with N_orb, which becomes a bottleneck both in computation cost and in memory requirements as the system size increases. Furthermore, any operation on this bulky rank-4 tensor leads to a poor scaling in terms of the number of operations due to the inseparability of the orbital and momentum indices. § TENSOR HYPERCONTRACTION We assume the following tensor hypercontraction (THC) representation of Eq. 
<ref> in a generic Bloch basis set: V ^k_ik_jk_kk_l_ i j k l≈∑_μνX^k_i*_μ iX^k_j_μ jV^q_μνX^k_k*_ν kX^k_l_ν l where the momentum transferred between the Bloch pair densities is folded back to the first Brillouin zone (q = k_i - k_j + G = k_l - k_k + G'), and the greek letters denote the auxiliary basis introduced in the THC decomposition. For a given size of the auxiliary basis (N_μ), the procedure of a THC decomposition consists of the determination of X^k and V^q matrices. When N_μ is smaller than N_orb^2, Eq. <ref> corresponds to a low-rank approximation to an ERI tensor. In practice, N_μ = O(N_orb) is expected to achieve good accuracy due to the low-rank structure of the ERI tensor. The expression of Eq. <ref> provides full separation of the orbital and momentum indices which not only reduces the memory requirements but also enables a low-scaling algorithm for RPA energy (see Sec. <ref>). In this work, we proposed a revised ISDF procedure, based on the works from Lu and coworkers<cit.>, to construct the THC factorization with a momentum-dependent auxiliary basis. §.§ Interpolative separable density fitting for solids For a fixed transferred momentum q that lives in the first Brillouin zone, we view the Bloch pair densities for arbitrary k-points as a matrix ρ^q(kij, r) = ϕ^k-q*_i(r)ϕ^k_j(r), and then we perform an interpolative decomposition (ID)<cit.> to ρ^q: ρ^q(kij, r) = ϕ^k-q*_i(r)ϕ^k_j(r) ≈∑_μϕ^k-q*_i(r_μ)ϕ^k_j(r_μ)ζ^q(μ, r) where {r_μ} is a set of interpolating points, and {ζ^q_μ(r)} are interpolating vectors that interpolate pair densities to an arbitrary real-space point r from {r_μ}. The number of interpolating points (N_μ) can either be an input parameter or determined on-the-fly for given accuracy. Since the size of the real-space grid (N_r) scales linearly with the number of electrons (N_e), N_μ is expected to grow as 𝒪(N_e) as well. Due to the periodicity of the pair densities in the momentum space, it is easy to verify that ζ^q_μ(r) is also a Bloch function, i.e. ζ^q+G_μ(r) = ζ^q_μ(r). The structure of Eq. <ref> resembles the widely-used density fitting decomposition<cit.> if one identifies the interpolating vectors {ζ^q_μ(r)} as the auxiliary basis set. However, the fitted coefficients are now separable both in the orbital and the k-point indices, and the auxiliary basis set is numerically determined during the fitting procedure rather than taken from a set of predefined functions. This fitting procedure is performed independently for each q-point to generate a set of q-dependent interpolating basis {ζ^q_μ(r)}. In principle, the optimal interpolating points should also be q-dependent. However, as shown in the Supporting Information, we empirically found that taking {r_μ} from q=0 consistently results in comparable accuracy as in the case that uses q-dependent interpolating points. Finally, a THC representation of ERIs is obtained by inserting Eq. <ref> into Eq. <ref>: V^k_ik_jk_kk_l_ i j k l ≈∑_μνϕ^k_i*_i(r_μ)ϕ^k_j_j(r_μ) [∫ dr∫ dr' ζ^-q_μ(r)1/|r-r'|ζ^q_ν(r') ] ϕ^k_k*_k(r_ν)ϕ^k_l_l(r_ν) =∑_μνX^k_i*_μ iX^k_j_μ jV^q_μνX^k_k*_ν kX^k_l_ν l where we define q = k_i - k_j + G = k_l - k_k + G', and X^k_i_μ i = ϕ^k_i_i(r_μ), V^q_μν = ∫ dr∫ dr' ζ^-q_μ(r)1/|r-r'|ζ^q_ν(r'). The accuracy of Eq. <ref> is controlled by the accuracy of ISDF procedure (Eq. <ref>) which can be systematically improved by increasing N_μ. What remains is how to obtain the ID representation in Eq. <ref>. 
The standard procedure of ID consists of first selecting the interpolating points and then solving a least-squares problem to obtain the interpolating vectors<cit.>. In our implementation, we select the interpolating points using the recently-proposed scheme based on the Cholesky decomposition of the THC metric matrix at q=0<cit.> (see Sec. <ref>). Once the interpolating points are chosen, the interpolating vectors are obtained from the least-squares solution of the following over-determined set of linear equations: C^qΘ^q = Z^q where Z^q_νr = ∑_kijρ^q*(kij,r_ν)ρ^q(kij, r), C^q_νμ = ∑_kijρ^q*(kij,r_ν)ρ^q(kij, r_μ), Θ^q_μr = ζ^q_μ(r). Due to the separability of the orbital indices in ρ^q, the evaluation of Eq. <ref> scales as 𝒪(N_kN_orbN_μN_r + N_klnN_kN_μN_r). Once Z^q and C^q are assembled, solving the linear system scales as 𝒪(N_kN_μ^2N_r + N_kN_μ^3). Overall, the evaluation of interpolating vectors scales linearly with N_k and cubically with the system size. §.§.§ Cholesky-based approach for interpolating points In the Cholesky-based approach<cit.>, the interpolating points are selected through the pivoted Cholesky decomposition on the matrix S^q = (ρ^q)^†ρ^q: S^q = Π^q(R^q)^†R^q(Π^q)^-1 where Π^q is the pivoting matrix and R^q consists of the Cholesky vectors with the diagonal elements in the descending order. For a given N_μ, the interpolating points are then chosen to be those rows that correspond to the first N_μ pivots in Π^q. This approach is a reformulation of QR factorization with column pivoting (QRCP) on the matrix ρ^q, ρ^qΠ^q = Q^qR^q, which is the standard approach to select interpolating points in IDs<cit.>. However, Eq. <ref> has several advantages over Eq. <ref> from a numerical point of view. First of all, due to the separability of the orbital indices in ρ^q, the evaluation of S^q scales as 𝒪(N_kN_orbN_r^2) which is asymptotically cheaper than the evaluation of ρ^q (𝒪(N_k^2N_orb^2N_r)). Secondly, an direct QRCP on the matrix ρ^q is prohibitively expensive. Instead, the randomized algorithm of QRCP is typically implemented to reduce the cost to 𝒪(N_kN_orb^2N_r). On the other hand, the iterative procedure of pivoted Cholesky allows one to construct the matrix R^q incrementally in a deterministic manner and terminate the algorithm once the error is below a user-defined threshold or the number of Cholesky vectors exceeds N_μ. Therefore, the pivoted Cholesky decomposition on S^q can be done at the cost of 𝒪(N_kN_μ^2N_r). § RPA ENERGY The grand potential Ω of an interacting many-electron system can be expressed using the Klein functional<cit.> Ω_K[G] = Φ[G] + E_H + Tr[1 - G^-1_0G] - Tr[ln(-G^-1)] with the Hartree (Coulomb) energy E_H, the non-interacting Green's function G_0, the interacting Green's function G, and the Luttinger-Ward functional Φ[G]<cit.>. The interacting Green's function relates to its non-interacting counterpart through the Dyson equation G^-1(ω) = G^-1_0(ω) - Σ(ω) in which G and the self-energy Σ are solved in a self-consistent manner. Since a self-consistent solution of the Dyson equation is computationally demanding, Eq. <ref> is usually evaluated at an effective non-interacting Green's function such as the Kohn-Sham (KS) Green's function G_KS(r,r', ω) = ∑_iψ^*_i(r)ψ_i(r')/ω - ϵ_i + iδ where {ψ_i} are the KS orbitals and {ϵ_i} are the KS single-particle energies. Inserting Eq. <ref> into Eq. 
<ref>, the single-particle nature of G_KS allows us to write down the relation<cit.> F[G_KS] = Ω_K[G_KS] + μ N = E_HF[{ψ_i}] + Φ_c[G_KS] where F is the Helmholtz free energy, E_HF[{ψ_i}] is the Hartree-Fock (HF) energy evaluated using the KS orbitals, and Φ_c[G_KS] is the correlation part of the Luttinger-Ward function evaluated at G_KS. In the RPA approximation, Φ_c is represented as a sum of bubble diagrams, Φ^RPA_c = -1/2Tr{ [(χ_0V) + 1/2(χ_0V)^2 + 1/3(χ_0V)^3 + 1/4(χ_0V)^4 + …] - (χ_0V)} = 1/2Tr{ln[1 - χ_0V] + χ_0V }, where χ_0 = G_KSG_KS is the KS polarizability, V is the bare Coulomb interaction, and the Tr{} operator denotes a sum over all degrees of freedom. At the zero-temperature limit, Eq. <ref> is identical to the RPA correlation energy in the framework of adiabatic-connection fluctuation-dissipation theorem (ACFDT)<cit.>. §.§ THC-HF The HF energy expressed in a canonical basis {ψ_i} is E_HF[{ψ_i}] = 1/2N_k∑_k∑_ijρ^k_ij(V^HF)^k_ji where ρ^k is the single-particle density matrix and (V^HF)^k is the canonical HF potential. With the THC representation of ERIs from Eq. <ref>, (V^HF)^k_ij = J^k_ij + K^k_ij can be reformulated as J^k_ij = 2/N_k∑_k'∑_abρ^k'_abV^kkk'k'_ijba = 2/N_k∑_k'∑_μνX^k*_iμX^k_jμV^q=0_μν∑_abX^k'_aνρ^k'_abX^k'*_bν = ∑_μX^k*_iμ{2/N_k∑_k'∑_νρ^k'(r_ν, r_ν)V^q=0_μν}X^k_jμ and K^k_ij = -1/N_k∑_q∑_abρ^k-q_abV^k,k-q,k-q,k_i, a, b, j = -1/N_k∑_q∑_μνX^k*_iμV^q_μνX^k_jν∑_abX^k-q_aμρ^k-q_abX^k-q*_bν = ∑_μνX^k*_iμ{-1/N_k∑_qρ^k-q(r_μ, r_ν)V^q_μν} X^k_jν in which J^k is the Coulomb term, K^k is the exchange term, ρ^k is the single-particle density matrix, and ρ^k(r_μ, r_ν) is referred to as the electron density evaluated on the THC interpolating points: ρ^k(r_μ, r_ν) = ∑_abX^k_aμρ^k_abX^k*_bν = ∑_abϕ^k_a(r_μ)ρ^k_abϕ^k*_b(r_ν). The most time-consuming part in THC-HF is Eq. <ref> which scales as O(N_klnN_kN_μ^2+N_kN_orbN_μ^2). Here, the logarithmic complexity comes from the fast Fourier transform (FFT) convolution in the momentum space. Therefore, the evaluation of THC-HF scales linearly with N_k and cubically with the system size. This complexity is asymptotically much better than other approaches formulated in a canonical basis, such as those based on the Gaussian density-fitting technique<cit.> or the Cholesky decomposition<cit.>. In addition, compared to the real-space formalism which has the same formal scaling, the prefactor of THC-HF is several orders of magnitude smaller since N_μ≪ N_r. Even though the preparation steps for obtaining the THC decomposition of ERIs still acquires a large prefactor from the dimension of the real-space grid, this step is only done at once in the beginning of the calculation, no matter the number of self-consistent cycles in THC-HF. §.§ THC-RPA Similar to HF, the non-interacting polarizability can be reformulated using the THC interpolating points and the THC auxiliary basis on the imaginary-time axis. On a real-space grid, the polarizability reads χ_0(r, r'; τ) = G(r, r'; τ)G(r', r; -τ) = ∑_kq∑_abcdϕ^k_a(r)ϕ^k-q*_c(r)G^k_ab(τ)G^k-q_dc(-τ)ϕ^k-q_d(r')ϕ^k*_b(r') = ∑_qk∑_μνζ^q_μ(r)G^k(r_μ, r_ν; τ)G^k-q(r_ν, r_μ; -τ)ζ^q*_ν(r') = ∑_q∑_μνζ^q_μ(r)χ_0^q(r_μ, r_ν; τ)ζ^q*_ν(r') where G^k(r_μ, r_ν; τ) = ∑_abϕ^k_a(r_μ)G^k_ab(τ)ϕ^k*_b(r_ν), χ^q_0(r_μ, r_ν; τ) = ∑_kG^k(r_μ, r_ν; τ)G^k-q(r_ν, r_μ; -τ). Inserting Eq. <ref> into Eq. <ref>, we recast the first-order term into -1/2Tr {χ_0V} = -1/2β∑_n∫ dr∫ dr' χ_0(r, r'; iΩ_n)V(r, r') =-1/2β∑_n∑_q∑_μνχ_0^q(r_μ, r_ν; iΩ_n) V^q_νμ in which V^q_νμ is defined in Eq. <ref>. 
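To illustrate how these quantities could be assembled in practice, the following numpy sketch evaluates G^k and χ_0^q on the interpolating points for a single transferred momentum q and a single imaginary time; the array layout and function names are our own simplified assumptions, not the production implementation.

import numpy as np

def greens_on_points(X_k, G_k):
    # G^k(r_mu, r_nu; tau) = sum_ab X^k_{mu a} G^k_ab(tau) X^k*_{nu b},
    # with X_k of shape (N_mu, N_orb) holding the orbitals on the interpolating points.
    return X_k @ G_k @ X_k.conj().T

def chi0_q(X, G_tau, G_mtau, k_minus_q):
    # chi_0^q(r_mu, r_nu; tau) = sum_k G^k(r_mu, r_nu; tau) G^{k-q}(r_nu, r_mu; -tau).
    # X, G_tau, G_mtau are lists over k-points; k_minus_q[k] is the index of k - q.
    n_mu = X[0].shape[0]
    chi0 = np.zeros((n_mu, n_mu), dtype=complex)
    for k in range(len(X)):
        g_k = greens_on_points(X[k], G_tau[k])
        g_kq = greens_on_points(X[k_minus_q[k]], G_mtau[k_minus_q[k]])
        chi0 += g_k * g_kq.T   # elementwise product with swapped point indices
    return chi0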
Similar reformulation can be applied to the higher-order terms of Eq. <ref>, and the final expression of Φ^RPA_c reads Φ^RPA_c = 1/2β∑_n∑_q∑_μ{ln[1 - χ_0^q(iΩ_n)V^q] + χ_0^q(iΩ_n)V^q}_μμ. The formal scaling of Eqs. <ref> and <ref> scales as O(N_τN_kN_orbN_μ^2) and O(N_ΩN_kN^3_μ) respectively, and Eq. <ref> can be evaluated using the FFT convolution at the cost of O(N_τN_kln N_kN_μ^2). Therefore, each step formally scales linearly with N_k and cubically with the system size. Particularly, Eq. <ref> would be the most time-consuming step, assuming the sizes of the Matsubara frequencies and the imaginary-time mesh are similar. Note that the low-scaling algorithm of THC-RPA is a consequence of the full separability in the orbital and k-point indices from the THC factorization of ERIs. This formalism does not rely on any assumption on sparsity, and it can be applied to any generic Bloch orbitals as long as there is an reasonably compact ID for the pair densities. In addition to the formal cubic scaling, the THC-RPA algorithm has a much smaller prefactor compared to the space-time formalism. This can be seen from the construction of the non-interacting polarizability (Eqs. <ref> and <ref>) in which the two formalisms look almost the same except that the real-space grid in the space-time formalism is replaced by the THC interpolating points. Since the dimension of the later is often orders of magnitude smaller than the former, THC-RPA gains further speedup even compared to the space-time formalism. § COMPUTATIONAL DETAILS For all the data presented in this work, we choose the KS orbitals from a DFT calculation as the single-particle Bloch basis. Unless mentioned otherwise, the total number of KS states is taken to be 8 times of the number of electrons per unit cell. All functions in this basis set are used to construct the electronic Hamiltonian in THC factorization and compute the HF and the RPA correlation energy. All DFT calculations are performed with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional<cit.> using <cit.>. Core electrons are described by norm-conserving pseudopotentials optimized for the PBE functional<cit.>, and the kinetic energy cutoff is set to 55 a.u. for all systems unless mentioned otherwise. RPA calculations are performed exclusively on the imaginary axes at inverse temperature β = 2000 a.u. (T ≈ 158 K). Dynamic quantities, including fermionic and bosonic functions, are expanded into the intermediate representation (IR)<cit.> with sparse sampling on both the imaginary-time and Matsubara frequency axes<cit.>. Both the IR basis and the sampling points are generated using <cit.> open-source software package. § RESULTS In this section, we present the results of our implementation of the THC decomposition of ERIs, THC-HF and THC-RPA. To facilitate the comparison between different physical systems and basis sets, we define the metrics α = N_μ/N_orb which represents the size of the auxiliary basis as a multiple of the size of the single-particle Bloch orbitals. §.§ ERI comparison between different factorization schemes We first investigate the error of the THC representation of ERIs for a given size of the auxiliary basis. Fig. <ref> shows the maximum error of the ERI tensor V^k_ik_jk_kk_l_ i j k l, including the occupied-occupied, the occupied-virtual, and the virtual-virtual orbital blocks, for selected physical systems calculated from the THC, and the Cholesky decomposition at different sizes of the auxiliary bases. 
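As an aside on implementation, the closed-form expression for Φ^RPA_c above reduces, for each (q, iΩ_n) pair, to a trace over the N_μ-dimensional auxiliary space. The sketch below evaluates that trace from the eigenvalues of χ_0^q(iΩ_n)V^q; the inputs chi0_w and V_q are assumed to be precomputed arrays, and the plain frequency sum stands in for the IR sparse-sampling quadrature used in the actual calculations.

    import numpy as np

    def rpa_phi_c(chi0_w, V_q, beta):
        # chi0_w: (N_w, N_q, N_mu, N_mu), V_q: (N_q, N_mu, N_mu), beta: inverse temperature
        # Phi_c^RPA = 1/(2*beta) sum_{n,q} Tr{ ln[1 - chi0^q(iW_n) V^q] + chi0^q(iW_n) V^q }
        phi = 0.0
        for n in range(chi0_w.shape[0]):
            for q in range(V_q.shape[0]):
                lam = np.linalg.eigvals(chi0_w[n, q] @ V_q[q])
                # Tr ln(1 - M) = sum_i ln(1 - lambda_i) for diagonalizable M; Phi_c is real
                phi += np.real(np.sum(np.log(1.0 - lam) + lam))
        return phi / (2.0 * beta)

In practice the symmetrized combination (V^q)^{1/2} χ_0^q (V^q)^{1/2} is often diagonalized instead for numerical stability; the unsymmetrized product is used here only for brevity.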
In the following, the auxiliary basis for Cholesky decomposition is referred to as the Cholesky vectors. The selected systems are chosen to be Si, LiH, and MgO with increasing band gaps on a 2×2×2 Γ-centered Monkhorst-Pack grid. Such a small k-grid is to alleviate the high computation cost of assembling the full ERI tensor from the decomposed forms. As will be demonstrated in the next section (Fig. <ref>), the convergence of THC should remain consistent no matter the size of the k-mesh. The reference data are calculated from Cholesky decomposition with convergence tolerance equals to 10^-8. Overall, the two decomposition schemes show monotonic convergence as the sizes of the auxiliary bases increase. The similar convergence behavior among the three selected systems with different numbers of orbitals suggests that the rank of the full ERI tensor only grows linearly with the system size, independent to the details of the system, e.g. the size of the band gaps. Among the two factorization schemes, Choleksy decomposition consistently shows faster convergence since it does not require a fully separable form in the orbital and k-point indices. For THC, we found that α_THC=8 already gives us accuracy better than 1 mHartree for all orbital blocks in ERIs. The consistent accuracy for different orbital blocks is because both of the decomposition schemes treat the occupied and the virtual orbitals on an equal footing. Therefore, both of the Cholesky and the THC decomposition are applicable to not only mean-field calculations but also correlated methods which involve the occupied-virtual and the virtual-virtual interactions. Despite a larger auxiliary basis, THC is still computationally favorable compared to Cholesky decomposition due to the linear scaling with the number of k-points and the cubic scaling with the system sizes. From the perspective of memory usage, the fully separable form of THC reduces the storage requirement from O(N_k^2N_orb^2N_μ) to O(N_kN_μ^2) compared to Cholesky decomposition. Such memory reduction allows for the possibility to compute and store the full decomposed ERI on-the-fly and avoid I/O entirely. §.§ RPA free energy Next, we analyze the convergence of the RPA free energy with respect to the size of the auxiliary basis. Fig. <ref> shows the error of the HF (Eq. <ref>) and the RPA correlation energy (Eq. <ref>) per atom, calculated using ERIs in the THC decomposition as a function of α = N_μ/N_orb. For consistency, we choose the same physical systems on the same k-mesh as in Sec. <ref>. We also show the HF and the RPA results from the Cholesky decomposed ERI, denoted as Chol-HF and Chol-RPA, using our in-house library for many-body theory which closely follows the finite-temperature implementation in Ref.. Both of our implementations are able to systematically converge to the same results within given accuracy as α increases since ISDF and Cholesky decomposition are both systematically controlled approximations. Such a high accuracy calculation is not possible with the conventional density fitting techniques in which the error is subject to the choice of a pre-defined auxiliary basis set. Compared to the error in ERIs, the convergence of energetics is less smooth since the errors coming from the ERI factorization propagate non-linearly in the energy evaluation. However, the overall trend remains the same, i.e. one can achieve approximately 1 mHartree and 0.01 mHartree accuracy at α_THC = 8 and 16, respectively. 
Unlike HF, the RPA correlation energy requires the information of interactions from virtual orbitals. The consistent accuracy for both THC-HF and THC-RPA once again demonstrates that all orbital blocks in ERIs are well described by THC. Even though the order of magnitude of the THC-HF and THC-RPA errors are different, we do see a systematic convergence trend in all quantities consistently. From the perspective of computational cost, in order to achieve 1 mHartree accuracy, our implementation of the THC-based algorithms (THC-HF/THC-RPA) are already faster than Chol-HF/Chol-RPA for our selected systems. As the number of k-points and the system size increase, the speedup in THC-HF and THC-RPA would be even more pronounced. Next, we analyze the error of the THC-based methods with respect to the number of single-particle basis functions. We construct the electronic Hamiltonians in THC factorization with N_orb = c N_elec (c=4∼10) at α_THC=10, and then perform THC-HF and THC-RPA calculations respectively. As shown in Fig. <ref>, the accuracy of THC-HF and THC-RPA remains similarly as the size of the basis set increases, which suggests that the rank of ERI tensors scales linearly, rather than quadratically, with the number of basis functions. This behavior is observed in systems with different band gaps and even in a metallic system (SrVO_3) with transition metal atoms. Note that this is in contrast to Ref. in which the error of THC-based methods is reported to increase as the size of the basis set enlarges. We believe the consistent accuracy observed in this work is due to the more robust choice of interpolating points provided by the pivoted Cholesky decomposition of the metric matrix, which leads to a consistent treatment of both occupied and virtual spaces. Such consistent accuracy among different physical systems manifest the power of THC-based methods compared to low-scaling algorithms which rely on sparsity and locality. We now look at how the error of the THC decomposition behaves with respect to the number of k-points and the size of a supercell. As shown in Fig. <ref>, we compute the free energy in the random phase approximation using the THC decomposed ERIs for a primitive cell of Si on different k-meshes and Γ-point supercells of Si with different number of atoms. As we go to a larger k-mesh, the error of the free energy in the random phase approximation converges in a quantitatively similar manner. This is somewhat expected since the THC decomposition in our formulation is performed for each q-point independently, and therefore the q-dependent auxiliary bases are tailored to fit the Bloch pair densities for each q-point specifically. This further verifies that our previous analysis on a small 2×2×2 k-mesh should be transferable to finer k-point sampling. Likewise, the error of RPA free energy per atom remains similarly as the size of the unit cell increases, especially for when α≤ 8. This is consistent to Ref. in which the error of extensive quantities scales linearly with the system size. Lastly, we show the RPA equation of state of Carbon in the diamond phase in Fig. <ref>. The HF and the RPA correlation energy are calculated on a 15×15×15 and a 8×8×8 Γ-centered Monkhorst-Pack grid respectively. Due to the infinite summation over the virtual orbitals in the polarizability, the convergence of RPA correlation energy is notoriously slow with respect to the number of KS states<cit.>. 
To obtain the converged values, we perform THC-RPA calculations for different numbers of KS orbitals and then extrapolated to the infinite basis set limit by fitting the formula Φ^RPA_c(N_orb) = a/N_orb + b. The Birch-Murnaghan equation<cit.> is then fit to the extrapolated curve to extract the lattice constant (a) and bulk modulus (B). The predictions are a = 3.57 Å and B = 430 GPa respectively. In addition, we have also performed RPA calculations using <cit.> with the same numbers of KS orbitals and the same extrapolation strategy (dashed brown line). The results are a=3.57 Å and B=433 GPa which is in a good agreement with our implementation. §.§ Complexity analysis To demonstrate the low-scaling complexity of THC-based many-body perturbation theory, we show the total CPU timing of our THC-RPA implementations, including the steps for the preparation of ERI and the steps for the evaluation of RPA correlation energy. The systems are chosen to be the conventional unit cell of Si on a n× n × n Monkhorst-Pack grid (n=1∼6) and the Γ-point supercells of Si with 8, 16, 54, 128 atoms per unit cell. The kinetic energy cutoff is set to 30 hartree. As shown in Fig. <ref>, the time of the preparation of ERI and the steps for RPA energy scales linearly with the number of k-points and cubically with the number of atoms per unit cell. The observed speedup against to the O(N_kN^3) scaling is expected to be alleviated as the dimensions of a system further increase. Despite the same asymptotic scaling in the preparation and the RPA steps, the prefactors of these algorithms are quite different. In THC-ERI, the prefactor is proportional to O(N_μ^2N_r) while the prefactor of the dominant steps in THC-RPA scales as O(N_ωN_μ^3), coming from Eq. <ref> where N_ω is the dimension of the Matsubara frequencies. Therefore, the relative computational cost of these two steps is given by the ratio of N_r and N_ωN_μ. For the systems considered in this section, the timings of THC-ERI are slightly larger than those of THC-RPA. However, as the size of the single-particle basis increases, the cost of THC-RPA would increase faster compared to THC-ERI. In addition, THC-RPA becomes more expensive at lower temperature since the number of Matsubara frequency points required increases. On the other hand, for systems with very deep-lying orbitals, THC-ERI could become more computationally expensive due to a very large kinetic energy cutoff. § CONCLUSION We introduce a low-scaling algorithm for RPA with k-point sampling based on the THC decomposition of ERIs. The THC representation of ERIs is achieved via a q-dependent ISDF procedure for Bloch pair densities in which both of the auxiliary basis and the fitting coefficients are computed on-the-fly for a given Bloch single-particle basis. Both the preparation steps of ERIs and the RPA parts scale linearly with the number of k-points and cubically with the system sizes due to the full separability of k-point and orbital indices in the THC representation of ERIs. The formalism is applicable to generic Bloch functions without an assumption on locality of orbitals, and its accuracy is systematically controlled by the size of the auxiliary basis. For our selected systems, we found N_μ = 8N_orb is enough to achieve 1 mHartree accuracy for the ERI tensor, including all orbital blocks, and energies per atom. Such an observation is independent to the number of virtual orbitals, the number of k-points, and the size of a unit cell. 
The compactness of the THC auxiliary basis enables many-body calculations for large-scale systems. Extending the periodic THC formulation to GW and vertex corrections will be explored in follow-up works, and the code will be made open-source in the near future.
Errors of ERIs calculated from the THC factorization with q-dependent interpolating points; comparison with the original ISDF procedure for Bloch orbitals.
We thank Alexander Hampel, Olivier Parcollet, and Antoine Georges for helpful discussions. We also thank Nils Wentzell for help with software libraries. The Flatiron Institute is a division of the Simons Foundation.
http://arxiv.org/abs/2306.09511v1
20230615211455
Generation of multiple ultrashort solitons in a third-order nonlinear composite medium with self-focusing and self-defocusing nonlinearities
[ "André C. A Siqueira", "Edilson L. Falcão-Filho", "Boris A. Malomed", "Cid B. de Araújo" ]
physics.optics
[ "physics.optics", "nlin.PS" ]
APS/123-QED Corresponding author: [email protected] Departamento de Física, Universidade Federal de Pernambuco, 50670-901 Recife, PE, Brazil Department of Physical Electronics, School of Electrical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 69978, Israel Instituto de Alta Investigación, Universidad de Tarapacá, Casilla 7D, Arica, Chile Departamento de Física, Universidade Federal de Pernambuco, 50670-901 Recife, PE, Brazil Theoretical consideration of the propagation of femtosecond-Gaussian pulses in a 1D composite medium, consisting of alternating self-focusing (SF) and self-defocusing (SDF) waveguide segments with normal group-velocity dispersion predicts the generation of trains of bright solitons when an optical pulse first propagates in the SF segment, followed by the SDF one. The multiple temporal compression (MTC) process, based on this setting, offers a method for controllable generation of multiple ultrashort temporal solitons. Numerical solutions of the generalized nonlinear Schrödinger equation modeling this system demonstrate that the intrapulse Raman scattering plays a major role in the temporal and spectral dynamics. Collisions between ultrashort solitons with different central wavelengths are addressed too. The paper provides, for the first time, a procedure for producing controllable trains of ultrashort temporal solitons by incident optical pulses propagating in a composite medium. Generation of multiple ultrashort solitons in a third-order nonlinear composite medium with self-focusing and self-defocusing nonlinearities Cid B. de Araújo July 31, 2023 ============================================================================================================================================ § INTRODUCTION The dynamics of pulse propagation in media with anomalous dispersion and self-focusing nonlinearity is crucially important for soliton formation. Indeed, considering higher-order dispersion and the nonlinear (NL) terms affecting the pulse propagation, such as the self-phase modulation, self-steepening, and intrapulse Raman scattering, the generation of multiple solitons may occur during the pulse propagation, initiated by temporal compression followed by soliton fission <cit.>. Most commonly, the input pulse evolves into a single temporal soliton with the remaining energy being spilled out as dispersive waves. However, under specific conditions, the energy released by the first soliton fission may also evolve into multiple temporal solitons <cit.>. Then, as the first fission takes place around the central peak of the input pulse, the first emerging soliton has a higher peak power than the secondary solitons. Hence, the first solitary pulse features a stronger soliton self-frequency shift (SSFS), which is accompanied by temporal deceleration through the intrapulse Raman scattering and anomalous dispersion <cit.>. Also, efforts have been made to explore media providing a negative NL refractive index (n_2) in the normal-dispersion regime. For instance, birefringent NL crystals, such as LiNbO_3, BBO, and KTP, have been investigated to control the effective NL refractive index (n_2,eff) acting on the pulse, where the strong negative second-order cascading NL regime can overcompensate the positive Kerr nonlinearity; then, a medium with a defocusing effective nonlinearity, n_2,eff< 0, in the normal dispersion regime <cit.> is obtained. Such media have also been used to compress pulses, and to control the supercontinuum generation <cit.>. 
On the other hand, studies of composites containing metallic nanoparticles (NPs) have drawn much interest of the NL optics community. These metal-dielectric nanocomposites may present large and controllable n_2,eff due to contributions of the host material and NPs. The amplitude and sign of the nonlinearity can be controlled by proper selection of size, shape, and volume fraction of the NPs <cit.>. For example, it was shown that optical fibers based on fused silica, doped with silver NPs, exhibit n_2,eff < 0, and, in the case of the normal dispersion, they can provide the generation of bright solitons controlled by the volume fraction of the metal NPs and the central wavelength of the pulse <cit.>. In the present work, we report numerical results, based on the generalized NL Schrödinger equation (GNLSE), which exhibits generation of trains of ultrashort solitons under the action of the normal dispersion, without soliton fission near the central peak of the pulse. Our approach addresses the propagation of 800 nm femtosecond pulses in a stacked 1D system in such a way that the pulse, initially, propagates in a self-focusing (SF) waveguide followed by a self-defocusing (SDF) one. The dispersion of both waveguide segments is assumed to be normal in the entire composite system, providing generation of solitons via what is called the multiple temporal compression (MTC) process. For the calculation, the first segment in the waveguide is assumed to be made of pure fused silica (with n_2 > 0), while the second one is made of silica doped with silver NPs, having n_2 < 0. In this scenario, all generated solitons feature a temporal acceleration due to the intrapulse Raman scattering. § THE THEORETICAL APPROACH To perform simulations of the pulse propagation through the two segments of the composite medium with n_2 of opposite signs, pure and doped fused silica based waveguides were considered. The unidimensional propagation of an optical pulse, represented by a slowly varying envelope amplitude A(z,T) of the electric field, can be modeled by the GNLSE <cit.>, ∂ A/∂ z - (∑_n⩾ 2β_ni^n+1/n!∂^n A/∂ T^n) = iγ_eff(1 + i/ω_0∂/∂ T)( (1- f_r) A|A|^2 + γ_0/γ_efff_rA∫_0^∞h_r(τ)|A(z,T- τ)|^2dτ). Numerical solutions of Eq.(1) were performed by means of the fourth-order Runge-Kutta method in the interaction picture, which is more accurate in comparison to other methods, such as the conventional split-step Fourier-transform scheme <cit.>. In Eq.(1), β_n (n⩾2) are the higher-order dispersion coefficients which come from the Taylor expansion of the propagation constant β(ω) around the central frequency (ω_0). The respective values of the high-order dispersion coefficients at 800 nm, numerically calculated from the Sellmeier expression for fused silica, are <cit.>: β_2 = +3.63· 10^-2 ps^2m^-1 , β_3 =+ 2.75· 10^-5 ps^3m^-1, β_4 = -1.10· 10^-8 ps^4m^-1, β_5 = +3.15· 10^-11 ps^5m^-1, β_6 = - 8.00· 10^-14 ps^6m^-1, and β_7 = +2.50· 10^-16 ps^7m^-1. The right-side of Eq.(1) describes the NL effects, where γ_0 = ω_0n_2(ω_0)/cA_eff is the usual NL waveguide coefficient and the A_eff is the effective modal area. The value of γ_0 is taken to be +0.0025 W^-1 m^-1, assuming that for fused silica at 800 nm we have n_2=+2.5 · 10^-20 m^2W^-1, and the waveguide diameter is 10 μ m. We set the effective NL parameter for the propagation in the first segment as γ_eff= γ_0 > 0, and γ_eff= ω_0n_2,eff(ω_0)/cA_eff = -γ_0 in the second segment. 
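For orientation, the following is a minimal sketch of how Eq. (1) can be advanced numerically over one step Δz. It uses a plain symmetric split-step Fourier scheme with only the instantaneous Kerr term, rather than the fourth-order Runge–Kutta interaction-picture method and the full Raman convolution employed for the results below; all variable names are illustrative, and the sign of the odd-order dispersion terms depends on the Fourier-transform convention and should be checked against Eq. (1) before use.

    import numpy as np
    from numpy.fft import fft, ifft, fftfreq
    from math import factorial

    def split_step(A, dz, dT, betas, gamma_eff):
        # A: complex envelope A(z, T) on a uniform grid with spacing dT (ps); dz in m
        # betas: [(2, 3.63e-2), (3, 2.75e-5), ...]  dispersion orders n and beta_n in ps^n m^-1
        omega = 2.0 * np.pi * fftfreq(A.size, d=dT)            # angular-frequency offset from omega_0
        D = 1j * sum(b * omega**n / factorial(n) for n, b in betas)
        half = np.exp(0.5 * dz * D)                            # half-step linear (dispersion) operator
        A = ifft(half * fft(A))
        A = A * np.exp(1j * gamma_eff * np.abs(A)**2 * dz)     # full-step instantaneous Kerr (SPM) only
        A = ifft(half * fft(A))
        return A

Propagation through the composite medium then amounts to repeated calls with γ_eff = +γ_0 over the first segment and γ_eff = -γ_0 over the second.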
The respective negative NL effective refractive index, n_2,eff, can be provided by doping fused silica with silver NPs (with the volume fraction ≈ 10^-4), as enabled by the Maxwell-Garnet theory <cit.>. Another possibility to implement a negative NL parameter is offered by a BBO crystal for type-I phase-matching with θ≈ 27.5^∘, where θ is the angle between the input beam and optical c-axis of the BBO crystal <cit.>. The temporal derivative on the right-hand side of Eq.1 is associated with the third-order NL effects, such as self-steepening and optical-shock formation. Concerning the integral term on the right-side of Eq.(1), the contribution of the delayed Raman response to the NL polarization is represented by f_r (equal to 0.18 for silica fibers <cit.>), which leads to effects such as the intrapulse Raman scattering and SSFS, while the local term represents the self-phase modulation (SPM) produced by the instantaneous electronic Raman contribution. Therefore, in this work, like in Ref.<cit.>, we assume that both the self-focusing and defocusing waveguide segments are characterized by equal Raman terms. § GENERATION OF SOLITON PAIRS BY MULTIPLE TEMPORAL COMPRESSION (MTC) §.§ Pulse propagation in uniform media To examine the generation and evolution of bright solitons due to the pulse propagation in the composite medium with n_2 of opposite signs, it is instructive to analyze the soliton dynamics considering only negative (Fig. <ref>(a)-(c)) or positive (Fig. <ref>(d,e)) nonlinearity. Then, we considered the propagation of a Gaussian input pulse A (0,T)=√(P_0)exp⁡(-0.5(1.665T/T_FWHM)^2), with central wavelength (λ_0 = 800 nm), time duration (T_FWHM = 90 fs), and peak power (P_0 = 85 kW). In the sample segment with n_2 < 0 (γ_eff = - 0.0025 W^-1 m^-1) with total length L = 150 mm, the interplay between n_2 < 0 and the normal dispersion gives rise to the typical soliton fission around the central region of the pulse after the propagation distance L ≈ 22 mm. A modest spectral broadening driven by SPM due to the temporal compression of the pulse is shown in Fig. <ref>(a)-(c). In cases of stronger spectral broadening it is possible to generate supercontinuum spectra throughout the visible to near-IR range, as the result of the first soliton fission <cit.>. As soon as the bright soliton acquires its form, we observe its robust evolution with high peak power (≈ 3.8P_0) under the influence of the intrapulse Raman scattering, which shifts the central wavelength to the red (Stokes) side during the propagation (Fig. <ref>(c)). Once the bright soliton propagates under the action of the normal dispersion, its group velocity increases as its spectral center shifts towards longer wavelengths, leading to the soliton acceleration. This phenomenon exemplifies the SSFS effect and may strongly influence the temporal and spectral evolution of ultrashort solitons <cit.>. Concerning the pulse propagation in the medium with the positive nonlinearity (γ_0 = + 0.0025 W^-1 m^-1), Figs. <ref>(d,e) show the respective temporal and spectral evolution. The most important feature observed in this configuration is the temporal broadening which leads to reduction of the pulse peak power during the propagation. In the next section we explore the pulse propagation in the composite sample, with the first and second waveguide segments having n_2 > 0 and n_2 < 0, respectively. 
Figures <ref>(d,e) are helpful for selecting the length of the first segment so as to avoid spectral saturation due to the decrease of the pulse peak power. Thus, the criterion for choosing the length of the first segment with positive n_2 should be based on the balance between the NL phase accumulated from the action of the temporal SPM and temporal broadening of the pulse under the action of the normal dispersion. §.§ Pulse propagation in the composite medium In this subsection we investigate the pulse propagation initiated by the same Gaussian pulse considered in the previous section (λ_0 = 800 nm, T_FWHM = 90 fs, and P_0 = 85 kW). The pulse propagates first in the segment with n_2 > 0, and then in the second segment with n_2 < 0. The calculation was performed for three different lengths of the first segment (L_1 = 5, 10, and 30 mm). Figure 2(a-f) shows the spectral and temporal evolution of the pulse for each value of L_1. Notice that for L_1 = 5 mm, besides the generation of a soliton pair caused by the double temporal compression, almost symmetrically with respect to the pulse’ center, the pair of bright solitons collide (at L ≈ 50 mm) and fuse into one after propagation for few millimeters. Then, the fused soliton propagates with a higher peak power ,≈ 4P_0, similar to the case displayed above in Fig. <ref>(b). Because there is no soliton collision for L_1 = 10 mm (Fig. <ref>(b)), the soliton pair generated due to the double temporal compression gives rise to two fundamental bright solitons with similar peak powers, on both sides of the pulse. An essential feature of this dynamics is the contribution of the intrapulse Raman scattering, which is also similar for both solitons; they propagate close to each other as they experience similar red-shifts due to the SSFS. Therefore, increasing L_1 from 5 to 10 mm, it is worthy to mention that the scenarios explored in this work offer a potential for studying collisions between solitons with slightly different central wavelengths. Note the oscillatory pattern in the spectral evolution displayed in Fig. <ref>(e). These oscillations are attributed to the spectral superposition of the soliton pair, which is not observed in the case of L_1 = 5 mm because in such a case the soliton collision allows the soliton to experience relevant SSFS only at the earlier stage of the evolution, while later it propagates with a spectrum around 800 nm. Concerning a longer first segment (L_1 = 30 mm), as shown in Fig.<ref>(c,f), even though the pulse has accumulated more NL phase in the first segment in comparison to the other cases, the temporal broadening of the Gaussian pulse in the normal-dispersion regime is enough to weaken the soliton generation in the second waveguide segment. This fact indicates the importance of knowing results of the propagation through the first segment, concerning the spectral saturation, to predict the soliton generation in the second medium. § GENERATION OF SOLITONS BY MULTIPLE TEMPORAL COMPRESSION (MTC) In the previous sections instead of the most common scenario with the soliton generation occurring around the central region of the pulse, the soliton generation was observed from the double temporal-compression effect occurring on both sides of the pulse passing the composite medium. In this section, we explore the soliton generation by considering a Gaussian input pulse like that addressed in the previous section (λ_0 = 800 nm, T_FWHM = 90 fs), but with higher peak powers. 
First we fix the lengths of the first and second segments as L_1 = 15 mm and L_2 = 75 mm. Then, like the previous section, the pulse propagates, at first, in the segment with γ_0 = + 0.0025 W^-1 m^-1, and then in the second one with γ_eff = - 0.0025 W^-1 m^-1. In this configuration, the pulse demonstrates well-balanced temporal and spectral broadening in the first segment, which is required to observe the soliton generation in the second one. The simulations were performed by taking the following values of input peak power: 0.15 MW, 0.30 MW, 0.45 MW, and 0.60 MW. For 0.15 MW a modest spectral broadening (from 750 nm to 870 nm) was observed in the first waveguide segment at -30 dB with respect to the maximum value. This power was enough to generate two pairs of bright solitons in the second segment, as shown in Fig. <ref>(e) (the red curve). Furthermore, it is worthy to note that this configuration leads to the spectral shift towards the red side, chiefly due to the contribution of the intrapulse Raman scattering that acts individually onto each fundamental soliton according to the relation Ω_p (z)∝-z/T_FWHM^4 <cit.>. Considering input peak powers larger than 0.15 MW, more bright solitons are generated in the leading and trailing edges of the pulses due to the MTC, which is an intrinsic feature of the soliton generation driven by the pulse propagation in the segments with n_2 of opposite signs and normal dispersion. Thus, the soliton generation starts from the first generation of the soliton pair, almost temporally symmetric with respect to the central region. The pair represents the soliton leading (S_LE) and trailing (S_TE) edges. As the pulse propagates in the second nonlinear segment additional fundamental soliton pairs are generated due to the MTC process. The generation of additional solitons which considers the soliton order of N ≈ 1 due to the influence of the Raman term on the spectral and temporal evolution of the solitons, is illustrated by Eq.(2): N = T_FWHM/1.665√(γ_effP_0^soliton/β_2) = = 0.1581√(P_0^solitonT_FWHM^2[ps])≈ 1 . To analyze the characteristics of the generated solitons, Table 1 shows typical values of peak power (P_0^soliton) and time duration (T_FWHM) of the S_LE and S_TE according to the peak power of the Gaussian input pulse. Evaluating the increase in the input peak power from Table 1, temporal compression of S_LE was observed, accompanied by an increase in its peak power, while the S_TE did not show conspicuous variations in their peak powers and time durations for the Gaussian input pulse with P_0 > 0.30 MW. Concerning the secondary solitons generated in this case, we note that they tend to be slightly broader, as they are generated closer to the central region of the pulse, without a strong variation in their peak powers, as shown in Fig. <ref>(e-h). Another interesting feature revealed by the analysis is presented in Fig.4, which shows the relation between the input peak power of the Gaussian input pulse (the horizontal axis) and the respective number of generated solitons (black dots), following the relation between the input peak power of the Gaussian pulse and its equivalent soliton orders, viz., N(P_0) ∝√(P_0) (the blue curve in Fig. 4). For instance, considering the Gaussian input pulse corresponding to an N-th order soliton, it creates approximately N fundamental solitons during the propagation, thus demonstrating a noteworthy phenomenon of the energy redistribution among the bright solitons. 
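A small numerical check of Eq. (2) and of the N(P_0) ∝ √P_0 trend of Fig. 4: with the parameters used above (γ_eff = 0.0025 W^-1 m^-1, β_2 = 3.63 × 10^-2 ps^2 m^-1, T_FWHM = 90 fs), the equivalent soliton order of the Gaussian input pulse can be tabulated for the peak powers considered. The printed values are only the estimate of Eq. (2); the actual soliton counts are those extracted from the simulations.

    import numpy as np

    gamma_eff = 0.0025      # W^-1 m^-1
    beta2 = 3.63e-2         # ps^2 m^-1
    T_fwhm = 0.09           # ps (90 fs)

    def soliton_order(P0):
        # Eq.(2): N = (T_FWHM / 1.665) * sqrt(gamma_eff * P0 / beta2), with P0 in W
        return (T_fwhm / 1.665) * np.sqrt(gamma_eff * P0 / beta2)

    for P0_MW in (0.15, 0.30, 0.45, 0.60):
        print(f"P0 = {P0_MW:.2f} MW  ->  N = {soliton_order(P0_MW * 1e6):.1f}")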
Because of the discrete nature of the soliton creation, there are different values of input peak power resulting in the same number of the generated solitons as illustrated by horizontal black segments in Fig. 4; in the degeneracy ranges, small temporal compressions of the solitons were observed with the increase of the input peak power. This feature is more relevant for higher input peak powers, making the degeneracy ranges broader. Concerning the soliton dynamics illustrated by Fig. <ref>, when SSFS pushes the central soliton wavelength towards longer values, conspicuous soliton acceleration occurs in the normal-dispersion regime, leading to increase of the soliton's group velocity as they individually experience the action of the SSFS effect. Then, to investigate the temporal and spectral evolution of the bright solitons in this setting, we focused on the specific case with the input peak power 0.25 MW, L_1 = 15 mm, and second segment length L_2 = 135 mm. In this way, Fig. <ref>(a,b) shows the temporal and spectral evolution of the Gaussian pulse with the same parameters as in Fig. <ref> (except for L_2). We applied numerical filters to temporally isolate each fundamental soliton during its propagation, and then applied the Fourier transform to produce their spectra. As shown in Fig. <ref>(c), when all solitons have already been generated (at L = 40 mm), it is straightforward to conclude that the solitons generated at the frontal region of the pulse are red-shifted (λ_0 > 800 nm), and ones generated at the back of the pulse are blue-shifted (λ_0 < 800 nm), in such a way that the soliton's central wavelength slightly decreases while proceeding from the leading edge of the pulse (S_1, S_2, and S_3) to the trailing edge (S_4, S_5, and S_6). Once the two solitons generated at the early stage of the evolution in the leading edge (S_1 and S_2) are temporally isolated, there is no soliton collision which would change their central wavelengths, and hence their group velocities. Therefore, as both solitons have similar peak powers and pulse durations, they experience a similar SSFS, which provides a continuous soliton acceleration in the normal-dispersion regime. Concerning the solitons generated in the trailing edge, each blue-shifted soliton (S_4, S_5, and S_6) experience its own SSFS, a trend to acquire longer wavelengths is seen for all solitons in Fig. <ref>(a), from the observation of the increase in their group velocities. In the case of the propagation length L = 60 mm, due to the higher peak power of S_6 in comparison to neighboring solitons (S_4 and S_5), S_6 undergoes stronger SSFS, so that the central wavelengths of S_4, S_5, and S_6 coalesce to the value of 800 nm. Hence, close to L = 60 mm solitons S_4, S_5, and S_6 have similar group velocities, propagating together with the central region of the pulse if the Raman contribution suddenly turns off. Although strong SSFS suffered by the S_6 soliton produces notable soliton acceleration, after propagation in a distance L ≈ 100 mm, its central wavelength exceeds the wavelengths of neighboring solitons. This trend stops when the S_6 soliton transfers part of its energy to the S_5 soliton through an inelastic collision at L ≈ 120 mm. After collision, the S_5 soliton quickly shifts its central wavelength to longer values, while the S_6 soliton experiences a transient effect due to the soliton depletion. Thus, only after the propagation distance of L = 140 mm, the S_6 soliton demonstrate conspicuous SSFS. 
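The per-soliton spectra used above (numerical filtering followed by a Fourier transform) can be sketched as the short post-processing routine below; A and T denote the simulated envelope and its time grid (illustrative names), and [t1, t2] is the window containing the soliton of interest. In practice a smooth (e.g., super-Gaussian) window reduces spectral leakage compared with the hard cut used here.

    import numpy as np
    from numpy.fft import fft, fftshift, fftfreq

    def soliton_spectrum(A, T, t1, t2):
        # isolate one soliton with a rectangular temporal window, then take its power spectrum
        window = (T >= t1) & (T <= t2)
        A_sol = A * window
        dT = T[1] - T[0]
        spectrum = np.abs(fftshift(fft(A_sol)))**2
        freq = fftshift(fftfreq(T.size, d=dT))   # frequency offset from the carrier at lambda_0 = 800 nm
        return freq, spectrum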
To summarize, the generation of multiple solitons through the MTC process is initiated by the pulse propagation in the first segment under the action of the normal dispersion and SF. The respective output signal carries positive chirp, as shown in the spectrogram at L = L_1 = 15 mm in Fig.6 (a). At this point, the pulse has accumulated positive nonlinear and linear phases. Because the second segment applies the normal dispersion and negative nonlinearity to the propagating pulse, there is a transient length associated with the NL phase compensation. It means that the frequency generation by SPM occurs at opposite edges with respect to the frequency generation in the first segment. For instance, the leading edge of the pulse supports the generation of red-shifted components in the first waveguide segment and blue-shifted ones in the second segment. Once the red-shifted components propagate faster than their blue-shifted counterparts (in the normal-dispersion regime), the generation of multiple solitons through the temporal compression after passing the transient length is enabled at both edges of the pulse. In Fig. 5(b), the transient length is between L = 15 mm and L = 30 mm, where the accumulation of the positive linear phase prevents the full compensation of the pulse chirp, what would prevent the MTC process. After the transient length, the first soliton pair is generated at both edges of the pulse, as shown in the spectrogram for L = 30 mm in Fig.6 (b). Then, almost the whole pulse is split into multiple solitons. In particular, at L = 75 mm, the system creates six solitons (see Fig. 6 (d)), and the above-mentioned soliton collision between S_5 and S_6 at L = 120 mm can be seen in the spectrogram in Figs. 6(e,f). § CONCLUSIONS In this work, we propose a method to generate multiple ultrashort temporal solitons by a pulse propagating in a composite waveguide consisting of two segments with opposite signs of cubic refractive nonlinearity, while the dispersion is normal in both segments. Systematic simulations of the corresponding generalized NL Schrödinger equation demonstrate that pairs of temporal solitons are generated symmetrically with respect to the central region of the pulse, and as they propagate more solitons are generated from the leading and trailing edges until the central region of the pulse creating optical scenarios able to observe soliton collision. The physical process that enables the soliton generation is MTC (Multiple Temporal Compression), which may be obtained in systems composed of successive pairs of SF (self-focusing) and SDF (self-defocusing) materials. This may be considered as a scheme of nonlinearity management, i.e., a chain of alternating SF and SDF segments with a common GVD value <cit.>. The multi-soliton dynamics considered in this work can be extended by producing Newton's cradle already investigated in <cit.>, i.e., propagation of collision waves in a multi-soliton chain. Also, we demonstrated, for the first time, that the MTC process provides a method to generate ultrashort temporal solitons from a single input pulse, where the number of generated solitons can be controlled by the input peak power. This phenomenon is hard to be observed in a single waveguide under anomalous dispersion regime due to the energy limitation once a high energy concentration is localized in the first generated soliton around the central peak of the pulse. 
Exploitation of the MTC process is a powerful method for controllable generation of multiple solitons due to the energy redistribution among them. This approach was already exploited for the generation of spatial solitons <cit.>, but the investigation of a composite sample like the one studied here has not been considered before. The main advantage of the present approach is the possibility to control the number of generated solitons, which is manageable due to the energy redistribution. This point will be further addressed in a separate work, where a detailed comparison between the MTC process and high-order soliton fission in a PCF fiber will be reported. Another relevant direction for further studies is the implementation of MTC for the generation of multiple dark solitons and subsequently addressing collisions between them. § ACKNOWLEDGMENTS This work was supported by the Brazilian agencies Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq (Grant: 431162/2018-2 and the National Institute of Photonics (INCT) program - Grant: 465.763/2014) and Fundação de Amparo à Ciência e Tecnologia do Estado de Pernambuco (FACEPE); the doctoral scholarship of A.C.A. Siqueira was provided by CNPq. The work of B.A.M. was supported, in part, by the Israel Science Foundation through grant No. 1695/22.
sysoliatin2007soliton A. A. Sysoliatin, A. K. Senatorov, A. I. Konyukhov, L. A. Melnikov, and V. A. Stasyuk, Opt. Express 15, 16302 (2007).
tai1988fission K. Tai, A. Hasegawa, and N. Bekki, Opt. Lett. 13, 392 (1988).
driben2013newton R. Driben, B. A. Malomed, A. V. Yulin, and D. V. Skryabin, Phys. Rev. A 87, 063808 (2013).
dudley2006supercontinuum J. M. Dudley, G. Genty, and S. Coen, Rev. Mod. Phys. 78, 1135 (2006).
dudley2002numerical J. M. Dudley and S. Coen, IEEE J. Quantum Electron. 8, 651 (2002).
braud2016solitonization F. Braud, M. Conforti, A. Cassez, A. Mussot, and A. Kudlinski, Opt. Lett. 41, 1412 (2016).
demircan2008effects A. Demircan, M. Pietrzyk, and U. Bandelow, Opt. Quantum Electron. 40, 455 (2008).
bose2015experimental S. Bose, S. Roy, R. Chattopadhyay, M. Pal, and S. K. Bhadra, J. Opt. 17, 105506 (2015).
gordon1986theory J. P. Gordon, Opt. Lett. 11, 662 (1986).
mitschke1986discovery F. M. Mitschke and L. F. Mollenauer, Opt. Lett. 11, 659 (1986).
Agrawal2013nonlinear G. P. Agrawal, Nonlinear Fiber Optics, 5th ed. (Academic Press, Oxford, 2013).
desalvo1992self R. DeSalvo, D. J. Hagan, M. Sheik-Bahae, G. Stegeman, E. W. Van Stryland, and H. Vanherzeele, Opt. Lett. 17, 28 (1992).
ashihara2002soliton S. Ashihara, J. Nishina, T. Shimura, and K. Kuroda, J. Opt. Soc. Am. B 19, 2505 (2002).
bache2008limits M. Bache, O. Bang, W. Krolikowski, J. Moses, and F. W. Wise, Opt. Express 16, 3273 (2008).
guo2014few H. Guo, X. Zeng, B. Zhou, and M. Bache, Opt. Lett. 39, 1105 (2014).
vsuminas2017second R. Šuminas, G. Tamošauskas, V. Jukna, A. Couairon, and A. Dubietis, Opt. Express 25, 6746 (2017).
conforti2013extreme M. Conforti and F. Baronio, J. Opt. Soc. Am. B 30, 1041 (2013).
zhang2017nonlinear Y. Zhang and Y. Wang, RSC Adv. 7, 45129 (2017).
reyna2017high A. S. Reyna and C. B. de Araújo, Adv. Opt. Photon. 9, 720 (2017).
kassab2018metal L. R. P. Kassab and C. B. de Araújo, Metal Nanostructures for Photonics, 1st ed. (Elsevier, 2018).
reyna2022beyond A. S. Reyna and C. B. de Araújo, J. Opt. 24, 104006 (2022).
bose2016study S. Bose, R. Chattopadhyay, S. Roy, and S. K. Bhadra, J. Opt. Soc. Am. B 33, 1014 (2016).
arteaga2018soliton F. R. Arteaga-Sierra, A. Antikainen, and G. P. Agrawal, Phys. Rev. A 98, 013830 (2018).
bose2018dispersive S. Bose, R. Chattopadhyay, and S. K. Bhadra, Opt. Commun. 412, 226 (2018).
bose2016implications S. Bose, A. Sahoo, R. Chattopadhyay, S. Roy, S. K. Bhadra, and G. P. Agrawal, Phys. Rev. A 94, 043835 (2016).
driben2010solitary R. Driben and J. Herrmann, Opt. Lett. 35, 2529 (2010).
zhao2022effects S. Zhao, R. Guo, and Y. Zeng, Phys. Rev. A 106, 033516 (2022).
hult2007fourth J. Hult, J. Lightwave Technol. 25, 3770 (2007).
malitson1965interspecimen I. H. Malitson, J. Opt. Soc. Am. 55, 1205 (1965).
zhavoronkov2011observation N. Zhavoronkov, R. Driben, B. A. Bregadiolli, M. Nalin, and B. A. Malomed, Europhys. Lett. 94, 37011 (2011).
towers2002stable I. Towers and B. A. Malomed, J. Opt. Soc. Am. B 19, 537 (2002).
jisha2019generation C. P. Jisha, J. Beeckman, F. V. Acker, K. Neyts, S. Nolte, and A. Alberucci, Opt. Lett. 44, 1162 (2019).
http://arxiv.org/abs/2306.04296v1
20230607095428
New upper bounds for the $q$-numerical radius of Hilbert space operators
[ "Arnab Patra", "Falguni Roy" ]
math.FA
[ "math.FA", "47A12, 47A30" ]
New upper bounds for the q-numerical radius of Hilbert space operators Arnab PatraaCONTACT Arnab Patra. Email: [email protected] and Falguni Royb aDepartment of Mathematics, Indian Institute of Technology Bhilai, GEC campus, Raipur, India 492015; bDepartment of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Surathkal, India 575025 July 31, 2023 ====================================================================================================================================================================================================================================================================================================================== This article introduces several new upper bounds for the q-numerical radius of bounded linear operators on complex Hilbert spaces. Our results refine some of the existing upper bounds in this field. The q-numerical radius inequalities of products and commutators of operators follow as special cases. Finally, some new inequalities for the q-numerical radius of 2 × 2 operator matrices are established. q-numerical range; q-numerical radius; operator matrix Primary 47A12; 47A30 Secondary 15A60 § INTRODUCTION Let ℋ denotes a complex Hilbert space with the inner product .,. and ℬ(ℋ) denotes the C^*-algebra of bounded linear operators on ℋ. For T ∈ℬ(ℋ), the operator norm of T can be defined as T = sup_x = 1Tx. Another expression of T in terms of the inner product is as follows T = sup_x = y = 1 | Tx, y |. A norm |||.||| on ℬ(ℋ) is said to be a unitarily invariant norm if it satisfies |||UTV||| = |||T||| for all T ∈ℬ(ℋ) and for all unitary operators U and V in ℬ(ℋ). A norm |||.||| on ℬ(ℋ) is said to be a weakly unitarily invariant norm if it satisfies |||UTU^*||| = |||T||| for all T ∈ℬ(ℋ) and for all unitary operators U in ℬ(ℋ). The numerical range of T, which is denoted by W(T), is defined by W(T) = { Tx,x : x ∈ℋ, x = 1}. The most important properties of W(T) are that it always forms a convex set and its closure contains the spectrum of T. The numerical radius ω(T) and the Crawford number c(T) of T ∈ℬ(ℋ) are defined by ω(T) = sup_x = 1 | Tx, x |, c(T) = inf_x = 1 | Tx, x |. The numerical radius ω(T) defines a weakly unitarily invariant norm in ℬ(ℋ). The operator norm and numerical radius are both equivalent which follows from the following well-known inequality T/2≤ω(T) ≤T. The above inequalities are sharp. Equality holds in the first inequality if T^2 = 0 and in the second inequality if T is a normal operator. For the past several years researchers have attempted to refine the above inequality. Kittaneh <cit.> proved, respectively, that, if T ∈ℬ(ℋ), then ω(T) ≤1/2|T| + |T^*|≤1/2(T + √(T^2)), 1/4T^*T + TT^*≤ω^2(T) ≤1/2 T^*T + TT^* , and if A, B, C, D, S, T ∈ℬ(ℋ) then ω(ATB + CSD) ≤1/2 A|T^*|^2(1 - α) A^* + B^* |T|^2 α B + C |S^*|^2(1 - α) C^* + D^* |S|^2 α D , for all α∈ [0,1] where |T| = (T^*T)^1/2, the absolute value of T. Later on these bounds were refined extensively. For a detailed review of the numerical radius inequalities, we refer to the book <cit.>. There are several generalizations of the classical numerical range exist in the literature. Our focus will be on the q-numerical range and its radius of an operator. Let T ∈ℬ(ℋ) and q ∈ [0,1]. The q-numerical range W_q(T) and q-numerical radius ω_q(T) of T are defined respectively as W_q(T) = { Tx, y : x,y ∈ℋ, x = y = 1, x,y = q}, ω_q(T) = sup_z ∈ W_q(T) |z|. It is easy to verify that if q = 1 then W_q(T) reduces to the classical numerical range W(T). 
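Before turning to the literature, it is worth noting that in finite dimensions the definition above can be probed directly: sampling unit vectors x and setting y = qx + √(1-q^2)z with z ⊥ x, ‖z‖ = 1 (the same decomposition used repeatedly in the proofs below) yields a Monte-Carlo lower estimate of ω_q(T). The short sketch below is only such a numerical sanity check and plays no role in the proofs; all names are illustrative.

    import numpy as np

    def omega_q_estimate(T, q, n_samples=20000, seed=0):
        # Monte-Carlo lower estimate of omega_q(T) = sup |<Tx, y>| over ||x|| = ||y|| = 1, <x, y> = q
        rng = np.random.default_rng(seed)
        n = T.shape[0]
        best = 0.0
        for _ in range(n_samples):
            x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
            x /= np.linalg.norm(x)
            z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
            z -= (x.conj() @ z) * x                    # enforce <x, z> = 0
            z /= np.linalg.norm(z)
            y = q * x + np.sqrt(1.0 - q**2) * z        # unit vector with <x, y> = q
            best = max(best, abs(y.conj() @ (T @ x)))  # |<Tx, y>|
        return best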
The set W_q(T) was first introduced by Marcus and Andresen <cit.> in 1977 for a linear transformation T defined over an n-dimensional unitary space. Nam-Kiu Tsing <cit.> established the convexity of the q-numerical range. Several properties of W_q(T) are discussed by Li et al. <cit.> and Li and Nakazato <cit.>. Chien and Nakazato <cit.> described the boundary of the q-numerical range of a square matrix using the concept of the Davis-Wieldant shell. The q-numerical range of shift operators is also studied <cit.>. Duan <cit.> draws attention to the vital significance that the idea of q-numerical range plays in characterizing the perfect distinguishability of quantum operations. Recently, Moghaddam et al. <cit.> have studied several q-numerical radius bounds. The following are a few of the inequalities they have derived. q/2(2 - q^2)T≤ω_q(T) ≤T, ω^2_q(T) ≤q^2/4(T + √(T^2))^2 + (1 - q^2 + 2q ) |T| ^2 , q^2/4(2 - q^2)^2 T^*T + TT^* ≤ω^2(T) ≤q^2/2(1 - )^2 T^*T + TT^* . These results are in fact generalizations of the corresponding inequalities in (<ref>), (<ref>), and (<ref>) respectively for numerical radius. Our interest in this paper lies in the direction of obtaining refined q-numerical radius inequalities. In section 2 we established upper bounds for q-numerical radius that generalize the results of <cit.>. In addition, the q-numerical radius bounds for 2 × 2 operator matrices are also discussed in section 3. Several examples with figures are provided to supplement the results. § Q-NUMERICAL RADIUS OF T ∈ℬ(ℋ) First, we record a few important properties of the q-numerical radius in the following lemmas. <cit.> Let T ∈ℬ(ℋ) and q ∈ [0,1], then (i) if ℋ = 1 then W_q(T) is non-empty if and only if q = 1 and for ℋ≥ 2, W_q(T) is always non-empty, (ii) W_q(T) is a bounded subset of ℂ and it is compact if ℋ is finite-dimensional, (iii) W_q(U^*TU) = W_q(T) for any unitary operator U ∈ℬ(ℋ), (iv) W_q(aT + bI) = a W_q(T) + bq for complex numbers a and b, (v) W_λ q(T) = λ W_q(T) for any complex numeber λ with |λ| = 1, (vi) W_q(T)^* = W_q(T)^* = {z : z ∈ W_q(T)}, (vii) q σ(T) ⊆W_q(T). <cit.> The q-numerical radius defines a semi-norm on ℬ(ℋ). <cit.> Let T ∈ℬ(ℋ) and m(T) = min{ T - λ I : λ∈ℂ} then W_0(T) = { z : |z| < m(T) } W_0(T) = { z : |z| ≤ m(T) }. The number m(T) is known as the transcendental radius of T. Stampfli <cit.> proved that there exists a unique complex number μ∈W(T) such that m(T) = min{ T - λ I : λ∈ℂ} = T - μ I . Prasanna <cit.> derived another expression for m(T) which is m^2(T) = sup_x = 1 (Tx^2 - | Tx,x |^2). For our study, the following results are crucial. (Bessel's Inequality) Let ℰ be a orthonormal set in ℋ and h ∈ℋ, then ∑_e ∈ℰ |⟨ h, e ⟩|^2 ≤h^2. <cit.> If T ∈ℬ(ℋ), then | Tx,y |^2 ≤ |T|^2 α x, x |T^*|^2 (1 - α)y,y , for all x,y ∈ℋ and α∈ [0,1]. Now we are ready to prove the q-numerical radius inequalities. Let T ∈ℬ(ℋ) and q∈ [0,1], then ω_q^2(T) ≤ q^2 ω^2(T) + ( 1 - q^2 + q√(1 - q^2)) T^2. For q=1 the inequality holds trivially. Let q ∈ [0,1) and x, y ∈ℋ such that x = 1 = y with ⟨ x,y ⟩ = q. Then y can be expressed as y = q x + √(1 - q^2)z, where z = 1 and x,z = 0. In this setting we have |⟨ Tx, y ⟩ | ≤ q |⟨ Tx, x ⟩| + √(1 - q^2)|⟨ Tx, z ⟩ |. Let ℰ be any orthonormal set in ℋ containing x and z. From Bessel's Inequality it follows ∑_e ∈ℰ∖{x} | Tx, e |^2 + | Tx, x |^2 ≤Tx^2 ⇒ ||^2 ≤∑_e ∈ℰ∖{x} | Tx, e |^2 ≤Tx^2 - | Tx, x |^2. 
Using the above relation in equation (<ref>), we get |⟨ Tx, y ⟩ | ≤ q |⟨ Tx, x ⟩| + √(1 - q^2) (Tx^2 - ||^2)^1/2 ⇒ ||^2 ≤ ( q |⟨ Tx, x ⟩| + √(1 - q^2) (Tx^2 - ||^2)^1/2)^2 = q^2 ||^2 + 2q || (Tx^2 - ||^2)^1/2 + (1 - q^2) (Tx^2 - ||^2) ≤ q^2 ||^2 + 2q || (Tx^2 - ||^2)^1/2 + (1 - q^2) Tx^2 ≤ q^2 ||^2 + q (||^2 + Tx^2 - ||^2 ) + (1 - q^2) Tx^2 = q^2 ||^2 + (1 - q^2 + q ) Tx^2 ≤ q^2 ω^2(T) + ( 1 - q^2 + q√(1 - q^2)) T^2. Taking supremum for all x,y ∈ℋ with x = y = 1 and x,y = q we get ω_q^2(T) ≤ q^2 ω^2(T) + ( 1 - q^2 + q√(1 - q^2)) T^2. Now we mention a few observations and derive several corollaries based on the above Theorem. * From the result ω(T) ≤1/2 (T + T^2^1/2), mentioned in Theorem 1 of <cit.>, we have the following corollary from Theorem <ref>. For T ∈ℬ(ℋ) and q ∈ [0,1] we have ω_q^2(T) ≤q^2/4 (T + T^2^1/2)^2 + ( 1 - q^2 + q√(1 - q^2)) T^2. Using the fact |T| = T , we have the following relations ω_q^2(T) ≤ q^2/4 (T + T^2^1/2)^2 + ( 1 - q^2 + q√(1 - q^2)) |T|^2 ≤ q^2/4 (T + T^2^1/2)^2 + ( 1 - q^2 + 2q√(1 - q^2)) T^2. This shows that the inequality mentioned in (<ref>) provides an improvement on the result (<ref>), proved in Theorem 2.10 in <cit.>. * From the relation ω^2(T) ≤1/2T^*T+TT^* as obtained in Theorem 1 in <cit.> we have the following corollary follows from the Theorem <ref>. For T ∈ℬ(ℋ) and q ∈ [0,1], the following relation holds ω_q^2(T) ≤q^2/2T^*T+TT^* + (1 - q^2 + q ) T^2. Here we prove the refinement of the above result in comparison to the existing upper bound (<ref>) of ω^2(T) mentioned in Theorem 3.1 in <cit.>. For this, we use the fact that T^*T + TT^*≥T^2. From Corollary <ref> it follows ω_q^2(T) ≤ q^2/2T^*T+TT^* + (1 - q^2 + q ) T^2 ≤ ( 1 - q^2/2 + q ) ) T^*T+TT^* ≤ q^2/2(1- )^2T^*T+TT^*. The last inequality follows from the fact that q^2/2(1- )^2 - ( 1 - q^2/2 + q ) ) = (1 - q^2)(2 - q^2) + 2 (1 - q^3)/2q^2≥ 0. It is crucial to note that the upper bound given in equation (<ref>) goes unbounded when q approaches zero. However, the upper bound stated in Corollary <ref> is applicable to all q∈ [0,1]. Now we provide a few examples to demonstrate our results. For this the following lemma is crucial. (<cit.>) Suppose 0≤ q ≤ 1 and T∈ M_2(ℂ). Then T is unitarily similar to e^it[ γ a; b γ ] for some 0≤ t ≤ 2π and 0≤ b ≤ a. Also, W_q(T)=e^it{γ q + r((c+pd)coss+i(d+pc)sins): 0 ≤ r ≤ 1, 0 ≤ s ≤ 2π}, with c=a+b/2,d=a-b/2 and p=√(1-q^2). Consider the matrix T=[ 0 1/35; 0 0 ]. Then T=1/35. From Lemma <ref>, W_q(T)={re^is/70(1+√(1-q^2):0≤ r ≤ 1,0≤ s ≤ 2π)}. Therefore, ω_q(T)=1/70(1+√(1-q^2)). In this example, we will compare the upper bounds (<ref>) and (<ref>) of ω_q(T) obtained in <cit.> with our results Corollary <ref> and Corollary <ref>. Since T^2=0, from equations (<ref>) and (<ref>) we repectively get ω_q(T) ≤1/35√(1-3q^2/4+2q√(1-q^2)) and ω_q(T) ≤1/35√(1-3q^2/4+q√(1-q^2)). Figure <ref> presents the graphical representation of upper bounds (<ref>), (<ref>) and ω_q(T). Again using T^*T+TT^*=1/35^2, from eqations (<ref>) and (<ref>) we get the upper bounds ω_q(T)≤q/35√(2)(1-√(1-q^2)) and ω_q(T)≤1/35√(1-q^2/2+q√(1-q^2)). Figure <ref> presents the graphical representation of upper bounds (<ref>), (<ref>) and ω_q(T). It is evident from Figure <ref> and <ref> that upper bounds obtained using Corollary <ref> and Corollary <ref> are more refined than those of <cit.>. Figure <ref> shows the comparison between the obtained results (<ref>) and (<ref>). In this example, the upper bound obtained using Corollary <ref> gives a better result. 
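The closed forms quoted in the example above translate directly into a numerical comparison: the sketch below evaluates the exact ω_q(T) = (1 + √(1-q^2))/70 together with the upper bounds exactly as written in the text, so the curves in the corresponding figures can be reproduced pointwise. Function and variable names are illustrative.

    import numpy as np

    q = np.linspace(0.01, 1.0, 100)
    s = np.sqrt(1.0 - q**2)

    exact      = (1.0 + s) / 70.0                              # omega_q(T) for T = [[0, 1/35], [0, 0]]
    bound_old  = np.sqrt(1.0 - 0.75*q**2 + 2.0*q*s) / 35.0     # existing bound (operator-norm form)
    bound_cor1 = np.sqrt(1.0 - 0.75*q**2 + q*s) / 35.0         # Corollary with the ||T|| term
    bound_cor2 = np.sqrt(1.0 - 0.50*q**2 + q*s) / 35.0         # Corollary with the ||T*T + TT*|| term

    for i in (0, 49, 99):
        print(f"q={q[i]:.2f}: exact={exact[i]:.4f}, old={bound_old[i]:.4f}, "
              f"cor1={bound_cor1[i]:.4f}, cor2={bound_cor2[i]:.4f}")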
Consider another matrix T=[ 0 1/25; 1/36 0 ]. Then T=1/25. From Lemma <ref>, W_q(T)={r/1800((61+√(1-q^2)11)+i(11+√(1-q^2)61)):0≤ r ≤ 1,0≤ s ≤ 2π)}. Therefore, ω_q(T)=1/1800(61+√(1-q^2)11). In this example also we will compare the upper bounds (<ref>) and (<ref>) of ω_q(T) obtained in <cit.> with our results Corollary <ref> and Corollary <ref>. Since T^2=1/900, from equations (<ref>) and (<ref>) we repectively get ω_q(T) ≤1/25√(1-23q^2/144+2q√(1-q^2)) and ω_q(T) ≤1/25√(1-23q^2/144+q√(1-q^2)). Figure <ref> presents the graphical representation of upper bounds (<ref>), (<ref>) and ω_q(T). Now using T^*T+TT^*=1921/25^2 × 36^2, from eqations (<ref>) and (<ref>) we get the upper bounds ω_q(T)≤√(1921/2)q/900(1-√(1-q^2)) and ω_q(T)≤1/25√(1-671q^2/2592+q√(1-q^2)). Figure <ref> presents the graphical representation of upper bounds (<ref>), (<ref>) and ω_q(T). Again, it is clear from Figure <ref> and <ref> that upper bounds obtained using Corollary <ref> and Corollary <ref> are more refined than those of <cit.>. Figure <ref> shows the comparison of our results (<ref>) and (<ref>). Contrary to Example <ref>, in this example, the upper bound obtained using Corollary <ref> gives a better result than that of Corollary <ref>. Figure <ref> and Figure <ref> show that the upper bounds obtained in Corollary <ref> and <ref> are noncomparable. Now we derive another upper bound of the q-numerical radius which refines the Theorem <ref> and also the corresponding corollaries <ref> and <ref>. Let T ∈ℬ(ℋ) and q∈ [0,1], then ω_q^2(T) ≤ q^2 ω^2(T) + ( 1 - q^2 + q√(1 - q^2)) T^2 - (1 - q^2) c(T). For q=1 the inequality holds trivially. Let q ∈ [0,1). In accordance with the proof of Theorem <ref>, it follows from the relation (<ref>) |⟨ Tx, y ⟩ | ≤ q |⟨ Tx, x ⟩| + √(1 - q^2) (Tx^2 - ||^2)^1/2 ⇒ ||^2 ≤ ( q |⟨ Tx, x ⟩| + √(1 - q^2) (Tx^2 - ||^2)^1/2)^2 = q^2 ||^2 + 2q || (Tx^2 - ||^2)^1/2 + (1 - q^2) (Tx^2 - ||^2) ≤ q^2 ||^2 + (1 - q^2 + q ) Tx^2 - (1 - q^2) ||^2 ≤ q^2 ω^2(T) + ( 1 - q^2 + q√(1 - q^2)) T^2 - (1 - q^2) c^2(T). Hence the required relation follows. Here we mention a few important observations related to the above theorem. (i) Clearly the relation (<ref>) improves the relation (<ref>) of Theorem <ref> when c(T)>0. If c(T) = 0, then the the upper bound in (<ref>) reduces to the upper bound of (<ref>). (ii) In this regard it is worth mentioning that c(T) > 0 if and only if 0 ∉W(T). The Corollary <ref> and Corollary <ref> can be improved by using the Theorem <ref> as follows. For T ∈ℬ(ℋ) and q ∈ [0,1] we have ω_q^2(T) ≤q^2/4 (T + T^2^1/2)^2 + ( 1 - q^2 + q√(1 - q^2)) T^2 - (1 - q^2) c^2(T). For T ∈ℬ(ℋ) and q ∈ [0,1], the following relation holds ω_q^2(T) ≤q^2/2T^*T+TT^* + (1 - q^2 + q ) T^2 - (1 - q^2) c^2(T). The following corollary relates the q-numerical radius, numerical radius, and the transcendental radius of an operator T ∈ℬ(ℋ). Let T ∈ℬ(ℋ) and q∈ [0,1], then ω_q(T) ≤ q ω(T) + √(1 - q^2) m(T). Since the case q = 1 is obvious, let q ∈ [0,1) and x, y ∈ℋ such that x = 1 = y with ⟨ x,y ⟩ = q. Then y can be expressed as y = q x + √(1 - q^2)z, where z = 1 and x,z = 0. In this setting we have |⟨ Tx, y ⟩ | ≤ q |⟨ Tx, x ⟩| + √(1 - q^2)|⟨ Tx, z ⟩ | . Using Bessel's inequality, similar to Theorem <ref>, we have ||^2 ≤Tx^2 - | Tx, x |^2. Using the above relation in equation (<ref>), we get | | ≤ q || + √(1 - q^2) (Tx^2 - ||^2)^1/2 ≤ q ω(T) + m(T). Taking supremum over all x and y where x = 1, y = 1, x,y = q we get ω_q(T) ≤ q ω(T) + m(T). This proves the result. 
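The corollary above, ω_q(T) ≤ qω(T) + √(1-q^2)m(T), can likewise be checked numerically for small matrices: m(T) = min_λ ‖T - λI‖ is a convex minimization over a single complex parameter, so a direct numerical search suffices. The sketch below is only such an illustrative check (SciPy's general-purpose minimizer is used for convenience); together with the sampled ω_q(T) from the earlier sketch, it gives the two sides of the inequality.

    import numpy as np
    from scipy.optimize import minimize

    def transcendental_radius(T):
        # m(T) = min over complex lambda of the spectral norm ||T - lambda I||
        n = T.shape[0]
        obj = lambda p: np.linalg.norm(T - (p[0] + 1j * p[1]) * np.eye(n), 2)
        return minimize(obj, x0=[0.0, 0.0], method='Nelder-Mead').fun

    def numerical_radius(T, n_samples=20000, seed=0):
        # Monte-Carlo lower estimate of omega(T) = sup |<Tx, x>| over unit vectors x
        rng = np.random.default_rng(seed)
        n = T.shape[0]
        best = 0.0
        for _ in range(n_samples):
            x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
            x /= np.linalg.norm(x)
            best = max(best, abs(x.conj() @ (T @ x)))
        return best

For a given T and q, one can then verify pointwise that the sampled ω_q(T) does not exceed q·ω(T) + √(1-q^2)·m(T).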
Now we provide a more general q-numerical radius inequality from which several other inequalities related to product and commutators of operators follows. This result is q-numerical radius version of the inequality (<ref>). If A, B, C, D, S, T ∈ℬ(ℋ) and q ∈ [0,1], then ω_q(ATB+CSD) ≤ q/2B^*|T|^2αB+A|T^*|^2(1-α)A^*+D^*|S|^2αD+C|S^*|^2(1-α)C^* + (√(1-q^2)+√(2q√(1-q^2)))(√(B^*|T|^2αBA|T^*|^2(1-α)A^*)+ . . √(D^*|S|^2αD^*C|S^*|^2(1-α)C^*)). The case q=1 follows directly from the relation (<ref>) which was derived in <cit.>. Let x,y∈ℋ such that x = y = 1 with x,y = q. Then we have y = qx + z where z = 1 and x,z = 0. Then from Lemma <ref> we have | (ATB+CSD)x,y | ≤ | TBx,A^*y |+| SDx,C^*y | ≤ B^*|T|^2αBx,x^1/2 A|T^*|^2(1-α)A^*y,y^1/2 + D^*|S|^2αDx,x^1/2 C|S^*|^2(1-α)C^*y,y^1/2 ≤ B^*|T|^2αBx,x^1/2(q^2 A|T^*|^2(1-α)A^*x,x+(1-q^2) A|T^*|^2(1-α)A^*z,z. . + 2q√(1-q^2)| A|T^*|^2(1-α)A^*x,z| )^1/2 + D^*|S|^2αDx,x^1/2(q^2 C|S^*|^2(1-α)C^*x,x+(1-q^2) C|S^*|^2(1-α)C^*z,z. . + 2q√(1-q^2)| C|S^*|^2(1-α)C^*x,z| )^1/2 ≤ B^*|T|^2αBx,x^1/2(q A|T^*|^2(1-α)A^*x,x^1/2+√(1-q^2) A|T^*|^2(1-α)A^*z,z^1/2. . + √(2q√(1-q^2))| A|T^*|^2(1-α)A^*x,z|^1/2) + D^*|S|^2αDx,x^1/2(q C|S^*|^2(1-α)C^*x,x^1/2+√(1-q^2) C|S^*|^2(1-α)C^*z,z^1/2. . + √(2q√(1-q^2))| C|S^*|^2(1-α)C^*x,z|^1/2) = q ( B^*|T|^2αBx,x^1/2 A|T^*|^2(1-α)A^*x,x^1/2 + D^*|S|^2αDx,x^1/2 C|S^*|^2(1-α)C^*x,x^1/2) + √(1-q^2)( B^*|T|^2αBx,x^1/2 A|T^*|^2(1-α)A^*z,z^1/2 + D^*|S|^2αDx,x^1/2. × . C|S^*|^2(1-α)C^*z,z^1/2) + √(2q√(1-q^2))( B^*|T|^2αBx,x^1/2| A|T^*|^2(1-α)A^*x,z|^1/2. . + D^*|S|^2αDx,x^1/2| C|S^*|^2(1-α)C^*x,z|^1/2) ≤ q/2(B^*|T|^2αB+A|T^*|^2(1-α)A^*+D^*|S|^2αD+C|S^*|^2(1-α)C^*)x,x + √(1-q^2)( B^*|T|^2αBx,x^1/2 A|T^*|^2(1-α)A^*z,z^1/2 + D^*|S|^2αDx,x^1/2. × . C|S^*|^2(1-α)C^*z,z^1/2) + √(2q√(1-q^2))( B^*|T|^2αBx,x^1/2| A|T^*|^2(1-α)A^*x,z|^1/2. . + D^*|S|^2αDx,x^1/2| C|S^*|^2(1-α)C^*x,z|^1/2) ≤ q/2B^*|T|^2αB+A|T^*|^2(1-α)A^*+D^*|S|^2αD+C|S^*|^2(1-α)C^*+ (√(1-q^2)+√(2q√(1-q^2)))(√(B^*|T|^2αBA|T^*|^2(1-α)A^*)+√(D^*|S|^2αDC|S^*|^2(1-α)C^*)). The result follows by taking supremum over all such x,y ∈ℋ with x = y = 1, and x,y = q on the left-hand side. Here we mention a few particular cases of the above theorem. Consider a few particular cases of the above result: (i) If T=B=I and S=0 then ω_q(A)≤q/2AA^*+I+(√(1-q^2)+√(2q√(1-q^2)))A. Also, if A=B=I, S = 0 and α = 1/2 then we have ω_q(T) ≤ q/2|T| + |T^*|+(√(1-q^2)+√(2q√(1-q^2)))√(|T||T^*|) = q/2|T| + |T^*|+(√(1-q^2)+√(2q√(1-q^2))) T. (ii) If T=I and S=0 then ω_q(AB)≤q/2AA^*+BB^*+(√(1-q^2)+√(2q√(1-q^2)))AB. (iii) If T=I, C=B and D=A then ω_q(AB+BA)≤q/2AA^*+B^*B+A^*A+BB^*+(√(1-q^2)+√(2q√(1-q^2)))A^2B^2. Now we focus on some bounds of q-numerical radius which are not dependent on q. For the following theorem, we use a similar concept of Theorem 2 in <cit.>. If A, B, C, D, S, T ∈ℬ(ℋ) and q ∈ [0,1], then ω_q(ATB + CSD) ≤1/2( A|T^*|^2(1 - α) A^* + C |S^*|^2(1 - α) C^* + B^* |T|^2 α B + D^* |S|^2 α D ), holds for all α∈ [0,1]. Let x,y ∈ℋ with x = y =1 and x,y = q. Then from Lemma <ref> and AM-GM inequalty we have | (ATB + CSD)x,y | ≤ | TBx,A^*y | + | SDx,C^*y | ≤ |T|^2 α Bx, Bx ^1/2 |T^*|^2(1- α) A^*y, A^*y ^1/2 + |S|^2 α Dx, Dx ^1/2 |S^*|^2(1- α) C^*y, C^*y ^1/2 ≤ 1/2( |T|^2 α Bx, Bx + |T^*|^2(1- α) A^*y, A^*y ) + 1/2( |S|^2 α Dx, Dx + |S^*|^2(1- α) C^*y, C^*y ) = 1/2(A|T^*|^2(1- α) A^* + C|S^*|^2(1- α) C^*)y, y + 1/2(B^*|T|^2 α B + D^*|S|^2 αD)x, x ≤ 1/2A|T^*|^2(1 - α) A^* + C |S^*|^2(1 - α) C^* + 1/2 B^* |T|^2 α B + D^* |S|^2 α D . 
The required result follows by taking supremum over all x,y ∈ℋ with x = y =1 and x,y = q on the left-hand side. (i) In particular, let A = B = I and S = 0 then we have ω_q(T) ≤1/2( |T|^2 α + |T^*|^2(1- α)), α∈ [0,1]. By choosing α = 1/2 and using |T| = |T^*| = T, we get ω_q(T) ≤1/2( |T| + |T^*| ) = T. (ii) If T=I and S=0 then we get ω_q(AB) ≤1/2 (AA^* + B^*B). Also by putting A = I, T = A and S = 0 we have ω_q(AB) ≤1/2 ( |A^*|^2(1- α) + B^* |A|^2 α B), and by choosing α = 1/2 it follows ω_q(AB) ≤1/2 ( A + B^* |A| B). § Q-NUMERICAL RADIUS OF 2 × 2 OPERATOR MATRICES In this section, we derive a few bounds for the q-numerical range of 2 × 2 operator matrices. Let and are two complex Hilbert spaces with the inner product .,.. Then forms a Hilbert space and any operator T ∈ℬ() has an 2 × 2 matrix representation of the form T = [ A B; C D ], where A ∈ℬ(), B ∈ℬ(, ), C ∈ℬ(, ), D ∈ℬ(). As the q-numerical radius forms a weakly unitarily invariant norm, from the result mentioned in (P. 107 <cit.>), we can deduce the following inequalities ω_q ( [ A 0; 0 D ]) ≤ω_q ( [ A B; C D ]) and ω_q ( [ 0 B; C 0 ]) ≤ω_q ( [ A B; C D ]). The numerical radius of diagonal operator matrices enjoy the following equality (<cit.>) ω( [ A 0; 0 D ]) = max{ω(A), ω(D)}. A similar conclusion is not true for q-numerical radius. This is seen in the Example <ref> that follows. The following lemma on the q-numerical radius of Hermitian matrices is required for this. If T is an n × n Hermitian matrix with eigenvalues λ_1 ≥λ_2 ≥⋯≥λ_n and |q|≤ 1, then the numerical range W_q(T) equals the (closed) elliptic disc with foci qλ_1 and q λ_n and minor axis of length √(1 - |q|^2)(λ_1 - λ_n). Let q ∈ [0,1]. For any 2 × 2 matrix T=[ a 0; 0 d ] with a,d > 0 and a ≠ d, the q-numerical range of T is given by W_q(T) = { (x,y) : ( x - q/2 (a+d))^2/1/4(a-d)^2 + y^2/1/4(1 - q^2)(a-d)^2≤ 1 }. The q-numerical radius of T=[ a 0; 0 d ] is given by the following maximization problem max√(x^2+y^2), ( x - q/2 (a+d))^2/1/4(a-d)^2 + y^2/1/4(1 - q^2)(a-d)^2 = 1. Solving this we get max√(x^2 + y^2) = ( q/2(a+d) + 1/2|a-d| ) ( q/2(a+d) + 1/2|a-d|, 0 ). Hence ω_q( [ a 0; 0 d ]) =( q/2(a+d) + 1/2|a-d| ) > q max{a,d } q ≠ 1. The above equation implies that when q ≠ 1, the q-numerical range does not satisfy the relation ω_q ( [ A 0; 0 D ]) = max{ω(A)_q, ω_q(D)} similar to the relation mentioned in (<ref>). In this regard, the following result provides the upper and lower bound of the q-numerical radius. Let and are Hilbert spaces and let A ∈ℬ(), B ∈ℬ(,), C ∈ℬ(, ), D ∈ℬ() and q ∈ [0,1]. Then the following inequalities hold (i) max{ω_q(A), ω_q(D), ω_q ( [ 0 B; C 0 ])}≤ω_q ( [ A B; C D ]), (ii) ω_q ( [ A B; C D ]) ≤max{A, D} + (1 - 3q^2/4 + q )^1/2 (B + C), (iii) ω_q ( [ A B; C D ]) ≤(A^2 + B^2 + C^2 + D^2 )^1/2 + q ( max{ω(A), ω(D)} + B + C/2). (i) The case q=1 follows directly from relation (<ref>). Let q ∈ [0,1) and x_1, y_1 ∈ℋ_1 such that x_1 = y_1 = 1 with x_1, y_1 = q. Then ω_q ( [ A B; C D ]) ≥ | [ A B; C D ][ x_1; 0 ], [ y_1; 0 ]| = | Ax_1, y_1 |. Taking supremum over all such x_1 and y_1 with x_1 = y_1 = 1 and x_1, y_1 = q, it follows that ω_q ( [ A B; C D ]) ≥ω_q(A). In a similar way, it can be proved that ω_q ( [ A B; C D ]) ≥ω_q(D). Finally from the relations (<ref>), (<ref>), and (<ref>) we get max{ω_q(A), ω_q(D), ω_q ( [ 0 B; C 0 ])}≤ω_q ( [ A B; C D ]). (ii) Note that the q-numeical range forms a semi-norm and the following relation holds ω_q ( [ A B; C D ]) ≤ω_q ( [ A 0; 0 D ]) + ω_q ( [ 0 B; 0 0 ]) + ω_q ( [ 0 0; C 0 ]). 
Since [ 0 B; 0 0 ]^2 = [ 0 0; C 0 ]^2 = [ 0 0; 0 0 ], Theorem 2.5 of <cit.> and (<ref>) imply that ω_q ( [ A B; C D ]) ≤ω_q ( [ A 0; 0 D ]) + (1 - 3q^2/4 + q )^1/2 (B + C). Let [ x_1; x_2 ] , [ y_1; y_2 ]∈, with [ x_1; x_2 ] = [ y_1; y_2 ] = 1, [ x_1; x_2 ] , [ y_1; y_2 ] = q. Observe that | [ A 0; 0 D ][ x_1; x_2 ], [ y_1; y_2 ]| ≤ | Ax_1, y_1 | + | Dx_2, y_2 | ≤ Ax_1y_1 + Dx_2y_2 ≤ max{A, D}. Hence we have the following result. ω_q ( [ A 0; 0 D ]) ≤max{A, D}. The required upper bound of ω_q ( [ A B; C D ]) follows from the inequalities (<ref>) and (<ref>). (iii) To prove the last inequality let [ x_1; x_2 ] , [ y_1; y_2 ]∈, with [ x_1; x_2 ] = [ y_1; y_2 ] = 1, [ x_1; x_2 ] , [ y_1; y_2 ] = q. In this setting, we can take y_1 = qx_1 + z_1, y_2 = qx_2 + z_2 where z_1 ∈, z_2 ∈ with [ z_1; z_2 ] = 1, [ x_1; x_2 ] , [ z_1; z_2 ] = 0. We assume that x_1 = cosθ, x_2 = sinθ, z_1 = cosϕ, z_2 = sinϕ where θ, ϕ∈ [0, π/2]. Also, we use the fact that for any a,b ∈ℝ, (1) max_θ (a cosθ + b sinθ) = √(a^2 + b^2), (2) max_θ (a cos^2 θ + b sin^2 θ) = max{a,b}. In this setting we have | [ A B; C D ][ x_1; x_2 ], [ y_1; y_2 ]| ≤ | Ax_1, y_1 | + | Bx_2, y_1 | + | Cx_1, y_2 | + | Dx_2, y_2 |. The subsequent computations are mentioned below. | Ax_1, y_1 | + | Bx_2, y_1 | + | Cx_1, y_2 | + | Dx_2, y_2 | = | Ax_1, qx_1 + z_1 | + | Bx_2, qx_1 + z_1 | + | Cx_1, qx_2 + z_2 | + | Dx_2, qx_2 + z_2 | ≤ q (| Ax_1, x_1 | + | Bx_2, x_1 | + | Cx_1, x_2 | + | Dx_2, x_2 |) + (| Ax_1, z_1 | + | Bx_2, z_1 | + | Cx_1, z_2 | + | Dx_2, z_2 |) ≤ q ( ω(A) cos^2 θ + ω(D) sin^2 θ + 1/2(B+C) sin 2θ) + ( Acosθcosϕ. . + Bsinθcosϕ + Ccosθsinϕ + Dsinθsinϕ) ≤ q ( max{ω(A), ω(D)} + B + C/2) + ( A^2 + B^2 + C^2 + D^2)^1/2. The required result follows from the above inequality and relation (<ref>). Below are a few highlights of the aforementioned theorem. (i) If B=C=0 in Theorem <ref>(i) and (ii), it follows max{ω_q(A), ω_q(D)}≤ω_q ( [ A 0; 0 D ]) ≤max{A, D}. The above relation implies that max{ω_q(A), ω_q(D)} actually provides a lower bound of ω_q ( [ A 0; 0 D ]) where it is proved that q-numerical radius fails to satisfy an analogous relation as of equation (<ref>). (ii) If q = 1 then first and third relations of Theorem <ref> reduces to existing lower and upper bounds of ω( [ A B; C D ]) mentioned in <cit.>. (iii) The upper bounds obtained in (ii) and (iii) of Theorem <ref> are noncomparable. Let T=[ a b; c d ]=[ 1.5442 + 1.4193i 0.0859 + 0.2916i; -1.4916 + 0.1978i -0.7423 + 1.5877i ], randomly generated by MATLAB command . Figure <ref>, demonstrates the comparison of the upper bounds (ii) and (iii) of Theorem <ref> for the matrix T. § DISCLOSURE STATEMENT No potential conflict of interest is reported by the authors. tfnlm
http://arxiv.org/abs/2306.06203v1
20230609191051
FLSL: Feature-level Self-supervised Learning
[ "Qing Su", "Anton Netchaev", "Hai Li", "Shihao Ji" ]
cs.LG
[ "cs.LG", "cs.CV" ]
=3pt =2pt =3pt =2pt Spectrahedral Geometry of Graph Sparsifiers [ July 31, 2023 =========================================== Current self-supervised learning (SSL) methods (, SimCLR, DINO, VICReg, MOCOv3) target primarily on representations at instance level and do not generalize well to dense prediction tasks, such as object detection and segmentation. Towards aligning SSL with dense predictions, this paper demonstrates for the first time the underlying mean-shift clustering process of Vision Transformers (ViT), which aligns well with natural image semantics (, a world of objects and stuffs). By employing transformer for joint embedding and clustering, we propose a two-level feature clustering SSL method, coined Feature-Level Self-supervised Learning (FLSL). We present the formal definition of the FLSL problem and construct the objectives from the mean-shift and k-means perspectives. We show that FLSL promotes remarkable semantic cluster representations and learns an embedding scheme amenable to intra-view and inter-view feature clustering. Experiments show that FLSL yields significant improvements in dense prediction tasks, achieving 44.9 (+2.8)% AP and 46.5% AP in object detection, as well as 40.8 (+2.3)% AP and 42.1% AP in instance segmentation on MS-COCO, using Mask R-CNN with ViT-S/16 and ViT-S/8 as backbone, respectively. FLSL consistently outperforms existing SSL methods across additional benchmarks, including UAV object detection on UAVDT, and video instance segmentation on DAVIS 2017. We conclude by presenting visualization and various ablation studies to better understand the success of FLSL. § INTRODUCTION Following its success in natural language processing (NLP) <cit.>, self-supervised learning (SSL) with transformer <cit.> has emerged as a highly effective strategy and a popular model choice over the CNN-based counterparts in vision tasks. The remarkable performance achieved by SSL have been demonstrated by SimCLR <cit.>, MOCOv3 <cit.>, DINO <cit.>, VICReg <cit.>, SwAV <cit.>, BYOL <cit.>, and among others. Without relying on manual supervision, a successful paradigm of SSL promotes semantic representations conducive to the downstream tasks, , classification, detection and segmentation. However, most existing SSL methods operate at the instance-level, where an encoder is trained to maximize the agreement of the representations of multiple augmented views of an image. Though demonstrating strong performance on the classification tasks <cit.>, the instance-level SSL is inherently misaligned with the dense prediction tasks, such as object detection, where the lower level semantic information plays a bigger role than the instance-level semantic information. This leads to inferior transferability to those dense prediction tasks. Recent attempts to bridge the semantic gap are mainly based on region <cit.>, patch <cit.>, or pixel (, dense feature) matching tasks <cit.> with optional instance-level objectives. However, learning of distinct representation for each image patch or region still mismatches the natural semantics within an image (referred to as local semantics), where features of the same semantics should be highly correlated other than being distinct. Semantics can range from features of high similarity, features of the same object, to more complex semantic structures. Methods such as SoCo <cit.> and ORL <cit.> leverage the off-the-shelf selective search <cit.> to impose the semantic constraint to the contrastive learning pipeline. 
Nonetheless, the inclusion of a non-trainable region proposal module in both methods impedes the learning of distinct representations of RoIs among each other and from the rest of the image, which is the desired property of locally semantic representations for object detection. Existing SSL methods targeting dense prediction primarily focus on learning globally semantic representations of image sub-regions as RoIs, patches, or pixels with limited consideration for the alignment of those representations with local semantics. This observation leads us to ask the following question: Can we learn a representation that is both locally and globally semantic for a group of features (, representing an object) in an end-to-end trainable SSL approach? To this end, we propose the Feature Level Self-supervised Learning (FLSL) that leverages the mean-shift clustering process inherent in the transformer to extract the representatives of feature clusters as representations and incorporates k-means based SSL approach to induce the learned representations both locally and globally semantic. Figure. <ref> illustrates the main idea of FLSL with details to be discussed in Sec. <ref>. Contributions This paper takes a step forward to bridge the gap between the current SSL methods and downstream dense prediction tasks. Our contributions are summarized as follows: * We demonstrate for the first time the connection between the attention mechanism and mean-shift clustering, and reinterpret vision transformer from the perspective of mean-shift. * By employing transformer for joint embedding and feature clustering, we propose FLSL, an end-to-end trainable SSL method that promotes the representations of feature clusters to be semantic at two levels: (i) intra-view clusters within an image, and (ii) inter-view clusters over an entire dataset. * The derivation and construction of the FLSL objectves is rooted in mean-shift and the non-empty k-means clustering. The first-level semantic representation is encouraged by optimizing the intra-cluster feature affinity with a self-attention layer, while the second-level semantic representation is encouraged through the non-empty k-means clustering with positive samples retrieved through a cross-attention layer. * We validate the synergy between FLSL and ViT, and show significant improvement in transferability of learnt features to dense prediction tasks, including object detection and segmentation. FLSL-pretrained ViT on ImageNet-1k (IN1k) demonstrates superior performance compared to the state-of-the-art ADCLR-IN1k <cit.> and MAE <cit.> pretrained counterparts. Moreover, it consistently outperforms existing SSL methods across additional benchmarks, including UAV object detection on UAVDT, and video instance segmentation on DAVIS 2017. § RELATED WORK SSL for dense prediction Recent attempts to bridge the gap between common SSL and dense prediction tasks focus primarily on sub-region matching tricks. For example, DenseCL <cit.> applies contrastive learning on pairs of patches with highest similarity. However, the patch-matching trick leads to distinct representations with low correlation among patches, which is ill-posed for the semantics of a natural image. Along with the instance-level objective, PixPro <cit.> and LC-loss <cit.> factor in agreement between positive pixel pairs which are assigned through thresholded-distance in PixPro and position projection in LC-loss. 
DetCo <cit.> further incorporates instance-patch level contrastive losses along with instance level and patch level losses. To learn representations at object level, SoCo <cit.> and ORL <cit.> employ selective search to crop out RoIs. ORL further enables inter-object contrastive learning via top-ranked RoI pair retrieval. In contrast, SCRL <cit.> relaxes the semantic constraint using random crops within the intersection area of augmented views as RoIs. As discussed in Sec. <ref>, all of these methods focus on learning globally semantic representations for image sub-regions, and they do not touch on local semantics that are necessary for dense prediction. Self-supervised vision transformer In pioneering works, self-supervised training of transformer for vision tasks generally follow the paradigm of masked autoencoder in NLP <cit.>. For instance, iGPT <cit.> features reconstruction of masked pixels as one of its objectives. In general, SSL for ViT can be classified into two categories: the joint-embedding strategy epitomized by DINO <cit.> and MoCov3 <cit.>, and the generative approaches represented by MAE <cit.>. The crossover of the two strategies is demonstrated by iBOT <cit.>. Regarding dense prediction, EsViT <cit.>, designed for Swin Transformer <cit.>, follows the region-matching strategy and applies the DINO loss to the probabilities of positive pairs determined by highest similarity. Instead of finding the best-matching patch, SelfPatch <cit.> considers the direct neighbors as its positive patches. However, with limited semantics contained in a fixed small area (, 8-connected neighbors), the method still suffers from semantic misalignment. To address the sub-region mismatch issue of DINO, ADCLR <cit.> constructs query tokens from random sub-regions and treats them as extra class tokens in the DINO objective. This promotes region-aware semantic representations that better aligned with the local semantics, and leads to substantial improvement in dense prediction. § INTUITION: THE CONNECTION BETWEEN MEAN-SHIFT AND ATTENTION As discussed in Sec. <ref>, the misalignment between the current SSL methods and dense prediction tasks lies in the clustering bias at the semantic level. Instead of setting a fixed granularity, such as instance-level or fix-sized patch-level, a desired semantic representation scheme should be able to represent from a single patch to a cluster of patches or even an entire image. The representation space of an image can be considered as an empirical probability density function of features, and the modes (local maxima) therefore can be regarded as the representatives of clusters <cit.>. These modes can then be readily retrieved via clustering algorithms, particularly, non-parametric kernel density estimation (KDE) methods <cit.> when the image composition (, number of objects and stuffs) is unknown. One typical KDE-based method is the mean-shift clustering <cit.>. In the following, we first give an overview of self-attention (SA) mechanism of transformer and the mean-shift algorithm. We then show that the mean-shift update rule conforms to the SA mechanism of transformer. 
Attention mechanism First introduced to recurrent neural networks as a context extractor for machine translation <cit.>, attention has premised major breakthroughs in NLP with the emergence of transformer that relies solely on the scaled dot-product attention mechanism <cit.> given by attention(Q,K,V)= V softmax(Q^⊤K/√(D_qk)), where Q, K and V denote query, key and value matrices packing together sets of query, key and value vectors, respectively, D_qk denotes the dimension of query and key vectors, and softmax(Z)_ij=exp(Z_ij)/∑_kexp(Z_ik). As a special case of attention, SA matches a sequence Z with itself to extract the semantic dependencies among its components, , Q=W_QZ, K=W_KZ, V=W_VZ, where the projections W_'s are the parameter matrices. Mean-shift clustering and attention Given N data points {z_i}_i=1^N⊂ IR^D, the kernel density estimate of p(z) with kernel K(t) can be defined as p(z)=∑^[-2pt]66N_[2pt]66i=1p(z_i)p(z|z_i) =∑^[-2pt]66N_[2pt]66i=1π_i1/T_iK(d(z, z_i; Σ_i)), where p(z_i)=π_i is the mixing proportion of point z_i, ∑_i=1^[-2pt]66Nπ_i=1, T_i denotes the normalization term dependent only on the covariance matrix Σ_i, , for a Gaussian kernel T_i=|2πΣ_i|^1/2 and d(z, z_i; Σ_i)=( z-z_i)^TΣ^-1_i( z - z_i) is the Mahalanobis distance. Finding the modes of p(z) is to seek stationary points by equating the gradient of p(z) to zero, ∂ p(z)∂z =0, which arrives at ẑ = f(z) = ∑^N_i=1p(z_i|z)z_i, with p(z_i|z) = π_i1/T_iK'(d(z,z_i; Σ_i))Σ^-1_i/∑^N_j=1π_j1/T_jK'(d(z,z_j; Σ_j))Σ^-1_j, where K'=dK/dt. The above fixed-point iterative scheme is the mean-shift algorithm. Practically, on ℓ_2-normalized vectors, for a homoscedastic Gaussian kernel with constant mixing proportion and isotropic covariances (, π_i = 1/N, 1/σ^2 = τ), Eq. <ref> further simplifies to ẑ = meanshift(z, τ) = ∑_i=1^Nexp(τz^⊤z_i)/∑_j=1^N exp(τz^⊤z_j)z_i ⟹Ẑ = Z softmax(τZ^⊤Z), which conforms to the attention function (Eq. <ref>) with identity projection matrices, , W_Q=W_K= W_V=I, and τ=1/√(D_qk). Conversely, the conventional SA mechanism can be viewed as a generalized mean-shift: Ẑ= SA(Z)=W_V Z softmax(1/√(D_qk)Z^⊤(W^⊤_QW_K)Z), with learnable distance measure Z^⊤(W^⊤_QW_K)Z and projection W_V. Unlike GMM and k-means, mean-shift is capable of modeling clusters of complex non-convex shape with cluster number automatically determined by local scale (proscribed by covariance) <cit.>. Hence, it is well-aligned with the semantics of natural images. ViT from the perspective of mean-shift In ViT <cit.>, images are initially tokenized and then processed through a sequence of transformer layers. Each transformer layer is comprised of a skip-connected multi-head SA (MHSA) and a skip-connected MLP. MHSA can be constructed from Eq. <ref> with m projections in parallel, , [W_Q^h, W_K^h, W^h_V], h =1,⋯, m. The m returned modes are then concatenated along channel dimension and reprojected to a single return through Ẑ = MHSA(Z) = W_O concat([[Ẑ^1], …, [Ẑ^m]]) + b_O. Note that the ℓ_2 normalization assumed in Eq. <ref> is moderately relaxed through layer normalization (LN) to incorporate the extra degree of freedom in the vector magnitude. With skip connection and the one-step mean-shift update described in Eqs. <ref>, <ref>, a transformer layer essentially finds the local centroid of each query z and drives them closer to the (projected) local centroids through z = ẑ + z, followed by an MLP processing step with skip connection. 
ViT iterates the process multiple times (, 12 or 24 layers) to capture the contextual and semantic information of an image. The clustering process above concords with one inductive bias of the attention mechanism represented by the sparse variable creation <cit.>, , an SA head learns a sparse function that only depends on a small subset of input coordinates. In the context of clustering, the subset of input corresponds to the modes of density p(z) of features. As the high-level semantic information is typically spatially sparse (, the feature vector for a RoI in object detection, a single label for a segment in segmentation, or a scene-graph, etc.), it is natural to leverage transformer for joint embedding and clustering to learn semantic representations at the feature level. § METHODOLOGY FLSL features a two-level clustering process (Figure <ref>), which is formally described as follows. Given a dataset X (e.g., a set of images), FLSL learns an embedding scheme f_θ:X→Z, ∀X∈𝒳, Z= f_θ(X). Z can be formulated as Z= ⋃_c^[-2pt]66N_cz^c, where z^c is a subset of Z forming a cluster, N_c is the number of clusters determined by a clustering scheme, , mean-shift, and N_c≤ |Z|. FLSL aims to encourage the following properties: (i) Intra-view: embeddings within a cluster, z∈z^c, are close to the cluster representative (mode) ẑ^c and far away from the embeddings of other clusters; (ii) Inter-view: the cluster representatives (modes) ẑs of the positive regions in Xs over 𝒳 are pushed closer to each other. The FLSL-extracted features should be well-aligned with dense prediction tasks, such as object detection, where the representation of an object or stuff (, cluster of features) are desired to be (i) well-separated from others in an image (locally semantic), and (ii) close to its positive samples in the dataset (globally semantic). In this section, we present the objectives for both levels of clustering, which are then combined to form the final objective. §.§ Intra-view clustering with mean-shift As discussed in Sec. <ref>, local semantics of an image can be captured by non-parametric clustering such as mean-shift. Hence, with mean-shift update rule Eq. <ref>, it can be proved that the posterior of z_j given point z_i, p(z_j|z_i) = [(τz_i^⊤Z)]_j, should satisfy: p(z_j|z_i)≥1((∑_k∈ c_i1.4e^(z_i^⊤z_k - z_i^⊤z_j)τ)+(N-|c_i|)1.4e^-Δ_ijτ), ∀ j∈ c_i where N=|Z|, c_i is the set of indices of points in the same cluster that point z_i belongs to, and Δ_ij is the degree of separability defined as Δ_ij =z_i^⊤z_j -max_k∈[N]∖ c_iz_i^⊤z_k, such that larger Δ_ij indicates better separation. For locally semantic embeddings, we desire the in-cluster points to be close to each other, or equivalently, to be close to its cluster representative, and stay far away from the out-cluster points, which indicates a large Δ value. As Δ becomes sufficiently large, the RHS of Eq. <ref> can be approximated as 1/∑_k∈ c_iexp((z_i^⊤z_k - z_i^⊤z_j)τ), and for out-cluster points, the posterior p(z_j∉ c_i|z_i) approaches to 0. This results in a semantics-aligned cluster representative via mean-shift – a weighted sum of only in-cluster points. Assuming the out-cluster points are fixed, we can promote the above property by simply driving the returned mode ẑ_i and query point z_i closer to each other, which leads to the intra-view clustering objective: min_f_θ∑_[1pt]66i=1^[-2pt]66Nz_i - ẑ_i^2_2. Proof of Eq. <ref> and detailed explanation is provided in Appendix A. 
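Before moving to the inter-view objective, the mean-shift/self-attention correspondence of Eq. (<ref>) and the intra-view loss above can be written out in a few lines of code. The sketch below is purely illustrative and not the authors' released implementation: it assumes PyTorch, stores tokens in rows rather than columns, and the function names, shapes, and temperatures are placeholders.

```python
import torch
import torch.nn.functional as F

def meanshift_step(Z, tau=1.0):
    """One non-parametric mean-shift update on l2-normalised tokens (rows of Z).

    With identity projections this is exactly single-head self-attention:
    row i of softmax(tau * Z Z^T) holds the posterior p(z_j | z_i), and the
    returned row i is the local centroid (mode estimate) for token i.
    """
    Z = F.normalize(Z, dim=-1)                      # the derivation assumes unit-norm features
    posterior = torch.softmax(tau * Z @ Z.T, dim=-1)
    return posterior @ Z

def intra_view_loss(Z, tau=1.0):
    """First-level FLSL term: pull each token towards its mean-shift representative."""
    Z = F.normalize(Z, dim=-1)
    Z_hat = meanshift_step(Z, tau)
    return (Z - Z_hat).pow(2).sum(dim=-1).mean()

# toy usage: a random 14x14 token map of a ViT-S/16-sized feature space;
# iterating the update moves every token towards the mode of its local neighbourhood,
# mimicking what the stacked, skip-connected ViT layers approximate implicitly
Z = torch.randn(196, 384)
for _ in range(3):
    Z = meanshift_step(Z, tau=5.0)
loss = intra_view_loss(Z, tau=5.0)
```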
§.§ Inter-view clustering with k-means To learn globally semantic representations, similar to the existing SSL methods, we formulate the problem as a variant of k-means clustering. In the space of cluster representatives ẑs extracted from an entire dataset, the k-means objective with generalized non-empty cluster constraint <cit.> can be expressed as min_M1/N'∑_[1pt]66ẑ∈Ẑ∑_[1pt]66k=1^[-1pt]66Kδ_kk(ẑ)ẑ-μ_k(ẑ)^2_2 + D_KL(p̅π), where M is a set of K centroids {μ_1,⋯,μ_K}, Ẑ is a set of cluster representatives over the entire dataset, N' = |Ẑ|, k(ẑ)=min_kμ_k - ẑ_2, δ_ij is the Kronecker delta, with δ_ij=1 iff i=j, and 0 otherwise, [p]_[i] = 1/N'∑_ẑδ_ik(ẑ), and π is the prior, , a vector of the preset proportion for each cluster. With positive pairs (ẑ^+, ẑ) via data augmentation, the objective can then be constructed as k-means clustering with an extra separation margin for ẑ^+: min_M1/N'∑_[1pt]66ẑ∈Ẑ(∑_[1pt]66k=1^[-1pt]66Kδ_kk(ẑ)ẑ-μ_k(ẑ)^2_2+(1-δ_k(ẑ^+)k(ẑ))ẑ^+ - μ_k(ẑ)^2_2) +D_KL(p̅π). A common approach to tackle the optimization problem above is to relax the hard cluster assignment constraint δ_ij∈{0,1} to [0,1] via a classification head with a small temperature (≪ 1) to ẑ. This relaxes Eq. <ref> to a more general Gaussian Mixture Model (GMM) formulation (cf. Appendix B). By rewriting 1-δ_k(z^+)k(z) in Eq. <ref> as ∑_k=1^[-1pt]66Kδ_kk(z^+)-δ_kk(z^+)δ_kk(z), with the relaxed hard cluster assignment via a classification head, the objective for the inter-view clustering can be expressed by min_M1/N'∑_[1pt]66ẑ∈ẐH(p(ẑ^+), p(ẑ)) + D_KL(p̅π) , where p(x)=(τ'W_C^⊤x), τ'≪1, W_C is a matrix of K orderly concatenated centroids, and H(x,y)=-xlog y (cf. Appendix C). Positive sample retrieval Unlike the common instance-level SSL, the positive samples in FLSL are amorphous clusters of features, (z^+, z), corresponding to the same local semantics in two views. In contrast to previous works assigning the best-matching patch <cit.> or thresholded vicinity <cit.>, we leverage the cluster assignment mechanism inherent in mean-shift, where a query z is automatically assigned to a cluster represented by the return ẑ. For query from another view, the mean-shift naturally manifests as a cross-attention (CA), ẑ^+ = Z^+ softmax(τz^⊤Z^+), For locally and globally semantic representations, the returned representative ẑ^+ of the cluster from the augmented view Z^+ should agree with representative ẑ of the cluster containing the query z. The process can be viewed as data retrieval in dense associative memory recognized in  <cit.> §.§ FLSL Objective By combining the objectives from the two clustering levels, we arrive at the objective of FLSL: min1/N'∑_Z∈Z∑_z∈Zυz-ẑ^2_2+η∑_z∈ZH(p(ẑ^+), p(ẑ)) + γ D_KL(p̅π), with ẑ = SA(z, Z, Z), ẑ^+ = CA(z, Z^+, Z^+), where υ, η and γ are the hyperparameters controlling the importance of each term, and both SA and CA are non-parametric mean shift. Figure <ref> illustrates the FLSL framework. We follow the common joint-embedding strategy of SSL, except that we simultaneously maximize the agreement between the probability vectors of the positive cluster representatives (p(ẑ^+), p(ẑ)) and the agreement between the cluster members and their representative (z, ẑ). The KL-divergence term in Eq. <ref> serves as a volume maximization regularizer. In our experiments, we use a uniform prior π = 1/K. 
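A compact way to read Eq. (<ref>) is as a per-token training loss combining the three terms. The PyTorch-style sketch below is a simplified illustration under our own assumptions — a single pair of views, a shared centroid matrix W_c for both branches, and fixed temperatures — whereas the actual pipeline additionally uses an EMA teacher with scheduled temperatures and random 2×2 pooling of the queries (see the experimental section and the appendices).

```python
import math
import torch
import torch.nn.functional as F

def flsl_loss(Zs, Zt, W_c, tau=1.0, tau_s=10.0, tau_t=25.0,
              upsilon=0.3, eta=1.0, gamma=5.0):
    """Illustrative single-pair version of the FLSL objective.

    Zs : (N, D) tokens of one view (student branch); Zt : (M, D) tokens of the other view
    W_c: (K, D) centroid matrix acting as the classification head
    tau / tau_s / tau_t : inverse temperatures of the mean-shift step and the two heads
    """
    Zs, Zt = F.normalize(Zs, dim=-1), F.normalize(Zt, dim=-1)

    # (1) intra-view: each query should agree with its mean-shift representative, hat z = SA(z, Z, Z)
    z_hat = torch.softmax(tau * Zs @ Zs.T, dim=-1) @ Zs
    loss_intra = (Zs - z_hat).pow(2).sum(-1).mean()

    # positive-sample retrieval from the other view via cross-attention, hat z+ = CA(z, Z+, Z+)
    z_hat_pos = torch.softmax(tau * Zs @ Zt.T, dim=-1) @ Zt

    # (2) inter-view: cross-entropy between the two cluster-assignment distributions
    p_t = torch.softmax(tau_t * z_hat_pos @ W_c.T, dim=-1).detach()   # target branch
    log_p_s = torch.log_softmax(tau_s * z_hat @ W_c.T, dim=-1)
    loss_inter = -(p_t * log_p_s).sum(-1).mean()

    # (3) volume maximization: KL between the mean assignment and the uniform prior pi = 1/K
    p_bar = torch.softmax(tau_s * z_hat @ W_c.T, dim=-1).mean(0)
    loss_kl = (p_bar * p_bar.clamp_min(1e-12).log()).sum() + math.log(W_c.shape[0])

    return upsilon * loss_intra + eta * loss_inter + gamma * loss_kl
```

Detaching the target-branch probabilities mirrors the stop-gradient used in joint-embedding SSL, while the KL term keeps the average assignment close to the uniform prior and acts as the volume-maximization regularizer.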
Experiments show that the FLSL objective effectively promote locally and globally semantic representations, resulting in significantly improved transferability of learnt features to object detection and segmentation. Note that FLSL does not involve a class token in its objective (Eq. <ref>) since it is a self-supervised learning method for dense prediction tasks. § EXPERIMENTS In this section, we evaluate the performance of FLSL by conducting extensive experiments. Specifically, we compare FLSL to existing SSL approaches on multiple dense prediction benchmarks: (i) MS-COCO <cit.> object detection and instance segmentation, (ii) UAVDT <cit.> object detection from UAV platforms, and (iii) DAVIS video instance segmentation <cit.>. Moreover, we investigate the properties of FLSL features in terms of semantic alignment and feature separability in the embedding space. Detailed experimental setups are provided in the respective subsections and supplementary materials. All our experiments are performed on Nvidia RTX A6000. Our source code can be found at <https://github.com/ISL-CV/FLSL.git>. Implementation details The implementation of ViT in our experiments mostly follows DeiT <cit.> excluding the token. The configuration of the ViT variants utilized in this paper is summarized in Appendix D. The coefficients of Eq. <ref> in our experiments are υ=0.3, η=1 and γ=5 unless stated otherwise. We set the number of centroids K=4,096, and assume a uniform prior, , π_k = 1/K, ∀ k. Models are pretrained on ImageNet-1k <cit.> dataset using AdamW optimizer <cit.> with a batch size of 512. We follow the data augmentation from BYOL <cit.> (, color jittering of brightness, contrast, saturation and hue, Gaussian blur and solarization) with preceding random crops and resizing (to 224×224) and make them asymmetric. Contrasting among dense features can be computationally expensive. Therefore, we apply a random pooling in a 2×2 grid to the queries. All ViT models are pretrained for 300 epochs as in most baselines for a fair comparison. FLSL pseudo-code, complete training details, and settings of augmentation pipeline are provided in Appendix D. Baselines We compare FLSL with various existing SSL approaches that are based on the ResNet <cit.> and ViT <cit.> architectures: (a) self-supervised ResNet: MoCo-v2 <cit.>, DetCo <cit.>, DenseCL <cit.>, BYOL <cit.>, and SCRL <cit.>; and (b) self-supervised ViT: MoCo-v3 <cit.>, MoBY <cit.>, DINO <cit.>, MAE <cit.>, SelfPatch <cit.>, and ADCLR <cit.>. Protocol for hyperparameter tuning Standard instance-level SSL evaluation protocols typically utilize one of the two approaches: employing a k-NN classifier or training a linear classifier on fixed features. Since FLSL learns dense semantic representations rather than a single instance-level representation, both standard evaluation protocols are not suitable for evaluating FLSL in training. Moreover, fine-tuning on a downstream dense prediction tasks can be computationally expensive due to complex prediction heads, and may introduce task-specific biases during hyperparameter tuning. Therefore, we design a bbox-aligned k-NN classifier modified from <cit.> to evaluate the feature quality directly without additional network tuning. Here is an overview of the method. Features of the training data are first extracted with a fixed model. These features are then aligned with their corresponding bounding boxes provided by ILSVRC <cit.>. 
For each image, a certain number of representative features (, 9) are selected by a partition criterion and stored in memory. The k-NN classifier matches each selected features to its k-nearest stored features, which collectively vote for its label. A feature is considered successfully classified if any of the representative features match its class. This protocol is employed for hyperparameter tuning and ablation study of the FLSL pipeline. Appendix E provides further details on the choice of k, implementation specifics and evaluation results. §.§ MS-COCO Object Detection & Segmentation We adopt Mask R-CNN detection framework by incorporating three variants of ViT: (i) ViT-S/16 with FPN <cit.>, (ii) ViT-S/8 with FPN, and (iii) ViT-B/16 with simple feature pyramid (ViTDet) <cit.>. Models of (i) and (ii) are fine-tuned following the multi-scale training <cit.> under the standard 1× schedule for a fair comparison. For the model of (iii), we follow the training recipe of <cit.> and fine-tune the model for 100 epochs. Results. Table <ref> reports the detection and segmentation performance of ViT-S/16 and ViT-S/8 with Mask R-CNN <cit.> on COCO. Specifically, FLSL with ViT-S/16 outperforms ADCLR <cit.> by +0.6% and +1.1%, and substantially outperforms DINO+SelfPatch <cit.> by +2.8% and +2.4% on detection (AP^bbox) and segmentation (AP^mk), respectively. Both baseline methods feature patch-level contrastive learning. Unlike SelfPatch contrasting between patches within the adjacent neighborhood and ADCLR contrasting via learned queries of random crops, FLSL contrasts the representatives (modes) of semantic cluster of features, which aligns closer with the downstream tasks and thus leads to superior performance. Notably, FLSL with ViT-S/8 further improves the performance by a large margin of +4.4% in AP^bbox and +3.6% AP^mk over SelfPatch. Table <ref> summarizes the results of ViTDet. FLSL shows large performance gains over the DINO baseline by +4.2% AP^bbox and +3.3% AP^mk. FLSL also outperforms the SOTA generative approach, MAE, by +1.7% and +1.4% in the two tasks, respectively. §.§ Small Object Detection: UAVDT To assess the transferability of FLSL beyond the datasets of common images like COCO, we further investigate its performance on a UAV benchmark, UAVDT <cit.>, which exhibits significant domain shifts from common images (, images captured by ground-level cameras). We utilize Faster R-CNN framework <cit.> with the same ViT variants used in the COCO experiments and follow the training settings outlined in ClusDet <cit.>. All ViT-backboned models are trained with 1× schedule. Result Table <ref> presents the performance of ViT-S/16, ViT-S/8, and ViT-B/16 with Faster R-CNN for detection tasks on UAVDT under different pretrain schemes. We utilize the official evaluation method in <cit.>, which calculates the class-agnostic VOC AP exclusive of the predictions that falls in the ignored areas. FLSL consistently outperforms DINO (a typical instance-level SSL for ViT) across all three ViT variants by a significant margin. With smaller objects and an imbalanced foreground-background ratio, the significance of local semantics becomes evident. Models require local context to discover small objects and make accurate predictions rather than relying solely on the global semantics of the entire image. This situation aligns well with the strengths of FLSL. 
Figure: (a) visualization of the top-10% patches obtained by thresholding the self-attention maps of query patches (top) in the last layer of ViT-S/16 trained with FLSL (middle) and with DINO (bottom); FLSL encourages the model to learn semantic correlations among patches. (b) visualization of the separability of the patch representations of an image throughout the transformer (ViT-S/16), shown for the input and layers l=0, 4, 8, and 12.
§.§ DAVIS Segmentation
To further assess the quality of frozen features learned by FLSL, we evaluate FLSL-pretrained ViT models on DAVIS2017 <cit.>, following the evaluation protocol in <cit.>, which uses fixed representations with no extra training. Results Table <ref> shows that FLSL consistently outperforms DINO across all ViT variants in our experiments. The protocol evaluates the quality of learned dense features by segmenting scenes with k-nearest neighbors (k=5) within a fixed window (12×12) between consecutive frames. This requires dense features to be locally semantic, i.e., features corresponding to the same semantics should be highly correlated. The improved performance therefore confirms that FLSL encourages the model to extract locally semantic representations.
§.§ Alignment with Image Semantics
To show that FLSL is better aligned with the semantic layout of an image than common SSL methods, Figure <ref> compares the self-attention maps from the last layer of a ViT-S/16 trained with FLSL to those of DINO. The query tokens are the patches in the last ViT layer. The visualizations are obtained with 224^2 images. The attention segmentation is obtained by thresholding the self-attention map to keep the top-10% of the mass. As shown in the middle and bottom rows of Figure <ref>(a), DINO promotes object-centered attention (i.e., class-related content dominates), whereas FLSL encourages attention to the regions of high semantic correlation with the query and results in masks consistent with the objects/stuff.
§.§ Feature Distribution and Separability
We demonstrate the qualitative results by visualizing the aggregated attention score (AAS) and the feature distribution in the embedding space through t-SNE <cit.> in Figure <ref> and Figure <ref>(b), respectively. To generate the AAS map, we sum up all the self-attention maps, normalize the resulting map by its maximum score, and visualize it as a thermal image, i.e., the brighter the pixel, the higher the score. For a semantically well-separated image, each patch attends only to the patches of its own semantic region, e.g., a patch of an object has high attention scores only with the patches of that object and low scores with the rest. This yields an image whose brightness partitions are proportional to the area of each region, i.e., the larger the object/stuff, the brighter the color. As shown in Figure <ref>, as the layer goes deeper, the brightness partition of the AAS becomes more consistent with the objects and stuff in the images (e.g., person, giraffes, motorcycles, greens, wall, and ground), which indicates the desired separation of the learned features. This is also reflected in the t-SNE visualization of the embeddings in Figure <ref>(b), where the representations become more clustered and separated as the attention layer goes deeper.
§.§ Ablation Study
Due to limited space, we present two major ablation studies in this section to help understand the effectiveness of FLSL. The model considered for this entire study is ViT-S trained for 100 epochs.
We refer the reader to Appendix G for the complete work. Impact of coefficients in the FLSL objective The FLSL objective (Eq. <ref>) contains three components: (1) similarity between ℓ_2-normalized z (features) and ẑ (modes), (2) cross-entropy of the probabilities of an augmented pair H(p(ẑ^+), p(ẑ)), and (3) the volume maximization regularizor D_KL(p̅π). It is computationally expensive to optimally determine the values of more than two coefficients by performing grid search, especially when the ratios among them are large. We tackle this problem by first fixing η = 1 and setting γ = 1 along with Sinkhorn normalization <cit.> to perform a grid search on the value of υ with the empirical base condition υ≤1 and γ≥1 <cit.>. With the fixed υ, we then perform another grid search on γ without Sinkhorn normalization. We implement Sinkhorn normalization as the softmax operation along the batch dimension. Table <ref> summerizes the score of bbox-aligned k-NN evaluation using different coefficient settings. Impact of number of centroids K FLSL is formulated as an explicit clustering problem, with the output dimension of the last fully-connected layer equal to the number of centroids K. Compared to its instance-level counterpart DINO <cit.>, FLSL enjoys a smaller output dimension (shown in Table <ref>). This is because images have higher feature variance compared to feature clusters. For example, an image in ImageNet may contain diverse content from different categories, requiring a large number of centroids to cover the distribution. In contrast, a semantic cluster contains highly correlated features, such as similar textures or objects from the same category, thus requiring fewer centroids. Experimentally, we find that a large number of centroids benefits performance, but is detrimental and costly when being too large. We pick K=4,096 for all our experiments as it strikes a good balance between performance and cost-effectiveness. Other ablations including the impact of batch size and random pooling window size are relegated to Appendix. § CONCLUSIONS This paper proposes FLSL, a feature-level self-supervised learning method that bridges the gap between the current SSL methods and downstream dense prediction tasks. We demonstrate for the first time the underlying mean-shift clustering process of ViT, which aligns well with natural image semantics. Facilitated by ViT for joint embedding and feature clustering, FLSL performs a two-level clustering: (i) intra-view clustering to extract the representatives for clusters of features within an image, and (ii) inter-view clustering to encourage the representatives to be globally semantic over the entire dataset. FLSL achieves a significant improvement over the SOTAs in the dense prediction tasks, including object detection and instance segmentation. Limitations and broader impacts FLSL does not have any significant limitations other than the method is more complex (due to its two-level clustering) than other SSL methods, and it currently only fits for ViT-based models on dense prediction tasks. Exploring ways to extend FLSL for tasks that necessitate a global representation while retaining its existing properties could be a potential future work. As far as we can foresee, there is no negative societal impact. § ACKNOWLEDGMENT This research was sponsored by the Army Research Laboratory under Cooperative Agreement #W911NF-22-2-0025. 
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. plain FLSL: Feature-level Self-supervised Learning Supplementary Materials § INTRA-VEIW CLUSTERING WITH MEAN-SHIFT An image can be represented as an empirical probability density function that comprises amorphous clusters of features. Given a dense representation of an image Z = {z_i}_i=1^N and the mean-shift clustering scheme, the posterior of z_j given z_i indicates the probability of feature z_i being assigned to the cluster of z_j, which is defined as follows: p(z_j|z_i) = [ softmax(τz_i^⊤Z)]_j = .e^τz_i^⊤z_j/(∑_k∈ c_ie^τz_i^⊤z_k + ∑_k∈[N]∖ c_ie^τz_i^⊤z_k). = . 1/ ((∑_k∈ c_ie^-(z_i^⊤z_j - z_i^⊤z_k)τ)+∑_k∈[N]∖ c_ie^-(z_i^⊤z_j - z_i^⊤z_k)τ). ≥. 1/ ((∑_k∈ c_ie^-(z_i^⊤z_j - z_i^⊤z_k)τ)+(N-|c_i|)e^-Δ_ijτ)., where τ is the inverse temperature, c_i is the set of indices of points contained in the cluster of z_i, [N]={1,…,N}, and Δ_ij is the cluster separation with respect to z_i, defined as Δ_ij = z_i^⊤z_j - m∈[N]∖ c_imaxz_i^⊤z_m, j∈ c_i, measuring the gain of similarity between z_i and an in-cluster point z_j over the similarity between z_i and the out-cluster point z_k that is closest to z_i. To achieve locally semantic representations, our objective is for the points within each cluster to be in close proximity to each other or, equivalently, close to their cluster representative. This proximity ensures consistency in encoded semantics. Additionally, we aim for these in-cluster points to be distinctly separated from the points outside the cluster. This separation encourages well-defined clusters to accurately reflect different semantics, i.e., a large Δ_ij and a small in-cluster variance. As Δ becomes sufficiently large (with a proper inverse temperature), the RHS of Eq. <ref> can be approximated as 1/∑_k∈ c_ie^-(z_i^⊤z_j - z_i^⊤z_k)τ for in-cluster points, i,j ∈ c_i. Meanwhile, the posterior for the out-cluster points, p(z_j∉ c_i|z_i), approaches 0 at the rate of p(z_j∉ c_i|z_i) ≤ . 1/ ((∑_k∈ c_ie^τmin_k∈ c_iΔ_ik)+(N-|c_i|)e^-τmax_k∉ c_i(z_i^⊤z_j - z_i^⊤z_k)).. The resulting return of a single mean-shift update becomes ẑ_i = Z(τz_i^⊤Z) = ∑_j ∈ [N] p(z_j|z_i)z_j ≈∑_j ∈ c_i1/∑_k∈ c_ie^-(z_i^⊤z_j - z_i^⊤z_k)τz_j + 0, which is essentially a weighted sum of the in-cluster points only. To promote the aforementioned property while maintaining low in-cluster variance, one approach is to drive the point closer to its cluster representative by optimizing min ∑^N_i=1z_i - ẑ_i_2^2, with ẑ_i = Z(τz_i^⊤Z). Notably, with a large inverse temperature τ≫ 1, a single mean-shift update becomes the single-step pattern retrieval mechanism in dense associative memory (DAM) <cit.>. § THE GMM FORMULATION OF THE CONSTRAINED K-MEANS OBJECTIVE The k-means objective with generalized non-empty cluster constraint <cit.> can be expressed as min_M1/N'∑_[1pt]66ẑ∈Ẑ∑_[1pt]66k=1^[-1pt]66Kδ_kk(ẑ)ẑ-μ_k(ẑ)^2_2 + D_KL(p̅π), where M is a set of K centroids {μ_1,⋯,μ_K}, Ẑ is a set of cluster representatives over the entire dataset, N' = |Ẑ|, k(ẑ)=min_kμ_k - ẑ_2, δ_ij is the Kronecker delta, with δ_ij=1 iff i=j, and 0 otherwise, [p]_[i] = 1/N'∑_ẑδ_ik(ẑ), and π is the prior, , a vector of the preset proportion for each cluster. 
As mentioned in the main paper, a common approach to tackle the optimization problem above is to relax the hard cluster assignment constraint δ_ij∈{0,1} to [0,1] with a classification head to ẑ. This relaxes Eq. <ref> to the more general Gaussian Mixture Model (GMM) formulation, allowing each point to have a partial membership of each cluster with a certain probability. The GMM ELBO can be expressed by the average term-by-term reconstruction and KL to prior as L(θ, M, Σ) = - 1/N'(∑_ẑ∈Ẑ∑_μ∈Mq(μ|ẑ) d(ẑ, μ;Σ_μ) + ∑_ẑ∈ẐD_KL(q(μ|ẑ)π)) + C, where d(z, μ; Σ_μ)=( z-μ)^⊤Σ^-1_μ( z - μ) is the Mahalanobis distance, C is a constant under the assumption of homoscedastic and isotropic Gaussian kernel. With a classification head, the posterior of ẑ belonging to cluster k is q(μ_k|ẑ)=[(τ'W_M^⊤ẑ+logπ-τ'(ẑ^⊤ẑ+diag(W^⊤_MW_M)))]_k, where τ' is the inverse temperature, and W_M is a matrix of K concatenated centroids with its kth column corresponding to μ_k. Particularly, we assume all vectors are ℓ_2-normalized. This further simplifies the posterior to q(μ|ẑ)=(τ'W_M^⊤ẑ + logπ), which conforms with the output of a classification head as a mixing proportion. The hard cluster assignment in Eq. <ref> can be recovered by sharpening the posterior with a small covariance, or equivalently, a large inverse temperature τ', , lim_τ'→∞ q_ϕ(μ_k|ẑ) = lim_τ'→∞[(τ'W_M^⊤ẑ + logπ)]_k = lim_τ'→∞[(τ'W_M^⊤ẑ)]_k = δ_kk(ẑ). With a sufficiently large inverse temperature, the KL-divergence term of Eq. <ref> becomes 1/N'∑_ẑ∈ẐD_KL(δ_kk(ẑ)π) = -∑_k=1^KN'_k/N'logπ_k, where N'_k=∑_ẑ∈Ẑ1_[k(ẑ) = k]. By defining [p]_k=N'_k/N' and adding back the non-empty constraint as the negative entropy of p, the resulting GMM ELBO recovers Eq. <ref> with d(ẑ, μ;Σ_μ) ∝ẑ-μ_k(ẑ)^2_2 . § THE CROSS-ENTROPY FORMULATION OF THE CONSTRAINED K-MEANS WITH POSITIVE SAMPLES With positive pairs (ẑ^+, ẑ) created via data augmentation, the constrained k-means objective in Eq. <ref> can be formulated as k-means clustering with an extra separation margin for ẑ^+. Here, we present the derivation of Eq. 11 in the main paper, considering a more general setting that involves multiple positive samples {ẑ^(a)}_a=1^A anchored on ẑ^(0) = ẑ through data augmentation. The objective in Eq. 10 from the main paper is essentially a special case of the following expression, where the number of positive pairs A equal to 1: min_M1/N'∑_ẑ∈Ẑ(∑_k=1^Kδ_kk(ẑ)ẑ-μ_k(ẑ)^2_2 +1/A∑^A_a=1(1-δ_k(ẑ^(a))k(ẑ))ẑ^(a)-μ_k(ẑ)^2_2) + D_KL(p̅π), which imposes that a point and its positive samples reside in the same cluster. The above optimization problem can be tackled by minimizing its upper bound with a relaxed hard assignment. Specifically, the term inside the parenthesis is bounded by ∑_k=1^Kδ_kk(ẑ^(0))ẑ^(0)-μ_k(ẑ^0)^2_2 + 1/A∑^A_a=1(1 - δ_k(ẑ^(a))k(ẑ^(0)))ẑ^(a) - μ_k(ẑ^(0))^2_2 ≤ẑ^(0)-μ_k(ẑ^(0))^2_2 + 1/Amax_a∈ Aẑ^(a) - μ_k(ẑ^(0))^2_2∑^A_a=1(1 - δ_k(ẑ^(a))k(ẑ^(0))). By rewriting 1-δ_k(ẑ^(a))k(ẑ^(0)) as ∑_k=1^K(δ_kk(z^(a)) - δ_kk(ẑ^(a))δ_kk(ẑ^(0))), the RHS of Eq. <ref> becomes ẑ^(0)-μ_k(ẑ^(0))^2_2 + 1/Amax_a∈ Aẑ^(a) - μ_k(ẑ^(0))^2_2∑^A_a=1(∑^K_k=1δ_kk(ẑ^(a)) - ∑^K_k=1δ_kk(ẑ^(a))δ_kk(ẑ^(0))) = ẑ^(0)-μ_k(ẑ^(0))^2_2 + 1/Amax_a∈ Aẑ^(a) - μ_k(ẑ^(0))^2_2∑^A_a=1∑^K_k=1(δ_kk(ẑ^(a))(1- δ_kk(ẑ^(0)))), which is bounded by ≤ẑ^(0)-μ_k(ẑ^(0))^2_2 + 1/Amax_a∈ Aẑ^(a) - μ_k(ẑ^(0))^2_2∑^A_a=1∑^K_k=1-δ_kk(ẑ^(a))log(δ_kk(ẑ^(0))+ϵ), with 0 < ϵ≪ 1. To our interest, we assume all vectors are ℓ_2-normalized. Thus, the bound in Eq. <ref> can be further simplified to 4 + 41/A∑^A_a=1∑^K_k=1-δ_kk(ẑ^(a))log(δ_kk(ẑ^(0))+ϵ). 
By relaxing the hard assignment δ_kk(ẑ)∈{0,1} to [0,1] using a classification head to ẑ as in the GMM formulation in Appendix <ref> with a sufficiently large inverse temperature τ'≫ 1, the optimization in Eq. <ref> can be approached by min_M1/AN'∑^A_a=1∑_ẑ∈ẐH(p(ẑ^(a)), p(ẑ)) + D_KL(p̅π), where p(ẑ) = q(μ|ẑ) = (τ'W_M^⊤ẑ), and H(x,y) = -x^⊤logy. When A=1, , only considering a single positive pair, the above objective degenerates to Eq. 11 in the main paper. § IMPLEMENTATION DETAILS §.§ Network configuration We follow the implementation used in DeiT <cit.> for all the ViT variants used in our experiments, and their configurations are summarized in Table <ref>. In the table, “#blocks” is the number of transformer blocks, “dim” is the channel dimension, “#heads” is the number of heads in multi-head attention, “#tokens” is the length of the token sequence when considering 224^2 resolution inputs, “#params” is the total number of parameters (without counting the projection head), and “im/s” is the inference speed on a NVIDIA V100 GPU with 128 samples per forward. §.§ Training details The implementation of ViT in our experiments mostly follows DeiT <cit.>, with the exception of excluding the token. During pretext training, we set the coefficients in the FLSL objective as follows: υ=0.3, η=1.0, and γ=5.0, and assume a uniform prior, , π_k = 1/K, ∀ k, with the number of centroids K=4096. We pretrain the models on ImageNet-1k dataset without labels using AdamW optimizer <cit.> and a batch size of 512. In line with DINO, the learning rate linearly ramps up during the first 10 epochs to the base value determined with the linear scaling rule <cit.>: lr=0.0005 with the reference batch_size=256. The warm-up is followed by the learning rate decay governed by cosine schedule <cit.> with the target learning rate 10^-6. The weight decay also governed by a cosine schedule from 0.05 to 0.5. The update rule for teacher network is θ_t ←λθ_t + (1- λ)θ_s, with λ following a cosine schedule from 0.996 to 1. The inverse temperature for student classification head, τ_s, is set to 1/0.1, while the inverse temperature for teacher classification head, τ_t, follows a linear warm-up from 1/0.04 to 1/0.07 during the first 30 epochs, while the inverse temperature for the non-parametric cross-attention is scheduled from 2.0 to 1.0. We employ the data augmentation method from DINO <cit.> (, color jittering of brightness, contrast, saturation and hue, Gaussian blur and solarization) with preceding random crops and resizing (to 224×224) and make them asymmetric. The exact settings of augmentation are provided in the next section. §.§ Data Augmentation The augmentation settings in FLSL are based on the augmentation pipeline of DINO <cit.> with one key modification: the random cropping operation is made asymmetric for the teacher and student networks. In our approach, we begin by sampling two random crops from the input image using a large ratio (, 0.8∼1.0) at the same location but with different pixel treatments. From each of the crops, we further sample a smaller crop using a ratio of (, 0.5∼1.0). The smaller crops are then assigned to the student network, while the larger crop are passed to the teacher network. This asymmetry ensures that the queries from the student exist within the teacher's view. Conversely, using symmetric random cropping for both networks adversely affects training performance and leads to collapse. Details of the data augmentation pipeline are listed below. 
The operations are performed sequentially to produce each view.
* For the teacher network, random cropping of an area uniformly sampled with a size ratio between 0.8 and 1.0, followed by resizing to 224^2 (implemented in PyTorch).
* For the student network, random cropping of the teacher crops with an area uniformly sampled with a size ratio between 0.5 and 1.0, followed by resizing to 224^2; this results in an effective scale ratio of (0.4, 1.0) (implemented in PyTorch).
* Color jittering of brightness, contrast, saturation and hue, with a probability of 0.8 (implemented in PyTorch).
* Grayscale with a probability of 0.2 (implemented in PyTorch).
* Gaussian blur with a probability of 0.5 and a uniform random radius from 0.1 to 2.0.
* Solarization with a probability of 0.2.
* Color normalization with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
§.§ PyTorch Pseudocode of FLSL
§ PROTOCOL FOR HYPERPARAMETER TUNING
As discussed in the main paper, we need a protocol to evaluate the quality of the learned dense features during FLSL training for hyperparameter tuning. However, standard evaluation protocols, such as a k-NN classifier or linear probing, are not suitable. We therefore propose a bounding box-aligned k-NN classification that leverages the bounding box information provided by ILSVRC <cit.>. As shown in Figure <ref>(a), we partition the bounding box into s×s grids and find the coordinates of the center of each grid (the green dots). We then locate the s^2 features in the feature map Ẑ that are nearest to these centers, as shown in Figure <ref>(b), and store them in the memory bank together with their label information. For images with multiple bounding box annotations, we pick the largest one. An image is considered correctly classified as long as at least one of its s^2 features is predicted as the true category. We set s=3 for our training and inflate the number of nearest neighbors k by a scale factor c_s, since the memory bank becomes 9 times larger. We set k=20 and c_s = 7 for the best performance. We present the evaluation results of the bounding box-aligned k-NN of FLSL alongside the standard instance-level k-NN results of other methods in Table <ref>. These results provide insights into the global and local semantic coherence of the learned representations. As the bounding box-aligned k-NN operates on representations with less noise, we mark our results with the (*) symbol to indicate a biased comparison. Note that FLSL is designed for dense prediction tasks and not for instance-level image classification. This bbox-aligned k-NN classification is employed only for hyperparameter tuning and the ablation study of the FLSL pipeline.
§ TRANSFER LEARNING SETTINGS
MS-COCO setup We evaluate the performance of the pretrained models on the MS-COCO object detection and instance segmentation tasks with different two-stage frameworks. For ViT-S/16 and ViT-S/8 with Mask R-CNN <cit.> and FPN <cit.>, we employ multi-scale training following <cit.> and resize the image so that the short side falls within the range of 480 to 800 pixels while the long side does not exceed 1,333 pixels. For a fair comparison, we primarily adhere to the training setting used in <cit.>. Specifically, we employ the AdamW optimizer with a batch size of 16. The learning rate is linearly warmed up over the first 1,000 iterations to 5e-5 and subsequently decayed at steps 8 and 11. Models are trained under the 1x schedule. For ViT-B/16 with Mask R-CNN and a simple FPN, we follow the training methodology outlined in Li et al. (2022) <cit.>.
Specifically, the input images are resized to 1,024×1,024 and augmented with large-scale jitter with a scale range from 0.1 to 2.0. The model is fine-tuned for 100 epochs using the AdamW optimizer with a weight decay of 0.1. To adjust the learning rate, we employ a step-wise decay strategy. During training, the base learning rate is set to 0.0001, and the rate is gradually increased from 0.0 to the base rate over the first 250 iterations as a warm-up phase. Additionally, we apply a layer-wise learning rate decay of 0.7. UAVDT setup The UAVDT dataset contains 23,258 images for training and 15,069 images for testing. The resolution of the images is about 1,080×540 pixels. The dataset is acquired with a UAV platform at a number of locations in urban areas. The categories of the annotated objects are car, bus, and truck. The training configuration is adapted from the original setting in <cit.>. The input size is rescaled to 1,072×528. The model is trained under a 1x schedule. We adopt the SGD optimizer with momentum 0.9, weight decay 0.0001, and a batch size of 16. The base learning rate is set to 0.0005 with a linear warm-up for the first 300 iterations. The learning rate decreases at the 8th epoch. § ABLATION STUDY §.§ Impact of batch size We study the impact of the batch size on the features extracted by FLSL. Table <ref> shows that FLSL can achieve high performance with small batch sizes. Unlike the instance-level SSL methods that tend to focus on foreground contents (e.g., objects), FLSL considers all the semantics in an image, i.e., every feature z finds its own cluster representative ẑ through the self-attention (mean-shift) update. This enriches feature diversity, improves the variance within a mini-batch, and benefits training with small batch sizes. §.§ Impact of random pooling In FLSL, contrasting among dense features can be computationally expensive, e.g., 14^2 = 196 representations need to be considered in the objective. Therefore, we apply a random pooling to the queries from the last ViT layer and study the impact of different window sizes of the random pooling (see the illustrative sketch further below). §.§ Impact of the number of centroids K We formulate FLSL as an explicit clustering problem. Therefore, the output dimension of the last fully-connected layer is equal to the number of centroids K. As shown in Table <ref>, FLSL enjoys a smaller output dimension compared to its instance-level counterpart, DINO (K=65,536) <cit.>. This is mainly because the variance of features within an image is higher than that within a feature cluster. Take ImageNet for instance: the content of an image may range from a single object and stuff to a melange of them from different categories. This requires a large number of centroids to cover the image distribution. A semantic cluster, in contrast, tends to contain highly correlated features, e.g., features of similar texture or of multiple adjacent objects from the same category, and hence requires fewer centroids to cover its distribution. From the experiment, we find that a larger number of centroids improves the performance, but becomes detrimental and costly when too large. We pick K=4,096 for all our experiments as it strikes a good balance between performance and cost-effectiveness. 
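As an illustration of the random pooling referred to above, the following is a minimal sketch of one possible implementation, in which one query is sampled uniformly at random from each non-overlapping window of the dense query grid. The function name, the grid size of 14, and this particular reading of "random pooling" are our own assumptions for illustration, not the exact operation used in the experiments.

```python
import torch

def random_pool_queries(q, grid=14, window=2):
    # q: (B, grid*grid, D) dense queries from the last ViT layer
    B, N, D = q.shape
    gw = grid // window
    # split the grid into non-overlapping window x window cells
    q = q.view(B, gw, window, gw, window, D).permute(0, 1, 3, 2, 4, 5)
    q = q.reshape(B, gw * gw, window * window, D)
    # pick one query uniformly at random inside each cell
    idx = torch.randint(window * window, (B, gw * gw, 1, 1), device=q.device)
    pooled = torch.gather(q, 2, idx.expand(-1, -1, -1, D)).squeeze(2)
    return pooled  # (B, gw*gw, D)
```

With window=2, for instance, the 14×14=196 dense queries would be reduced to 49 representations entering the objective.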
§.§ Ablation on the FLSL objective function The FLSL objective contains three components: (1) similarity between ℓ_2-normalized z (features) and ẑ (modes), (2) cross-entropy of the probabilities of an augmented pair H(p(ẑ^+), p(ẑ)), and (3) the non-empty constraint D_KL(p̅‖π): min 1/N'∑_Z∈𝒵(∑_z∈Zυ‖z-ẑ‖^2_F+η∑_z∈ZH(p(ẑ^+), p(ẑ))) + γ D_KL(p̅‖π), with ẑ = SA(z, Z, Z), ẑ^+ = CA(z, Z^+, Z^+). It is computationally expensive to optimally determine the values of more than two coefficients by performing grid search, especially when the ratios among them are large. We tackle this problem by first fixing η = 1 and setting γ = 1 along with the Sinkhorn normalization <cit.> to perform a grid search on the value of υ with the empirical base condition υ≤1 and γ≥1 <cit.>. With the fixed υ, we then perform another grid search on γ without the Sinkhorn normalization. We implement Sinkhorn normalization <cit.> as the softmax operation along the batch dimension. Table <ref> summarizes the k-NN evaluation scores obtained with different coefficient settings. We also visualize the impact of different ratios υ/η between the first- and second-level clustering terms of the FLSL objective in Figure <ref> via the aggregated attention score (AAS) map. As the ratio increases, the AAS map shifts from being clear and bright to becoming cluttered and dark. This change occurs because the self-attention for each query becomes more focused, attending to a smaller neighborhood. A smaller ratio leads to larger clusters, which aggregate more attention scores in the region, resulting in a brighter map, particularly in the background. Conversely, a large ratio leads to small, cluttered clusters with fewer attention scores aggregated, resulting in a darker map. A smaller ratio may smooth out small details, while a larger ratio causes the model to focus excessively on local features. From the results in Table <ref>, a ratio of 0.3 strikes a good balance.
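To make the interplay of the three components concrete, the following is a minimal PyTorch sketch of this objective for a single pair of views. The helper names, the plain softmax used for the assignment probabilities, the uniform prior, and the detached teacher-side assignment are simplifications introduced here for illustration; the actual implementation uses separate student/teacher heads and the temperature schedules described in the implementation details.

```python
import torch
import torch.nn.functional as F

def flsl_loss(z, Z, Z_pos, W_m, nu=0.3, eta=1.0, gamma=5.0, tau=1.0, tau_prime=10.0):
    """Single-view sketch of the FLSL objective.

    z     : (N, D) queries of the student view
    Z     : (N, D) full token set of the same view (keys/values)
    Z_pos : (M, D) token set of the augmented (teacher) view
    W_m   : (D, K) classification head over K centroids
    """
    # first level: non-parametric self-attention acts as one mean-shift step
    z_hat = F.softmax(z @ Z.t() / tau, dim=-1) @ Z                # (N, D)
    # positive modes come from cross-attending into the augmented view
    z_hat_pos = F.softmax(z @ Z_pos.t() / tau, dim=-1) @ Z_pos    # (N, D)

    # (1) pull each feature towards its mode
    cluster_term = (z - z_hat).pow(2).sum(-1).mean()

    # (2) second level: cross-entropy between assignments of the positive pair
    p = F.softmax(tau_prime * (z_hat @ W_m), dim=-1)              # (N, K)
    p_pos = F.softmax(tau_prime * (z_hat_pos @ W_m), dim=-1)      # (N, K)
    ce_term = -(p_pos.detach() * torch.log(p + 1e-8)).sum(-1).mean()

    # (3) non-empty constraint: KL between the mean assignment and the prior pi
    p_bar = p.mean(dim=0)
    prior = torch.full_like(p_bar, 1.0 / p_bar.numel())           # uniform pi
    kl_term = (p_bar * (torch.log(p_bar + 1e-8) - torch.log(prior))).sum()

    return nu * cluster_term + eta * ce_term + gamma * kl_term
```

The default coefficients mirror the values υ=0.3, η=1.0, γ=5.0 and the student inverse temperature 1/0.1 quoted in the training details.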
http://arxiv.org/abs/2306.09313v1
20230615174741
Lexical Speaker Error Correction: Leveraging Language Models for Speaker Diarization Error Correction
[ "Rohit Paturi", "Sundararajan Srinivasan", "Xiang Li" ]
eess.AS
[ "eess.AS", "cs.AI", "cs.CL", "cs.LG" ]
*These authors contributed equally to this work. Speaker diarization (SD) is typically used with an automatic speech recognition (ASR) system to ascribe speaker labels to recognized words. The conventional approach reconciles outputs from independently optimized ASR and SD systems, where the SD system typically uses only acoustic information to identify the speakers in the audio stream. This approach can lead to speaker errors especially around speaker turns and regions of speaker overlap. In this paper, we propose a novel second-pass speaker error correction system using lexical information, leveraging the power of modern language models (LMs). Our experiments across multiple telephony datasets show that our approach is both effective and robust. Training and tuning only on the Fisher dataset, this error correction approach leads to relative word-level diarization error rate (WDER) reductions of 15-30% on three telephony datasets: RT03-CTS, Callhome American English and held-out portions of Fisher. Index Terms: Speaker Diarization, Large Language Models, Automatic Speech Recognition, Error Correction § INTRODUCTION Speech transcription systems have advanced significantly in the past decade but even with these remarkable advances, machines have difficulties understanding natural conversations with multiple speakers such as in broadcast interviews, meetings, telephone calls, videos or medical recordings. One of the first steps in understanding natural conversations is to recognize the words spoken and their corresponding speakers. Speaker Diarization (SD) is the process of determining "who spoke when" in a multi-speaker audio signal and is a key component in any speech transcription system. SD is used in conjunction with Automatic Speech Recognition (ASR) to assign a speaker label to each transcribed speaker turn and has widespread applications in generating meeting/interview transcripts, medical notes, automated subtitling and dubbing, downstream speaker analytics, among others (we refer to this combined system as SD-ASR in this paper). This is typically performed in multiple steps that include (1) transcribing the words using an ASR system, (2) predicting “who spoke when” using a speaker diarization (SD) system, and, finally, (3) reconciling the output of those two systems. Recent advances in SD systems are outlined in <cit.>, and such independently optimized SD systems typically consist of the following main sub-tasks: (a) segment the input audio into speech segments using a Voice activity detector (VAD), (b) generate speaker segments from the speech segments by either using a uniform window size <cit.> or by detecting speaker turns <cit.>, (c) extract speaker embeddings <cit.> for each of the speaker segments and (d) cluster the resulting speaker embeddings using clustering algorithms like Spectral Clustering <cit.>, Agglomerative Hierarchical Clustering <cit.> among others. These sub-tasks of most diarization systems in the literature rely only on acoustic information and can thus lead to speaker errors, mainly around speaker turns. This can happen in uniform speaker segmentation as long segments very likely contain speaker turn boundaries, while short segments carry insufficient speaker information. It has also been shown that detecting speaker turns using only acoustic information is error-prone <cit.>. 
In addition to the SD errors, speakers can be attributed to the wrong words in the SD-ASR reconciliation phase due to errors in ASR word timings. Reconciliation errors can also occur in regions of speech overlap as SD can identify one of the speakers while ASR can identify words corresponding to a different speaker. Lexical information contains complementary cues which can be very useful in accurately predicting speaker turns <cit.>. For instance, analyzing only the written transcript of a conversation such as "how are you i am good" enables us to infer that there is likely a speaker change between the utterances "how are you" and "i am good". There have been a handful of works <cit.> which leverage the ASR transcripts to infuse lexical information in the SD module. In [7], lexical cues are used to estimate the speaker turns for diarization. <cit.> made use of turn probabilities from lexical cues in the clustering stage by enhancing the adjacency matrix. Though these approaches showed good SD improvements, these systems can still produce errors around speaker turns due to ASR and diarization errors in overlapped speech, and they are also sensitive to ASR word timings as they rely on these timings both in the diarization sub-tasks and in the reconciliation phase. <cit.> modeled SD and ASR jointly but is confined to two speakers with specific distinct roles. In this paper, we propose a Speaker Error Correction (SEC) module which can correct speaker errors at the word level without modifying the underlying ASR or the acoustic SD system. This SEC module makes use of any of the readily available pre-trained LMs <cit.> to infuse lexical knowledge for correcting speaker errors, while also leveraging speaker scores from the SD system to prevent over-corrections. The reliance on LMs also significantly reduces the amount of speaker-labelled text data needed to train the system. Our approach has modular components which do not need paired audio-text data for training and require only a small amount of paired data for fine-tuning. This approach is also easier to integrate with existing systems than other lexical-based diarization approaches, since the first-pass acoustic SD system can be run independently of the ASR system. Using experiments across three telephony datasets, we demonstrate that the proposed system is both effective and capable of generalization. § SPEAKER ERROR CORRECTOR The overall pipeline of the proposed two-pass Speaker Error Corrector (SEC) framework is shown in Fig. 1a. The conventional speaker transcription system consists of an ASR module, an SD module and a reconciliation stage. The SEC follows the reconciliation stage and takes in two streams of inputs: acoustic features from the SD module and lexical features from the ASR module. The ASR and acoustic SD models can continue to run in parallel, making it easier to integrate with existing systems. The core component of the SEC is the Lexical Correction module which takes in the transcribed words from ASR along with the speaker labels from the SD module. These are explained in more detail in the following sub-sections. §.§ Lexical Diarization Corrector While lexical features carry information complementary to the acoustic features and can be leveraged to correct some of the errors from a naïve reconciliation of ASR and SD, lexical features alone cannot accurately predict the speaker labels, especially in realistic conversations. 
So, we propose a simple yet efficient way to correct the speakers based on both the decisions from the 1st-pass diarizer and the ASR transcriptions. Our proposed Lexical Speaker Error Corrector consists of two main components: a backbone language model (LM) and a Transformer encoder front-end to predict the speaker labels. After reconciling ASR and diarization outputs, we have speaker labels {S_i}_i=1^N, S_i∈ℝ^1× K for every word {W_i}_i=1^N, where N is the number of words in the sequence and K is the number of speakers the SEC is trained to handle. The words W_i are tokenized and passed to the backbone LM to obtain contextual word embeddings {E_j}_j=1^M, E_j∈ℝ^1× W, where M is the number of tokens in the word sequence and W is the word embedding dimension. The word-level speaker labels S_i are mapped to the token level by mapping the speaker ID corresponding to a word to its first token and assigning a special “don't care” token to any subsequent tokens when the word is split into multiple tokens. These token-level embeddings E_j are concatenated with the speaker IDs S_j to form the fused features for the front-end Transformer encoder as shown in Figure 1b. The posteriors from the front-end encoder {L_i,j}_j=1^K, L_i,j∈ℝ, are used to optimize the classification loss against the ground-truth speaker labels. §.§ Training Methodology The SEC model can be trained using only speaker-turn transcripts and does not require paired audio data; we show that training the lexical corrector on just the transcripts already improves over the baseline. Since the relatively small number of speaker errors produced by the 1st-pass diarizer limits the training of the error corrector, we train the corrector by simulating speaker errors based on the ground truth as well as by simulating ASR substitution errors. We define the probability of ASR errors as P_ASR and the probability of speaker errors as P_Spk. Setting P_ASR=1 implies that all the words in the training transcripts are substituted with random words and P_ASR=0 implies the original ground-truth transcripts. Similarly, P_Spk=1 implies all the speaker labels are randomly substituted whereas P_Spk=0 implies the ground-truth speaker labels. We simulate ASR and speaker errors using a curriculum learning paradigm <cit.> to make sure that we neither under- nor over-correct the speakers, and to balance the information flow between the SD labels and the ASR lexical information. We start the curriculum for P_Spk at a low value and increase P_Spk as the training progresses. Conversely, P_ASR starts at a high value at the first epoch and decreases as the training progresses. The intuition for this curriculum, with P_ASR high and P_Spk low in the initial epochs, is to first train the model without any meaningful lexical information so that it learns to at least copy the 1st-pass speaker labels. More meaningful lexical information with a smaller P_ASR is used in the later epochs along with a higher P_Spk to train the model on more complex speaker errors as the training progresses. In addition to the error-simulated text data, we also use paired audio data to train or fine-tune the model on real data. For this, we generate speaker labels using the baseline 1st-pass SD system and use the ground-truth speaker labels as the targets. In this work, we train the SEC on two-speaker cases, i.e., K=2. §.§ Inference Setup During inference, we perform error correction on sliding windows with a fixed number of ASR-transcribed words as shown in Fig. 1c. 
Though the lexical corrector is trained to only correct two speakers locally, we can still handle use-cases where more than two speakers are detected globally in the audio. We achieve this by only correcting sliding windows comprising two speakers and by bypassing the remaining windows as shown in Figure 1c. The size of the sliding window is a parameter we tune on a validation set. § EXPERIMENTS §.§ Data and Metrics In this work, we use the full Fisher dataset <cit.> to train the speaker corrector system. We split the Fisher data into train, validation and test splits as defined in <cit.>. We also use the Fisher train set to fine-tune the backbone LM as well as to train and fine-tune the corrector model. We only use the Fisher validation split for tuning our model. For evaluation, in addition to the Fisher test split, we use the standard dev and test splits of CALLHOME American English (CHAE) <cit.> and RT03-CTS <cit.>, which consist mostly of two-speaker calls. We also evaluate on the two-speaker-only subset of CHAE, the CH-109 dataset <cit.>, both by fixing the number of clusters to 2 and by automatically determining the number of speakers in the 1st-pass SD system. In order to evaluate the full ASR-SD system, we use the Word Diarization Error Rate (WDER) proposed in <cit.> as it aptly captures both ASR and SD errors at the word level. We also account for words transcribed in regions of speech overlap in the WDER metric. This is achieved by using asclite <cit.>, as it can align multiple speaker hypotheses against multiple reference speaker transcriptions and can also efficiently handle words in regions of speaker overlap. §.§ Baseline System Our baseline SD system follows the pipeline in <cit.> and consists of a speaker embedding model followed by Spectral Clustering, and the number of speakers is identified using the maximum eigengap of the Spectral Clustering. The speaker embedding model is based on a ResNet-34 architecture trained with a combination of classification loss, metric loss <cit.> and channel loss <cit.> on about 12k speakers and 4k hours of CTS data. We use a uniform speaker segmentation <cit.> with a duration of 500ms to extract the speaker embeddings, followed by the clustering phase of the SD system. Our baseline SD system is comparable to state-of-the-art diarization systems across several datasets and achieves a DER of 3.72 and an SER of 1.1 on the CHAE test set, which is a stronger baseline than the one reported in <cit.>. We use a hybrid ASR system <cit.> with a Conformer acoustic model <cit.> and an n-gram language model trained on several tens of thousands of audio and text samples. For the reconciliation phase, the SD system provides speaker turns with time boundaries and these labels are mapped to recognized words using the associated word boundaries from the ASR system. When the speaker turn boundary falls in the middle of a word, we assign the word to the speaker with the largest overlap with the word, similar to the baseline system in <cit.>. We use a neural-network-based speech activity detector (SAD) similar to <cit.> as a front end for both the SD and ASR systems above. §.§ SEC System For the SEC model, we use a pre-trained Roberta-base model <cit.> as the backbone LM and a Transformer encoder with a hidden size of 128 for the front-end model. The curriculum for P_ASR starts at 1 at the 1st epoch and decreases to 0.08 in the 10th and subsequent epochs in uniform steps. The curriculum for P_Spk starts at 0 in the 1st epoch and increases to 0.14 in the 10th and subsequent epochs in uniform steps. 
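A minimal sketch of how these two error-simulation curricula and the corresponding transcript corruption could be implemented is given below; the function names and the random-substitution routine are illustrative assumptions, while the start/end values and the 10-epoch ramp follow the description above.

```python
import random

def error_probs(epoch, ramp_epochs=10, p_asr=(1.0, 0.08), p_spk=(0.0, 0.14)):
    """Linear ramps for the error probabilities; epoch is counted from 1."""
    t = (min(epoch, ramp_epochs) - 1) / (ramp_epochs - 1)
    return (p_asr[0] + t * (p_asr[1] - p_asr[0]),   # 1.0 -> 0.08 over 10 epochs
            p_spk[0] + t * (p_spk[1] - p_spk[0]))   # 0.0 -> 0.14 over 10 epochs

def corrupt(words, speakers, vocab, p_asr_e, p_spk_e, num_speakers=2):
    """Simulate ASR substitution errors and speaker-label errors on a transcript."""
    noisy_words = [random.choice(vocab) if random.random() < p_asr_e else w
                   for w in words]
    noisy_speakers = [random.randrange(num_speakers) if random.random() < p_spk_e else s
                      for s in speakers]
    return noisy_words, noisy_speakers
```

In each epoch the two probabilities returned by error_probs would drive corrupt over the training transcripts before the corrupted word/speaker sequences are fed to the corrector.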
The model is trained with the Adam optimizer with a batch size of 32 and an average sequence length of 30 words per batch. We use a learning rate of 1e-4 and train the model for 30 epochs on a machine with 8 GPUs. We use the SEC as a 2nd-pass post-processing step for the baseline SD-ASR system described in Section 3.2. In order to determine the number of simulated errors needed to effectively train the lexical SEC to correct the speaker errors, we follow the error curricula mentioned in Section 2.4 and pick the checkpoint with the (P_ASR, P_Spk) values that achieves the lowest WDER on the Fisher validation set. The values that achieve the best validation WDER are 0.1 for both P_ASR and P_Spk. In addition to the training parameters, we also tune an inference parameter, the sliding window size mentioned in Section 2.1, on the Fisher validation set. §.§ Results We tune the sliding window size on our Fisher validation subset and plot the WDER against the corresponding values in Figure 2. From the plot, we see that the WDER decreases as the window size increases up to 30, due to the increased lexical context for the backbone LM as well as the corrector model. The WDER increases again beyond a window size of 30, likely because the corrector model is trained with an average sequence length of 30 and because more sliding windows containing more than 2 speakers are bypassed with a larger window size. We have also tried training with larger average sequence lengths but that did not show any additional gains compared to the sequence length of 30 words. So, we use a sliding window size of 30 words for the remainder of the experiments. We also show some qualitative examples of the correction performance on the Fisher test set using the SEC model with the best sliding window size in Figure 3. Figure 3a shows that the correction model is able to effectively correct errors due to overlapping speech when the SD hypothesizes one of the overlapping speakers and ASR hypothesizes the words of the other speaker. The model is also effective in correcting the lexically implausible errors around speaker turns, which is one of the major error-prone scenarios <cit.> for SD systems, as seen in Figure 3b. The quantitative WDER improvements of the correction models on the held-out validation and test sets are outlined in Table 1. We call the model trained on ground-truth transcripts with simulated speaker and ASR errors the "SimSEC" model. SimSEC_v2 is the "SimSEC" model with a Fisher-tuned Roberta backbone, trained with a custom curriculum as mentioned in Section 3.3. SimSEC_v1 is similar to SimSEC_v2 but uses the Roberta-base backbone without any further fine-tuning. We evaluate SimSEC_v1 to quantify the gains attributed to fine-tuning of the backbone LM on conversational datasets. The "SimSEC init + RealSEC Tuning" model is the paired-data-tuned model initialized with SimSEC_v2 and tuned using the 1st-pass acoustic SD labels instead of the simulated speaker errors. The RealSEC model is similar to "SimSEC_v2 init + RealSEC Tuning" but is trained only by flat-starting the model on real paired data. From Table 1, we can see that almost all of the corrector models produce considerable WDER gains over the baseline SD-ASR reconciled system across all the datasets, except for SimSEC_v1 on CH-109 with an unknown number of speakers. It can be observed that tuning the backbone Roberta LM in SimSEC_v2 produces moderate WDER gains over the pretrained Roberta-base LM, especially on the CHAE validation set and on CH-109 with an unknown number of speakers. 
The models trained on paired data, either by tuning the SimSEC_v2 model or by flat-start training (RealSEC), produce further gains over the models trained with simulated errors (SimSEC_v1 and SimSEC_v2). The performance improvement of the models on CH-109 without fixing the number of speakers to 2 in the clustering phase is comparatively limited, since more than 2 speakers are hypothesized on a few of the audio files, leading to smaller average WDER gains. With the best model, "SimSEC_v2 init + RealSEC Tuning", we observe relative WDER gains in the range 15-30% across all the datasets. To further analyze the dataset sizes needed to train or tune the models, we perform an ablation using only a fraction of the Fisher train data, as shown in Table 2. We evaluate the models SimSEC_v2 and "SimSEC init + RealSEC Tuning" with different fractions of ground-truth text and paired data, respectively. We see that the WDER of the SimSEC_v2 model and of the "SimSEC init + RealSEC Tuning" model improves only moderately and saturates beyond a point as the amount of text data and paired data, respectively, increases. This shows that the corrector model can be trained purely on a small amount of ground-truth transcripts by simulating speaker and ASR errors, and can also be fine-tuned on a small amount of paired data to achieve significant WDER gains. § CONCLUSION AND FUTURE WORK In this work, we propose a novel Speaker Error Corrector (SEC) to correct word-level speaker label errors from a conventional audio-only speaker diarization system. We achieve this using a language model over the ASR transcriptions to correct the speaker labels. The proposed lexical SEC can be trained effectively using only text data by simulating speaker errors, without the need for any paired audio-text data. A small amount of paired data can further improve model performance, leading to an overall relative reduction in WDER of over 15% across three telephony datasets. The proposed SEC framework is also lightweight and is easy to integrate as a post-processing module over existing systems. One limitation of our current work is that it has been applied only to conversations in English. Future work includes training a multi-lingual SEC to make the system language-agnostic. To increase the robustness of this approach, in addition to the first-pass SD labels, we can leverage additional complementary acoustic cues to further improve the performance. Also, the current SEC model can only handle 2 speakers in a sliding window, which we plan to generalize to handle a larger number of speakers. We will also explore leveraging large generative models to synthesize conversational transcripts across multiple domains using curated prompts <cit.>.
http://arxiv.org/abs/2306.16422v1
20230619174240
Neural networks can detect model-free static arbitrage strategies
[ "Ariel Neufeld", "Julian Sester" ]
q-fin.CP
[ "q-fin.CP", "cs.LG", "math.OC", "q-fin.MF", "stat.ML" ]
July 31, 2023 ^1NTU Singapore, Division of Mathematical Sciences, 21 Nanyang Link, Singapore 637371. ^2National University of Singapore, Department of Mathematics, 21 Lower Kent Ridge Road, 119077.   In this paper we demonstrate both theoretically as well as numerically that neural networks can detect model-free static arbitrage opportunities whenever the market admits some. Due to the use of neural networks, our method can be applied to financial markets with a high number of traded securities and ensures almost immediate execution of the corresponding trading strategies. To demonstrate its tractability, effectiveness, and robustness we provide examples using real financial data. From a technical point of view, we prove that a single neural network can approximately solve a class of convex semi-infinite programs, which is the key result in order to derive our theoretical results that neural networks can detect model-free static arbitrage strategies whenever the financial market admits such opportunities. Keywords: Static Arbitrage, Model-Free Finance, Deep Learning, Convex Optimization § INTRODUCTION Detecting arbitrage opportunities in financial markets and efficiently implementing them numerically is an intricate and demanding task, both in theory and practice. In recent academic papers, researchers have extensively tackled this problem for various types of assets, highlighting its significance and complexity. The authors from <cit.> and <cit.> focus their studies on the foreign exchange market, establish conditions that eliminate triangular arbitrage opportunities and propose computational approaches to detect arbitrage opportunities. <cit.> propose a binary integer programming model for the detection of arbitrage in currency exchange markets, while <cit.> focus on arbitrage in multi-asset markets under the assumption that the risk-neutral marginal distributions are known. Also assuming knowledge of risk-neutral marginals in multi-asset markets, <cit.> provides a copula-based approach to characterize the absence of arbitrage. <cit.> study arbitrage opportunities in markets where vanilla options are traded and propose an efficient procedure to change the option prices minimally (w.r.t. the ℓ^1 distance) such that the market becomes arbitrage-free. <cit.> develop cutting-plane based algorithms to calculate model-free upper and lower price bounds whose sub-optimality can be chosen to be arbitrarily small, and use them to detect model-free arbitrage strategies. Furthermore, by observing call option prices, <cit.> train neural networks to detect financial asset bubbles. In this paper we study the detection of model-free static arbitrage in potentially high-dimensional financial markets, i.e., in markets where a large number of securities are traded. A trading strategy is called static if the strategy consists of buying or selling financial derivatives as well as the corresponding underlying securities in the market only at the initial time (with corresponding bid and ask prices) and then holding the positions till maturity without any readjustment. Therefore, one says that a market admits static arbitrage if there exists a static trading strategy which provides a guaranteed risk-free profit at maturity. We aim to detect static arbitrage opportunities in a model-free way, i.e. 
purely based on observable market data without imposing any (probabilistic) model assumptions on the underlying financial market. We also refer to <cit.> for more details on model-free arbitrage and its characterization. The goal of this paper is to demonstrate both theoretically as well as numerically using real-market data that neural networks can detect model-free static arbitrage whenever the market admits some. The motivation for using neural networks is their known ability to efficiently deal with high-dimensional problems in various fields. There are several algorithms that can detect (static) arbitrage strategies in a financial market under fixed market conditions, for example in a market with fixed options with corresponding strikes as well as fixed corresponding bid and ask prices. However, directly applying these algorithms in real financial market scenarios to exploit arbitrage is challenging due to the well-known issue that market conditions change extremely fast and high-frequency trading often causes these opportunities to vanish rapidly. This associated risk is commonly known as execution risk, as discussed, e.g., in <cit.>. The speed of investment execution therefore becomes crucial in capitalizing on arbitrage opportunities. By training neural networks according to our algorithm purely based on observed market data, we obtain detectors that allow, given any market conditions, not only to detect the existence of static arbitrage but also to determine a proper applicable arbitrage strategy. Our algorithm therefore provides financial agents with instructions on how to trade and exploit the arbitrage strategy while the opportunity persists. In contrast to other numerical methods, which need to be executed entirely each time the market is scanned for arbitrage or the market conditions change, our proposed method only requires one neural network to be trained offline. After training, the neural network is able to detect arbitrage and can be executed extremely fast, allowing one to invest in the resultant strategies in every new market situation that one faces. We refer to Section <ref> for a detailed description of our algorithm as well as our numerical results evaluated on real market data. We justify the use of neural networks by proving that neural networks can detect model-free static arbitrage strategies whenever the market admits some. We refer to Theorem <ref> and Theorem <ref> for our main theoretical results regarding arbitrage detection. The main idea is to relate arbitrage with the superhedging of the zero-payoff function. We prove in Proposition <ref> that there exists a single neural network that provides a corresponding ε-optimal superhedging strategy for any given market conditions. In fact, we show for a certain class of convex semi-infinite programs (CSIP), which includes the superhedging problem of the zero-payoff function as a special case, that a single neural network can provide for each of the (CSIP) within this class a corresponding feasible and ε-optimal solution, see Theorem <ref>. The remainder of this paper is organized as follows. In Section <ref>, we introduce the setting of the financial market as well as the corresponding (static) trading strategies, and provide our main theoretical results ensuring that model-free static arbitrage can be detected by neural networks if existent. 
Section <ref> focuses on the presentation and numerical implementation of our neural networks based algorithm to detect static arbitrage, featuring experiments conducted on real financial data to showcase the feasibility and robustness of our method. In Section <ref>, we introduce a class of convex semi-infinite programs and provide our main technical result that a single neural network can approximately solve this class of (CSIPs). Finally, all proofs are presented in Section <ref>. § DETECTION OF STATIC ARBITRAGE STRATEGIES In this section, we study a financial market in which a financial agent can trade statically in various types of options and which may admit the opportunity of static arbitrage profits. In such a setting, the natural difficulty for a trader is first to decide whether such arbitrage exists and second to identify potential strategies that exploit arbitrage profits. Our goal is to show that for each financial market in which an agent can trade statically in options, the corresponding market admits static arbitrage if and only if there exists a neural network that detects the existence of model-free static arbitrage by outputting a corresponding arbitrage strategy. §.§ Setting In this paper, we consider a market in which a financial agent can trade statically in options. To introduce the market under consideration, let S=(S_1,…,S_d) denote the underlying d∈ stocks at some future time t=1. We only consider values S ∈𝒮⊆ [0,∞)^d for some predefined set 𝒮, which can be interpreted as prediction set[We refer to, e.g., <cit.> for further literature on prediction sets in financial markets.] where the financial agent may allow to exclude values which she considers to be impossible to model future stock prices S=(S_1,…,S_d) at time 1. Let N_Ψ∈ denote the number of different types[We say two options are of the same type if the payoffs only differ with respect to the specification of a strike. Also note that trading in the underlying securities itself can be considered as an option, e.g., a call option with strike 0.] of traded options Ψ_i: 𝒮× [0,K] → [0, ∞), i=1,…,N_Ψ, written on S. For each option type i∈{1,…,N_Ψ} let n_i∈ denote the corresponding amount of different strikes under consideration (K_i,j)_j=1,…,n_i⊆ [0,K], where the strikes are contained in [0,K] for some K<∞, and denote by N:= ∑_i=1^N_Ψn_i the total number of traded options. Moreover, we denote by π=(π_i,j)_i=1,…,N_Ψ, j=1,…,n_i =(π_i,j^+,π_i,j^-)_i=1,…,N_Ψ, j=1,…,n_i∈ [0,π]^2N the bid and ask prices of the traded options respectively, where we assume that all the bid and ask prices are bounded by some π>0. The financial agent then can trade in the market by buying and selling the options described above. More precisely, we first fix the minimal initial cash position of a trading strategy to be given by a∈, and we assume that the maximal amount of shares of options one can buy or sell is capped by some constant 0<H< ∞. This allows to consider the payoff of a static trading strategy by the function ℐ_S: [0,K]^N × [a,∞) × [0, H]^2N → (K,a,h) ↦ a+∑_i=1^N_Ψ∑_j=1^n_i(h_i,j^+-h_i,j^-) ·Ψ_i(S,K_i,j), where we use the notation h= (h_i,j^+,h_i,j^-)_i =1,…,N_Ψ, j = 1,…,n_i∈ [0, H]^2N to denote long and short positions in the traded options, respectively, as well as K=(K_i,j)_i=1,…,N_Ψ, j=1,…,n_i to denote all strikes. 
The corresponding pricing functional is then defined by f:[0, π]^2N× [a,∞)× [0,H]^2N → (π,a,h ) ↦ a+ ∑_i=1^N_Ψ∑_j=1^n_i(h_i,j^+π_i,j^+-h_i,j^-π_i,j^- ) determining the price of a corresponding trading strategy with respect to the corresponding bid and ask prices of the options. Moreover, we define the set-valued map which maps a set of strikes K=(K_i,j)_i=1,…,N_Ψ, j=1,…,n_i to the corresponding strategies leading to a greater payoff than 0 for each possible value S∈𝒮 by Γ: [0,K]^N ∋ K ↠Γ(K):= { (a,h) ∈ [a,∞) × [0,H]^2N | ℐ_S(K,a,h) ≥ 0 for all S ∈𝒮}. Then, the minimal price of a trading strategy that leads to a greater price than 0 for each possible value S∈𝒮, in dependence of strikes K=(K_i,j)_i=1,…,N_Ψ, j=1,…,n_i and option prices π =(π_i,j^+,π_i,j^-)_i=1,…,N_Ψ, j=1,…,n_i, is given by V:[0, K]^N× [0, π]^2N → (K,π) ↦inf_(a,h) ∈Γ(K) f(π,a,h). In this paper, we consider the following type of model-free[It is called model-free since no probabilistic assumptions on the financial market has been imposed] static arbitrage. We refer to <cit.> for several notions of model-free arbitrage. Let (K,π) ∈ [0, K]^N× [0, π]^2N. Then, we call a static trading strategy (a,h) ∈ [a,∞) × [0, H]^2N a model-free static arbitrage strategy if the following two conditions hold. (i) (a,h)∈Γ(K), (ii) f(π,a,h)<0. Moreover, for any ε>0 we call a model-free static arbitrage strategy to be of magnitude ε if f(π,a,h)≤ -ε. This means according to Definition <ref> that the market with parameters (K,π) ∈ [0, K]^N× [0, π]^2N admits no model-free static arbitrage strategy if and only if V(K,π) ≥ 0. §.§ Neural Networks By neural networks with input dimension d_in∈, output dimension d_out∈, and number of layers l ∈ we refer to functions of the form ^d_in →^d_out x ↦A_l∘φ_l ∘A_l-1∘⋯∘φ_1 ∘A_0(x), where (A_i)_i=0,…,l are affine[This means for all i=0,…,l, the function A_i is assumed to have an affine structure of the form A_i(x)=M_ix + b_i for some matrix M_i∈^ h_i+1× h_i and some vector b_i∈^h_i+1, where h_0:=d_in and h_l+1:=d_out. ] functions of the form A_0: ^d_in→^h_1, A_i:^h_i→^h_i+1 for i =1,…,l-1, (if l>1), and A_l : ^h_l→^d_out, and where the function φ_i is applied componentwise, i.e., for i=1,…,l we have φ_i(x_1,…,x_h_i)=(φ(x_1),…,φ(x_h_i)). The function φ:→ is called activation function and assumed to be continuous and non-polynomial. We say a neural network is deep if l≥ 2. Here h=(h_1,…,h_l) ∈^l denotes the dimensions (the number of neurons) of the hidden layers, also called hidden dimension. Then, we denote by 𝔑_d_in,d_out^l,h the set of all neural networks with input dimension d_in, output dimension d_out, l hidden layers, and hidden dimension h, whereas the set of all neural networks from ^d_in to ^d_out (i.e. without specifying the number of hidden layers and hidden dimension) is denoted by 𝔑_d_in,d_out:=⋃_l ∈⋃_h∈^l𝔑_d_in,d_out^l,h. It is well-known that the set of neural networks possess the so-called universal approximation property, see, e.g., <cit.>. For any compact set ⊂^d_in the set 𝔑_d_in,d_out|_ is dense in C(,^d_out) with respect to the topology of uniform convergence on C(,^d_out). §.§ Main results To formulate our main result we first impose the following mild assumptions.   (i) There exists some L_Ψ>0 such that for all S ∈𝒮 and for all i=1,…, N_Ψ the map [0,K] ∋ K_i,j↦Ψ_i(S,K_i,j) is L_Ψ-Lipschitz. (ii) There exists some by C_Ψ>0 such that the map Ψ_i is bounded by C_Ψ on 𝒮× [0,K] for all i=1,…, N_Ψ. 
First, note that we do not impose any topological or geometric conditions on the prediction set 𝒮⊂ [0,∞)^d. However, a sufficient criterion for Assumption <ref> (ii) to hold would be that, e.g., 𝒮⊂ [0,∞)^d is bounded and that [0,∞)^d × [0,K] ∋ (S,K) ↦Ψ_i(S,K) is continuous for each i=1,…,N_Ψ. Moreover, note that Assumption <ref> (i) is satisfied for example for any payoff function which is continuous and piece-wise affine (CPWA), which includes most relevant payoff functions in finance. We refer to <cit.> for a detailed list of examples of (CPWA) payoff functions. In our first result, we conclude that the financial market described in Section <ref> admits model-free static arbitrage if and only if there exists a neural network that detects the existence of model-free static arbitrage by outputting a corresponding arbitrage strategy. Let Assumption <ref> hold true, and let (K,π)∈ [0, K]^N× [0, π]^2N. Then, there exists model-free static arbitrage if and only if there exists a neural network ∈𝔑_3N,1+2N with (i) (K,π):=(_a(K,π),_h(K,π)) ∈Γ(K), (ii) f(π,_a(K,π),_h(K,π))<0. In our second result, we show that for any given ε>0 and 0< δ< ε there exists a single neural network such that for any given strikes K=(K_i,j)_i=1,…,N_Ψ, j=1,…,n_i and option prices π =(π_i,j^+,π_i,j^-)_i=1,…,N_Ψ, j=1,…,n_i the neural network can detect model-free static arbitrage of magnitude δ if the financial market with corresponding market conditions (K,π) admits static arbitrage of magnitude ε. From a practical point of view, this is crucial, since it allows the financial trader to only train one single neural network which can then, once trained, instantaneously detect corresponding static arbitrage opportunities if the current market conditions (K,π) admit such opportunities. On the other hand, a trader applying the trained neural network to a financial market which admits no static arbitrage opportunities pays at most ε-δ for the trading strategy, i.e., if ε≈δ, the risk of paying for trading strategies which are no static arbitrage strategies can be reduced to an arbitrarily small amount. Let ε>0 and 0<δ<ε. Then, there exists a neural ∈𝔑_3N,1+2N such that for every (K,π)∈ [0, K]^N× [0, π]^2N the following holds. (i) If the financial market with respect to (K,π) admits model-free static arbitrage of magnitude ε, then the neural network outputs a trading strategy (_a(K,π),_h(K,π)) which is a model-free static arbitrage of magnitude δ. (ii) If the financial market with respect to (K,π) admits no model-free static arbitrage, then the neural network outputs a trading strategy (_a(K,π),_h(K,π)) ∈Γ(K) which has a price of at most ε-δ. The main idea to derive Theorem <ref> and Theorem <ref> relies on the relation between arbitrage and supehedging of the 0-payoff function. The following result establishes that for any prescribed ε>0 there exists a single neural network such that for any given strikes K=(K_i,j)_i=1,…,N_Ψ, j=1,…,n_i and option prices π =(π_i,j^+,π_i,j^-)_i=1,…,N_Ψ, j=1,…,n_i defining the market, the neural network produces a static trading strategy which superhedges the 0-payoff for all possible values S ∈𝒮 whose price is ε-optimal. Let Assumption <ref> hold true. Then for all ε>0 there exists a neural network ∈𝔑_3N,1+2N such that (i) (K,π):=(_a(K,π),_h(K,π)) ∈Γ(K) for all (K,π) ∈ [0, K]^N× [0, π]^2N, (ii) f(π,_a(K,π),_h(K,π))-V(K,π) ≤ε for all (K,π) ∈ [0, K]^N× [0, π]^2N. In fact, we will use Proposition <ref> to prove our main results Theorem <ref> and Theorem <ref> on detecting static arbitrage strategies. 
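Purely as an illustration of how conditions (i) and (ii) can be checked numerically for a given candidate strategy (a,h), consider the following sketch. It assumes call-type payoffs Ψ_i(S,K)=(S_i-K)^+ with one row of strikes per underlying asset and replaces the prediction set 𝒮 by a finite sample of scenarios, so it only provides a necessary check of feasibility rather than the exact verification over all of 𝒮; the function names are our own.

```python
import numpy as np

def payoff(S, K, a, h_plus, h_minus):
    # I_S(K, a, h) = a + sum_{i,j} (h+_{ij} - h-_{ij}) * (S_i - K_{ij})^+
    calls = np.maximum(S[:, None] - K, 0.0)          # (d, n) call payoffs
    return a + ((h_plus - h_minus) * calls).sum()

def price(pi_plus, pi_minus, a, h_plus, h_minus):
    # f(pi, a, h) = a + sum_{i,j} (h+_{ij} * pi+_{ij} - h-_{ij} * pi-_{ij})
    return a + (h_plus * pi_plus - h_minus * pi_minus).sum()

def is_static_arbitrage(S_samples, K, pi_plus, pi_minus, a, h_plus, h_minus, eps=0.0):
    # condition (i): payoff >= 0 on every sampled scenario S from the prediction set
    feasible = all(payoff(S, K, a, h_plus, h_minus) >= 0.0 for S in S_samples)
    # condition (ii): strictly negative price (<= -eps for arbitrage of magnitude eps)
    return feasible and price(pi_plus, pi_minus, a, h_plus, h_minus) <= -eps
```

Here K, h_plus, h_minus, pi_plus, pi_minus are arrays of shape (d, n) and S_samples is any finite collection of scenario vectors from 𝒮.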
To prove Proposition <ref>, we interpret (<ref>) as a class of linear semi-infinite optimization problem (LSIP), where each (K,π) ∈ [0, K]^N× [0, π]^2N determines a single (LSIP). In Section 3, we introduce a (much more) general class of convex semi-infinite optimization problem (CSIP) which covers (<ref>) as special case. Then we show that a single neural network can approximately solve all (CSIP) of this class simultaneously. We refer to Theorem <ref> for the precise statement. The proofs of all our main results are provided in Section <ref>. § THE NUMERICS OF STATIC ARBITRAGE DETECTION IN FINANCIAL MARKETS The results from Section <ref> prove, with non-constructive arguments, the existence of neural networks that can detect model-free arbitrage strategies. These results therefore immediately raise the question how to construct neural networks that are capable to learn these strategies. To this end, we present with Algorithm <ref> an approach that combines a supervised learning approach in the spirit of <cit.> with an unsupervised learning approach as presented for example in <cit.>, <cit.>, and <cit.>. Algorithm <ref> uses the fact that in many situations there exists an applicable algorithm to compute model-free price bounds and corresponding trading strategies that approximate these bounds arbitrarily well. We exploit this fact by training a neural network offline to approximate the outcomes of such algorithms. To compute the strategies that approximately attain these bounds, we suggest employing the algorithm presented in <cit.>. The motivation of our methodology is the following. While the offline training of the neural network might take some time, once trained, the neural network is able to detect immediately static arbitrage and the corresponding trading strategies in the market, provided it exists. This is crucial as stock prices and corresponding option prices move quickly in real financial markets and therefore having an algorithm which can adjust fast to new market parameters is desired. This can be realized, e.g., by tanh and sigmoid activation functions multiplied with the corresponding bounds. Algorithm <ref> is designed to minimize the price function f (π_i,b,_a(K_i,b,π_i,b),_h(K_i,b,π_i,b)) by incorporating two specific penalization terms. These terms are carefully crafted to facilitate the learning of the key characteristics associated with model-free arbitrage strategies. The first penalization term [We denote by x^+ = max{x,0} the positive part of a real number x ∈.] γ·1/S_B∑_j=1^S_B((-ℐ_S_i,b,j(K_i,b,_a(K_i,b,π_i,b),_h(K_i,b,π_i,b))^+)^2 incentivizes the feasibility (see (<ref>)) of learned strategies by penalizing negative payoffs in proportion to the degree of violation of the positivity constraint. This encourages the strategies to have positive payoffs. The second penalization term γ·(-(Y_i,b+0.5)· f (π_i,b,_a(K_i,b,π_i,b),_h(K_i,b,π_i,b)))^+ vanishes if and only if the price f (π_i,b,_a(K_i,b,π_i,b),_h(K_i,b,π_i,b)) of the strategy expressed by the neural network and the pre-computed price Y_i,b are either both non-negative, or both negative. Since the pre-computed price Y_i,b is negative if and only if the market (under the current market parameters (K_i,b,π_i,b)) admits some static arbitrage, the second penalization term vanishes if and only if the trading strategy expressed by the neural network correctly identifies if the markets admits static arbitrage, or not. 
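A compact sketch of how the price objective and these two penalization terms could be combined into a single mini-batch loss is given below; the network interface, the call-payoff assumption, and the tensor layout are our own simplifications and not the exact implementation of Algorithm <ref>. Only the structure of the two penalties and the value γ=10 000 follow the text above.

```python
import torch

def penalized_loss(net, K, pi_plus, pi_minus, S_batch, Y, gamma=10_000.0):
    """One mini-batch of the penalized objective (sketch).

    K, pi_plus, pi_minus : (B, N) strikes and option prices describing each market
    S_batch              : (B, S_B, N) sampled scenarios, aligned per option (call payoffs assumed)
    Y                    : (B,) pre-computed LSIP prices used as sign labels
    net(K, pi_plus, pi_minus) -> a: (B,), h_plus: (B, N), h_minus: (B, N)
    """
    a, h_plus, h_minus = net(K, pi_plus, pi_minus)

    # price f(pi, a, h) of the strategy expressed by the network
    f = a + (h_plus * pi_plus - h_minus * pi_minus).sum(-1)                    # (B,)

    # payoff I_S(K, a, h) on each sampled scenario, assuming (S - K)^+ payoffs
    calls = torch.clamp(S_batch - K.unsqueeze(1), min=0.0)                     # (B, S_B, N)
    I = a.unsqueeze(1) + ((h_plus - h_minus).unsqueeze(1) * calls).sum(-1)     # (B, S_B)

    feas_pen = torch.clamp(-I, min=0.0).pow(2).mean(dim=1)   # squared superhedging violation
    sign_pen = torch.clamp(-(Y + 0.5) * f, min=0.0)          # sign consistency with the label Y

    return (f + gamma * (feas_pen + sign_pen)).mean()
```

Minimizing this quantity over the network parameters corresponds to driving the learned price down while the two penalties push the strategy towards feasibility and towards the correct sign of the price.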
It is worth mentioning that the design of the penalization terms does not guarantee the feasibility of strategies in the sense of (<ref>) or the correct sign of prices. However, due to the penalty imposed on constraint violations, as demonstrated in Example <ref>, in practice, violations happen frequently but are typically only marginal in magnitude. §.§ Application to real financial data In the following we apply Algorithm <ref> to real financial data in order to detect model-free static arbitrage in the trading of financial derivatives. For convenience of the reader, we provide under https://github.com/juliansester/Deep-Arbitragehttps://github.com/juliansester/Deep-Arbitrage the used Python-code. §.§.§ Training with data of the S&P 500 We consider trading in a financial market that consists of d=5 assets and corresponding 10 vanilla call options (i.e. 10 different strikes) written on each of the assets. This means we consider N_Ψ = 5 different types of options with n_i = 11 for i=1,…,5 referring to the number of call options plus the underlying assets (which can be considered as a call option with strike 0) so that in total N = ∑_i=1^N_Ψ n_i = 55 different securities are considered. To create a training set we consider for each of the 500 constituents of the S & P 500 the 10 most liquidly traded["Most liquidly traded" refers to the strikes with the highest trading volume.] call options with maturity T = 19 May 2023. The data was downloaded on 25 April 2023 via Yahoo Finance. We then use this data to create 50 000 samples by combining the call options of 5 randomly chosen constituents in each sample. The spot values of the underlying assets are scaled to 1, therefore the strikes and corresponding prices are included as percentage values w.r.t. the spot value of the underlying asset. We assume 𝒮= [0,2]^5, i.e, we assume that the underlying assets at maturity only attain values between 0% and 200% of its current spot value. This assumption can be regarded as a restriction imposed on the space of possible outcomes to a prediction set as mentioned in the beginning of Section <ref>. Relying on these samples, we compute, using the LSIP algorithm from <cit.>, minimal super-replication strategies of the 0-payoff for each of the 50 000 samples. Of these 50 000 samples, we regard 5000 samples as a test set on which the neural network is not trained. To demonstrate the performance of our approach, we apply Algorithm <ref> with N_iter = 20 000 iterations, a penalization parameter[Following the empirical experiments from <cit.> and <cit.>, in the implementation, we let γ increase with the number of iterations so that in the first iteration γ equal 1, and after 20 000 iterations γ is 10 000.] γ = 10 000, and batch sizes S_B = 32 and B=512 to train a neural network with 1024 neurons and 5 hidden layers and with a ReLU activation function in each of the hidden layers. The used learning rate for training with the Adam optimizer (<cit.>) is 0.0001. To train the neural network, we assume a= -1, H=1, i.e., the maximal investment is 1 in each position[Note that in practice these bounds impose not a severe restriction as the resultant strategies can be scaled arbitrarily large if desired.]. The training set of 45 000 samples contains 34 146 cases in which the market admits model-free arbitrage while the test set contains 3 787 cases of model-free arbitrage. 
After training on the 45 000 samples, the neural network assigns to 39 570 out of 45 000 correctly the correct sign of the price of the strategy learned by the neural network, i.e., in 87.93 % of cases on the training set, the neural network can correctly decide whether the market admits arbitrage or not. On the test set we have 4164 out of 5000 correct identifications which corresponds to 83.28 %. However, it is important to emphasize that wrong identifications of the sign of the resultant strategy does not mean that the resultant strategy incur huge losses, as the magnitude of the predicted prices turns out to be on a small scale for the majority strategies with wrongly predicted sign. To showcase this, we evaluate the net profit ℐ_S_i,j(K_i,a_i,h_i)-f(π_i,a_i,h_i) for i = 1,…,5000, j =1,…, 200, i.e., each of the 5000 samples of the test set is evaluated on 200 realizations of S ∈𝒮 that are denoted by S_i,j (uniformly sampled from [0,2]^5) and we show the results in Table <ref>. The results verify that on the test set the net profit is in the vast majority of the 1 000 000 evaluated cases positive, compare also the histogram provided in Figure <ref>. §.§.§ Backtesting with historical option prices We backtest the strategy trained in Section <ref> on the stocks of Apple, Alphabet, Microsoft, Google, and Meta. To this end, we consider for each of the companies call options with maturity 24 March 2023 for ten different strikes. The bid and ask prices of these call options and the underlying securities were observed on 33 trading days ranging from 2 February 2023 until 22 March 2023. We apply the strategy trained in Section <ref> to the prices observed on each of the 33 trading days and evaluate it on the realized values of the 5 underlying securities at maturity. In Table <ref> and Figure <ref> we summarize the net profits of the 33 strategies. Note that to apply the trained neural network from Section <ref>, we first scale all the financial instruments such that the spot values of the underlying securities equal 1, as described in Section <ref>. Then, after applying the strategies to the scaled inputs, we rescale the values of the involved quantities back to unnormalized values, and we report in Table <ref> and Figure <ref> the net profits for both cases: after rescaling the values of the underlying securities, options, and strikes to unnormalized values, as well as without scaling back. The results of the backtesting study reveal that even though the neural network from Section <ref> was trained on data extracted at a different day (25 April 2025) involving call options with a different maturity written on other assets, the resultant strategy still allows to trade profitably in the majority of cases, showcasing the robustness of our algorithm. § APPROXIMATION OF OPTIMAL SOLUTIONS OF GENERAL CONVEX SEMI-INFINITE PROGRAMS BY NEURAL NETWORKS In this section we show for a certain class of convex semi-infinite optimization problems (CSIP) that each of them can be approximately solved by a single neural networks. More precisely, for every prescribed accuracy ε>0 we show that there exists a single neural network which outputs a feasible solution which is ε-optimal. This class of convex semi-infinite problems covers the setting of static arbitrage detection introduced in Section <ref> as special case. We leave further applications for future research. §.§ Setting Let a∈, let _x ⊂^n_x be compact for some n_x ∈, and let _y ⊂^n_y be compact and convex for some n_y ∈. 
We consider some function f: _x × [a,∞) ×_y ∋ (x,a,y) ↦ f(x,a,y)∈, which we aim to minimize under suitable constraints. To define these constraints we consider some (possibly uncountable infinite) index set 𝒮 as well as for all s ∈𝒮 a function _x × [a,∞) ×_y ∋ (x,a,y) ↦ℐ_s(x,a,y) ∈. Further, let _x ∋ x ↠Γ(X)⊆ [a,∞) ×_y be the correspondence defined by Γ(x):= { (a,y) ∈ [a,∞) ×_y  |  -ℐ_s(x,a,y) ≤ 0 for all s ∈𝒮}, that defines the set of feasible elements from [a,∞) ×_y. To define our optimization problem, we now consider the function _x ∋ x ↦ V(x) ∈ defined by V(x):= inf_(a,y) ∈Γ(x) f(x,a,y). We impose the following assumptions on the above defined quantities. [Assumptions on f]  (i) There exists some L_f≥ 1 such that the function [a,∞) ×_y ∋ (a,y) ↦ f(x,a,y) is L_f-Lipschitz continuous for all x ∈_x. (ii) The function _x × [a,∞) ×_y ∋(x,a,y) ↦ f(x,a,y) is continuous. (iii) The function [a,∞)×_y ∋ (a,y) ↦ f(x,a,y) is convex for all x ∈_x. (iv) The function [a,∞)∋ a ↦ f(x,a,y) is increasing for all x ∈_x, y∈_y. (v) We have that ℒ_a,f:= inf_x ∈_x, y ∈_y a_1,a_2 ∈ [a,∞), a_1 ≠ a_2|f(x,a_1,y)-f(x,a_2,y)|/|a_1-a_2|>0. [Assumptions on ℐ_s]  (i) There exists some L_ℐ≥ 1 such that _x × [a,∞) ×_y ∋ (x,a,y) ↦ℐ_s(x,a,y) is L_ℐ-Lipschitz continuous for all s ∈𝒮. (ii)The function [a,∞) ×_y ∋ (a,y) ↦ℐ_s(x,a,y) is concave for all x ∈_x, s ∈𝒮. (iii) The function [a,∞) ∋ a ↦ℐ_s(x,a,y) is increasing for all x ∈_x, y ∈_y, s ∈𝒮. (iv) We have that ℒ_a,ℐ:=inf_s ∈𝒮inf_x ∈_x, y ∈_yinf_a_1 ≠ a_2, a_1,a_2 ∈ [a,∞)|ℐ_s(x,a_1,y)-ℐ_s(x,a_2,y)|/|a_1-a_2| >0. (v) We have that inf_s ∈𝒮, x ∈_x, y ∈_yℐ_s(x,a,y)>-∞. [Assumptions on _y]  There exists 0 < r<1 and L_r ≥ 1 such that for all 0 < δ< r there exists some closed and convex set C_y,δ⊂_y such that for all y' ∈ C_y,δ, y ∈^n_y we have y'-y≤δ⇒ y∈_y, and max_y ∈_ymin_y' ∈ C_y,δ{y-y'}≤ L_r δ.   (i) Let a^UB:=a+1/ℒ_a,ℐ|inf_s ∈𝒮, x ∈_x, y ∈_yℐ_s(x,a,y)|∈ [a,∞) Then, we have ℐ_s(x,a^UB,y) ≥ 0 for all x ∈_x, y ∈_y, s ∈𝒮. In particular, Γ(x) ≠∅ for all x ∈_x. Indeed, by using the definition of a^UB and ℒ_a,ℐ together with Assumption <ref> (v) we have for all x∈_x, y ∈_y, s ∈𝒮 that ℐ _s(x,a^UB,y) =ℐ _s(x,a^UB,y)-ℐ _s(x,a,y)+ℐ _s(x,a,y) ≥ℒ_a,ℐ·a^UB-ℒ_a,ℐ·a + inf_s ∈𝒮, x ∈_x, y ∈_yℐ _s(x,a,y) = ℒ_a,ℐ·( a+1/ℒ_a,ℐ·|inf_s ∈𝒮, x ∈_x, y ∈_yℐ _s(x,a,y)|)-ℒ_a,ℐ·a + inf_s ∈𝒮, x ∈_x, y ∈_yℐ _s(x,a,y) ≥ 0. (ii)Assumption <ref> (ii) and  (iv), and the assumption that _x and _y are compact ensure together with Remark <ref> (i) that V(x) ∈ for all x ∈_x. Indeed, for any x ∈_x and (a,y) ∈Γ(x), we have f(x,a,y) ≥ f(x,a,y)≥inf_x∈_x, y ∈_yf(x,a,y)>-∞. (iii) Assumption <ref> (iv) and  (v) ensure that the function f is strictly increasing in a∈[a,∞) uniformly in x∈_x, y∈_y. Analogously, Assumption <ref> (iii) and  (iv) ensure that the function ℐ is strictly increasing in a∈[a,∞) uniformly in x∈_x, y∈_y, s∈𝒮. (iv) Note that Assumption <ref> roughly speaking means that the geometry of _y⊆^n_y is similar to a box. Indeed, if _y=×_i=1^n_y [l_i,u_i] for some -∞<ł_i<u_i<∞, i=1,…,n_y, then one can choose 0<r<min_i (u_i-l_i/2)>0, C_y,δ:=×_i=1^n_y [l_i+δ,u_i-δ]⊆_y, and L_r:=√(n_y). Our main result of this section establishes the existence of a single neural network such that for any input x∈_x defining the (CSIP) in (<ref>) the neural network outputs a feasible solution which is ε-optimal. Let Assumptions <ref>, <ref>, and <ref> hold true. Then, for all ε>0 there exists a neural network ∈𝔑_n_x,1+n_y such that (i) (x) := (_a(x), _y(x)) ∈Γ(x) for all x ∈_x, (ii) f(x, _a(x), _y(x)) - V(x) ≤ε for all x ∈_x. 
The proof of Theorem <ref> is provided in the next section. § PROOFS AND AUXILIARY RESULTS In this section, we present the proofs of the main results from Section <ref> and  <ref>. §.§ Proofs of Section <ref> The proof of Proposition <ref> consists of verifying that the optimization problem (<ref>) is included in the general (CSIP) introduced in Section <ref>. Then, applying Proposition <ref> together with the universal approximation property of neural networks allows to conclude Theorem <ref> and Theorem <ref>. We verify that the conditions imposed in Theorem <ref> are satisfied under Assumption <ref> with x ← (K,π), a ← a, y ← h, V ← V in the notation of Theorem <ref>. To that end, note that Assumption <ref> holds with ℒ_a, f=1, and L_f = max{1, π}√(1+2N). Moreover, note that for all x ∈_x, (a,0) ∈([a,∞) ∩ [0,∞))× [0,H]^2N satisfies ℐ_S(x,a,0) ≥ 0 for all S ∈𝒮. Hence, Assumption <ref> holds with ℒ_a, ℐ = 1 and L_ℐ = max{1, 2HL_Ψ, C_Ψ}√(3N+1). Furthermore, for any 0<r<1 and any 0 < δ<1, Assumption <ref> is satisfied with C_y,δ=[δ,H-δ]^2N and L_r=√(2N). Therefore, the result follows by Theorem <ref>. Let (K,π)∈ [0, K]^N× [0, H]^2N. Assume first there exists model-free arbitrage, i.e., we have V(K,π)<0. Then, we choose ε∈ with 0<ε<-V(K,π) and obtain with Proposition <ref> the existence of a neural network =(_a,_h) ∈𝔑_3N,1+2N with (K,π) ∈Γ(K) and with f(π,_a(K,π),_h(K,π))-V(K,π) ≤ε which implies f(π,_a(K,π),_h(K,π)) ≤ε+V(K,π) <-V(K,π)+V(K,π) =0. Conversely, if conditions (i) and (ii) hold, then the output of the neural network constitutes a model-free arbitrage opportunity. Let ε>0. By Proposition <ref>, there exists a neural network ∈𝔑_3N,1+2N such that for every (K,π) ∈ [0, K]^N× [0, π]^2N (K,π):=(_a(K,π),_h(K,π)) ∈Γ(K) f(π,_a(K,π),_h(K,π))-V(K,π) ≤ε- δ. Moreover, for every (K,π) ∈ [0, K]^N× [0, π]^2N, if the market with respect to (K,π) admits model-free static arbitrage of magnitude ε, then by definition V(K,π)≤ -ε. This implies that (K,π):=(_a(K,π),_h(K,π)) provides a model-free static arbitrage strategy of magnitude δ. If the market with respect to (K,π) admits no model-free static arbitrage, then V(K,π)=0 and hence f(π,_a(K,π),_h(K,π)) ≤ V(K,π)+ε- δ =ε-δ. It remains to prove Theorem <ref>, which is our main technical result. Its proof is provided in the next subsection. §.§ Proofs of Section <ref> The main idea of the proof of Theorem <ref> is to show that the correspondence of feasible ε-optimizers of the convex semi-infinite program (CSIP) defined in (<ref>), as a function of the input x∈_x of the (CSIP), is non-empty, convex, closed, and lower hemicontinuous[We refer to, e.g., <cit.> as reference for the standard notions of lower/upper (hemi)continuity of correspondences.], where the major difficulty lies in the establishment of the lower hemicontinuity. This then allows us to apply Michael's continuous selection theorem (<cit.>), which together with the universal approximation property of neural networks leads to the existence of a single neural network which for any input x∈_x defining the (CSIP) in (<ref>) outputs a feasible solution which is ε-optimal. We highlight that no strict-convexity of the map (a,y)↦ f(x,a,y) for any fixed x is assumed in (<ref>), hence one cannot expect uniqueness of optimizers for the (CSIP), which in turn means that one cannot expect to have lower hemicontinuity of the correspondence of feasible true optimizers of the (CSIP) in (<ref>). 
§.§.§ Auxiliary Results Before reporting the proof of Theorem <ref>, we establish several auxiliary results which are necessary for the proof of the main result from Theorem <ref>. For all of the auxiliary results from Section <ref> we assume the validity of Assumption <ref>, Assumption <ref> and Assumption <ref>. Moreover, from now on, we define the following quantity a^UB:=a+1/ℒ_a,ℐ|inf_s ∈𝒮, x ∈_x, y ∈_yℐ_s(x,a,y)|∈ [a,∞).   (i) Let a ∈ [a,∞) such that a≥a^UB. Then, we have that ℐ_s(x,a,y) ≥ 0 for all x ∈_x, y ∈_y, s ∈𝒮. (ii) Let a ∈ [a,∞) such that a ≥a^UB+1/ℒ_a,f. Then, for all x ∈_x and for all y ∈_y we have f(x,a,y)-V(x) ≥ 1.   (i) Let a ∈ [a,a] such that a≥a^UB. Further, let x∈_x, y ∈_y, s ∈𝒮. Then, we have by the monotonicity of ℐ_s on [a,a] (stated in Assumption <ref> (iii)) that ℐ_s(x,a,y) ≥ℐ_s(x,a^UB,y) = ℐ_s(x,a^UB,y)-ℐ_s(x,a,y)+ℐ_s(x,a,y). By using the above inequality (<ref>), Assumption <ref> (iv), and the definition of a^UB we then have ℐ_s(x,a,y) ≥ℒ_a,ℐ·(a^UB-a)+inf_s ∈𝒮, x ∈_x, y ∈_yℐ_s(x,a,y) =ℒ_a,ℐ·a^UB-ℒ_a,ℐ·a+inf_s ∈𝒮, x ∈_x, y ∈_yℐ_s(x,a,y) =ℒ_a,ℐ·(a+1/ℒ_a,ℐ| inf_s ∈𝒮, x ∈_x, y ∈_yℐ_s(x,a,y)| )-ℒ_a,ℐ·a+inf_s ∈𝒮, x ∈_x, y ∈_yℐ_s(x,a,y) = | inf_s ∈𝒮, x ∈_x, y ∈_yℐ_s(x,a,y) | + inf_s ∈𝒮, x ∈_x, y ∈_yℐ_s(x,a,y) ≥ 0. (ii) First note that, by the assertion from (i), we have (a^UB,y) ∈Γ(x) for all y ∈_y and hence f(x,a^UB,y) ≥ V(x) for all x ∈_x, y ∈_y. Then, as by assumption a ≥a^UB+1/ℒ_a,f, we have for all x ∈_x and for all y ∈_y by Assumption <ref> (iv), by Assumption <ref> (v), and by (<ref>), that f(x,a,y)-V(x) =f(x,a,y) -f(x,a^UB,y)+f(x,a^UB,y)-V(x) ≥ℒ_a,f· (a- a^UB)+f(x,a^UB,y)-V(x) ≥ℒ_a,f· (a- a^UB) ≥ 1. From now on, let a:= a^UB+1/ℒ_a,f+2, where a^UB is defined in (<ref>). Moreover, we define the correspondence X_x ∋ x ↠Γ_a(x):= {(a,y) ∈Γ(x) | a ≤a}= {(a,y)∈ [a,a] ×_y | ℐ_s(x,a,y) ≥ 0 for all s ∈𝒮}. Let a be defined in (<ref>). Moreover, let _x ∋ x ↦Γ_a(x) be defined in (<ref>). Then, for all x ∈_x, Γ_a(x) is nonempty, and for all x ∈_x V_a(x):= inf_(a,y) ∈Γ_a(x) f(x,a,y)=inf_(a,y) ∈Γ (x) f(x,a,y)=V(x). By Remark <ref> (i) we see that Γ_a(x) ≠∅ for all x ∈_x. Moreover, as Γ_a(x) ⊆Γ(x), we have V_a(x)≥ V(x) for every x ∈_x. To see that V_a(x)≤ V(x) for every x ∈_x, fix any x ∈_x and let (a,y) ∈Γ(x). By Remark <ref> (i), we have (a^UB,y) ∈Γ_a(x). Hence, f(x,a,y) ≥ f(x,min{a,a^UB},y) ≥inf_(a, y) ∈Γ_a(x)f(x,a, y). Since (a,y) ∈Γ(x) was arbitrary we obtain the desired result. The map _x ∋ x ↠Γ_a(x) defined in (<ref>) is a non-empty, compact-valued, convex-valued, and continuous correspondence. The non-emptiness follows from Remark <ref>. Let x ∈_x. Consider a sequence (a^(n),y^(n))_n ∈⊆Γ_a(x). Then, by the compactness of [a,a] ×_y, there exists a subsequence (a^(n_k),y^(n_k))_k ∈⊆Γ_a(x) such that (a^(n_k),y^(n_k)) → (a,y) as k →∞ for some (a,y) ∈ [a,a] ×_y. The continuity of [a,a] ×_y ∋ (a,y) ↦ℐ_s(x,a,y), which is ensured by Assumption <ref> (i), then implies that 0 ≤lim_k →∞ℐ_s(x,a^(n_k),y^(n_k)) = ℐ_s(x,a,y). Hence, Γ_a(x) is compact. Let x ∈_x, and let (a,y), (a',y') ∈Γ_a(x). Then, it follows for all t ∈ [0,1] by Assumption <ref> (ii) that ℐ_s (x, t· a+(1-t)a', ty+(1-t)· y' )≥ t·ℐ_s(x,a,y)+(1-t) ·ℐ_s(x,a',y')≥ 0 for all s ∈𝒮. Hence, the convexity of (<ref>) follows. It remains to show the continuity, i.e., that the map from (<ref>) is lower hemicontinuous and upper hemicontinuous. Let (x^(n))_n ∈⊆_x with lim_n →∞ x^(n) = x ∈_x and let (a,y) ∈Γ_a(x) ⊆ [a,a] ×_y. 
To show the lower-hemicontinuity, according to the characterization provided, e.g., in <cit.>, we need to prove the existence of a subsequence (x^(n_k))_k ∈ and elements (a^(k),y^(k)) ∈Γ(x^(n_k)) for each k ∈ with lim_k →∞(a^(k),y^(k)) =(a,y). First assume that a ≤a^UB. Since lim_n →∞ x^(n) = x, there exists, by definition of a, some n_0 ∈ such that for all n ≥ n_0 we have a^(n):= a+L_ℐ/ℒ_a,ℐ·x^(n)-x≤a. Since by Assumption <ref> (iii) the map [a,a] ∋ a ↦ℐ_s(x,a,y) is monotone for all x ∈_x, y ∈_y, s ∈𝒮, with Assumption <ref>  (iv), and with the Lipschitz-property of ℐ_s from Assumption <ref> (i), we have for all s∈𝒮 and for all n ∈ that ℐ_s(x^(n), a^(n), y) = ℐ_s(x^(n), a, y) -ℐ_s(x^(n), a, y) +ℐ_s(x^(n), a+L_ℐ/ℒ_a,ℐ·x^(n)-x, y) ≥ℐ_s(x^(n), a, y)+ ℒ_a,ℐ·L_ℐ/ℒ_a,ℐ·x^(n)-x ≥ℐ_s(x^(n), a, y)-ℐ_s(x^(n), a, y)+ℐ_s(x, a, y) ≥ 0, where the last inequality follows since (a,y) ∈Γ_a(x). Thus, we have (a^(n),y) ∈Γ_a(x^(n)) for all n ≥ n_0 as well as by (<ref>) that lim_n →∞ (a^(n),y) = (a,y). Hence lower-hemicontinuity follows for the case a ≤a^UB. Now we consider the case that a > a^UB. Note that in this case ℐ _s(x,a,y) >0 for all s ∈𝒮 due to the strict monotonocity of ℐ _s and by Remark <ref> (i). Hence, by the continuity of ℐ _s, there exists some n_0 ∈ such that for all n ≥ n_0 we have ℐ _s(x^(n),a,y)>0 implying that (a,y) ∈Γ_a(x^(n)) for all n ≥ n_0. Thus, we conclude with <cit.> the lower hemicontinuity of the map from (<ref>) also for the case a > a^UB. It remains to show the upper hemicontinuity. To this end, let (x^(n),a^(n),y^(n)) ∈GrΓ_a with lim_n → x^(n) = x. We apply the characterization of upper hemicontinuity provided, e.g., in <cit.>, and therefore we need to show the existence of a subsequence (a^(n_k),y^(n_k))_k ∈ with lim_k →∞( a^(n_k),y^(n_k))=(a,y) ∈Γ_a(x). As (a^(n),y^(n))_n ∈⊆ [a,a] ×_y is a sequence defined on a compact space, there exists a subsequence (a^(n_k),y^(n_k))_k ∈ with lim_k →∞( a^(n_k),y^(n_k))=(a,y) ∈ [a,a] ×_y. Since ℐ_s(x^(n_k),a^(n_k),y^(n_k)) ≥ 0 for all k ∈ as (x^(n_k),a^(n_k),y^(n_k)) ∈GrΓ_a, we obtain by the continuity of ℐ_s that ℐ_s(x,a,y) ≥ 0. This means (a,y) ∈Γ_a(x). For all ε∈ (0,1) the correspondence _x ∋ x ↠ℳ_ε(x):={(a,y) ∈Γ_a(x) |  f(x,a,y)- V_a(x) < ε} is non-empty, convex-valued, and lower hemicontinuous. Let ε∈ (0,1). The non-emptiness of ℳ_ε(x) for each x ∈_x follows by definition and by Remark <ref>. To show the convexity of ℳ_ε(x) for each x ∈_x, fix any x ∈_x and let (y,a),  (y, a) ∈ℳ_ε(x) and t ∈ [0,1]. Then by Lemma <ref> implying that Γ_a(x) is convex, we have t· (a,y)+(1-t)· (y, a) ∈Γ_a(x). Moreover, by Assumption <ref> (iii) ensuring that [a,a] ×_y ∋ (a,y) ↦ f(x,a,y) is convex, we have f(x,t · a + (1-t) ·a, t · y + (1-t) ·y) - V_a(x) ≤ t ·( f(x, a, y) - V_a(x) )+ (1-t) ·(f(x, a, y) - V_a(x) ) ≤ t ·ε +(1-t) ·ε = ε, from which we conclude the convexity of ℳ_ε(x). To show the lower hemicontinuity of (<ref>) let (x^(n))_n ∈⊆_x with lim_n →∞x^(n) =x ∈_x, and let (a,y) ∈ℳ_ε(x). We apply the characterization of lower hemicontinuity from <cit.> and therefore aim at showing that there exists a subsequence (x^(n_k))_k ∈ and elements (a^(k),y^(k)) ∈ℳ_ε(x^(n_k)) for each k ∈ such that lim_k →∞ (a^(k),y^(k)) = (a,y). By Lemma <ref> the correspondence _x ∋ x ↠Γ_a(x) is non-empty, compact-valued, continuous, and by Assumption <ref> (ii), the map _x × [a,a] ×_y ∋(x,a,y) ↦ f(x,a,y) is continuous. Hence, Berge's maximum theorem (see <cit.> or <cit.>) is applicable. 
We then obtain by Berge's maximum theorem that the map _x ∋ x ↦ V_a(x):=inf_(a,y) ∈Γ_a(x) f(x,a,y) is continuous. Therefore, as (a,y) ∈ℳ_ε(x), and since both f and V_a are continuous, there exists some γ∈ (0,1) such that for all (x,'a',y') with (x,'a',y') ∈ℬ_γ(x,a,y) ⊆_x × [a,a] ×_y, it holds f(x',a',y') - V_a(x') < ε. Moreover, as lim_n →∞ x^(n) = x, there exist some n_0 ∈ such that for all n ≥ n_0 we have √(x^(n)-x^2+(L_ℐ/ℒ_a,ℐx^(n)-x)^2)≤γ. Moreover, since (a,y) ∈ℳ_ε(x) and ε∈ (0,1), we have by Lemma <ref> (ii) that a < a^UB+1ℒ_a,f. Hence, by (<ref>) and by definition of a we have for all n ≥ n_0 also that a+ L_ℐ/ℒ_a,ℐ x^(n)-x≤ a+ γ≤a. Note also that for n ≥ n_0 we have by Assumption <ref> (iv) and Assumption <ref> (i) for all s ∈𝒮 the following inequality ℐ_s (x^(n), a+ L_ℐ/ℒ_a,ℐ x^(n)-x, y) = ℐ_s (x^(n), a+ L_ℐ/ℒ_a,ℐ x^(n)-x, y)-ℐ_s (x^(n), a, y) +ℐ_s (x^(n), a, y) ≥ℒ_a,ℐL_ℐ/ℒ_a,ℐx^(n)-x+ℐ_s (x^(n), a, y) = L_ℐx^(n)-x+ℐ_s (x^(n), a, y)-ℐ_s (x, a, y)+ℐ_s (x, a, y) ≥ L_ℐx^(n)-x-L_ℐx^(n)-x +ℐ_s (x, a, y) ≥ 0, since (a,y) ∈Γ_a(x). Hence, (<ref>) and (<ref>) together show that (a+ L_ℐ/ℒ_a,ℐ x^(n)-x, y) ∈Γ_a(x^(n)) for all n ≥ n_0. By (<ref>) we have (x^(n),a+ L_ℐ/ℒ_a,ℐ x^(n)-x, y) ∈ℬ_γ(x,a,y) for all n ≥ n_0. Thus, it follows with (<ref>) and (<ref>) that (a+ L_ℐ/ℒ_a,ℐ x^(n)-x, y) ∈ℳ_ε(x^(n)) for all n ≥ n_0, proving the lower hemicontinuity of (<ref>), by applying the characterization of lower hemicontinuity from <cit.> to the subsequences (x^(n))_n∈, n≥ n_0 and (a+ L_ℐ/ℒ_a,ℐ x^(n)-x, y)_n∈, n≥ n_0. For all ε∈ (0,1) the correspondence _x ∋ x ↠ℳ_ε(x) := cl(ℳ_ε(x)) is nonempty, convex, closed, lower hemicontinuous, and satisfies ℳ_ε(x)⊆{(a,y) ∈Γ_a(x) |  f(x,a,y)-V_a(x) ≤ε}. The non-emptiness and convexity of the map defined in (<ref>) both follow from Lemma <ref>. That the map is closed is a consequence of the definition of a closure of a set. The lower-hemicontinuity also follows from Lemma <ref> and from <cit.> which ensures that the closure of a lower hemicontinuous map is again lower hemicontinuous. The relation (<ref>) follows as the map _x × [a,a] ×_y ∋(x,a,y) ↦ f(x,a,y) is continuous by Assumption <ref> (ii). For all ε∈ (0,1) there exists a continuous map _x ∋ x ↦(a^*,ε(x), y^*,ε(x)) ∈Γ_a(x) satisfying both (i) a^*,ε(x) ≤a^UB+1/ℒ_a,f for all x ∈_x, (ii) f(x, a^*,ε(x), y^*,ε)-V_a(x) ≤ε. Corollary <ref> ensures that the requirements for an application of the Michael selection theorem (see <cit.> or <cit.>) are fulfilled. By the Michael selection theorem we then obtain a continuous selector _x ∋ x ↦(a^*,ε(x), y^*,ε(x)) ∈cl(ℳ_ε(x)) ⊆Γ_a(x) implying, by definition of ℳ_ε(x), that (ii) is fulfilled. Assume now that (i) does not hold, i.e., that we have a^*,ε(x) > a^UB+1/ℒ_a,f. This, however, by Lemma <ref> (ii), contradicts (ii), which concludes the proof. Now, for any 0 < δ < r recall the definition of the set C_y,δ⊆_y from Assumption <ref>. For all δ∈ (0,r), the map _x ∋ x ↦(a_δ^*,ε(x), y_δ^*,ε(x)):= (a,y) ∈ [a+δ,a-δ] × C_y,δargmin(a,y)-(a^*,ε(x),y^*,ε(x))^2 is continuous. Note that (x,a,y) ↦(a,y)-(a^*,ε(x),y^*,ε(x))^2 is continuous by Corollary <ref>. Moreover, the single-valued map is well-defined as the projection of the point (a^*,ε(x),  y^*,ε(x)) onto the compact, convex set [a+δ,a-δ] × C_y,δ. The continuity follows now by, e.g., Berge's maximum theorem (<cit.>) and <cit.>. §.§.§ Proof of Theorem <ref> In Section <ref> we have established all auxiliary results that allow us now to report the proof of Theorem <ref>. 
Without loss of generality let ε∈ (0,1), else we substitute ε by ε:=ε/1+ε. By Corollary <ref>, for all x ∈_x there exists, by abuse of notation with ε←ε/2 in the notation of Corollary <ref>, some continuous map _x ∋ x ↦(a^*,ε(x), y^*,ε(x)) ∈Γ_a(x) satisfying for all x ∈_x that a^*,ε(x) ≤a^UB+1/ℒ_a,f and such that f(x, a^*,ε(x), y^*,ε)-V_a(x) ≤ε/2. We recall r ∈ (0,1) from Assumption <ref> and define δ_0 := εmin{ℒ_a,ℐ,1}/8 max{L_ℐ,ℒ_a,f}√(1+L_r^2) L_f· r ∈ (0,r). Note that by definition of the projection from Lemma <ref> with respect to [a-δ_0, a+δ_0] × C_y,δ_0, and by Assumption <ref> we have (x,a^*,ε(x), y^*,ε(x))- (x,a_δ_0^*,ε(x), y_δ_0^*,ε(x))≤√(δ_0^2+L_r^2 δ_0^2) = δ_0 √(1+L_r^2) for all x ∈_x. Hence, for all x ∈_x, by using the Lipschitz-continuity of f from Assumption <ref> (i), by (<ref>), and by the definition of δ_0 in (<ref>), we have |f (x,a^*,ε(x), y^*,ε(x))-f (x,a_δ_0^*,ε(x), y_δ_0^*,ε(x))| ≤ L_f (x,a^*,ε(x), y^*,ε(x))- (x,a_δ_0^*,ε(x), y_δ_0^*,ε(x)) ≤ L_f ·δ_0 √(1+L_r^2) = r·L_f/L_f√(1+L_r^2)/√(1+L_r^2)·εmin{ℒ_a,ℐ,1}/8 max{L_ℐ,ℒ_a,f}≤ε/8. By Corollary <ref>, we have (x,a^*,ε(x),y^*,ε(x)) ∈Γ_a(x), and in particular, ℐ_s(x,a^*,ε(x),y^*,ε(x)) ≥ 0 for all s ∈𝒮 and all x∈_x. This implies by the Lipschitz-continuity of ℐ_s (Assumption <ref> (i)), by using (<ref>), and the definition of δ_0, that ℐ_s(x, a_δ_0^*,ε(x), y_δ_0^*,ε(x)) =ℐ_s(x, a_δ_0^*,ε(x), y_δ_0^*,ε(x))-ℐ_s(x, a^*,ε(x), y^*,ε(x))+ ℐ_s(x, a^*,ε(x), y^*,ε(x)) ≥ - L_ℐ(x,a^*,ε(x), y^*,ε(x))- (x,a_δ_0^*,ε(x), y_δ_0^*,ε(x)) ≥ - L_ℐδ_0 √(1+L_r^2) = - r ·√(1+L_r^2)/√(1+L_r^2)·min{ℒ_a,ℐ,1}/L_f·L_ℐ/max{L_ℐ,ℒ_a,f}·ε/8 ≥ -ℒ_a,ℐ/L_fε/8. By the universal approximation theorem (Proposition <ref>) and Lemma <ref> there exists a neural network := (_a,_y)∈𝔑_n_x,1+n_y such that sup_x ∈_x(a_δ_0^*,ε(x), y_δ_0^*,ε(x))-(_a(x),_y(x)) < δ_0. Moreover, we have by (<ref>) and (<ref>) for all x∈_x, s ∈𝒮 that ℐ_s(x,_a(x),_y(x)) =ℐ_s(x,_a(x),_y(x))-ℐ_s(x,a_δ_0^*,ε(x), y_δ_0^*,ε(x))+ℐ_s(x,a_δ_0^*,ε(x), y_δ_0^*,ε(x)) ≥ -L_ℐδ_0 +ℐ_s(x,a_δ_0^*,ε(x), y_δ_0^*,ε(x)) ≥ -L_ℐεmin{ℒ_a,ℐ,1}/8 max{L_ℐ,ℒ_a,f}√(1+L_r^2) L_f· r -ℒ_a,ℐ/L_fε/8 ≥ -ℒ_a,ℐ/L_fε/8 -ℒ_a,ℐ/L_fε/8 = -ℒ_a,ℐ/L_fε/4. In addition, we have by (<ref>) and Assumption <ref> that (_a(x),_y(x)) ∈ [a,a]×_y for all x ∈_x. Furthermore, for all x ∈_x |f(x, _a(x),_y(x) )-f (x, a_δ_0^*,ε(x), y_δ_0^*,ε(x))| ≤ L_f δ_0 = r ·L_f/L_fεmin{ℒ_a,ℐ,1}/8 max{L_ℐ,ℒ_a,f}√(1+L_r^2)≤ε/8. Next, define a neural network := (_a,_y) ∈𝔑_n_x,1+n_y by (_a(x),_y(x)):=(_a(x)+1/L_fε/4,_y(x)), x ∈^n_x. Then, for all x∈_x, by using (<ref>), (<ref>), (<ref>), Corollary <ref>, and the definition of a in (<ref>), we obtain _a(x) = _a(x)+1/L_fε/4≤a_δ_0^*,ε(x)+ δ_0 +1/L_fε/4≤a^*,ε(x)+ δ_0 √(1+L_r^2)+ δ_0 +1/L_fε/4 ≤a^UB+1/ℒ_a,f+ δ_0 √(1+L_r^2)+ δ_0 +1/L_fε/4 = a^UB+1/ℒ_a,f+ εmin{ℒ_a,ℐ,1}/8 max{L_ℐ,ℒ_a,f}√(1+L_r^2) L_f· r √(1+L_r^2)+ δ_0 +1/L_fε/4 ≤a^UB+1/ℒ_a,f+ ε/8 + δ_0 +ε/4≤a^UB+1/ℒ_a,f+2 ≤a. Hence, we conclude by (<ref>) and (<ref>) that (_a(x),_y(x)) ∈ [a,a]×_y for all x ∈_x. Moreover, by (<ref>) and (<ref>) we have for all x ∈_x and s ∈𝒮 that ℐ_s(x,_a(x),_y(x)) =ℐ_s(x,_a(x),_y(x))-ℐ_s(x,_a(x),_y(x)) +ℐ_s(x,_a(x),_y(x)) ≥ℒ_a,ℐ/L_fε/4+ℐ_s(x,_a(x),_y(x)) ≥ℒ_a,ℐ/L_fε/4-ℒ_a,ℐ/L_fε/4=0. Hence, we see that (_a(x),_y(x)) ∈Γ_a(x) ⊆Γ(x) for all x ∈_x. Furthermore, by (<ref>), we have for all x ∈_x that |f(x, _a(x),_y(x) )-f(x, _a(x),_y(x) )| ≤ L_f 1/L_fε/4= ε/4. 
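Before the final chain of estimates, it is worth noting that the construction just carried out is directly implementable: any approximation of (a_δ_0^*,ε, y_δ_0^*,ε) can be turned into a feasible candidate by shifting its a-output upward by ε/(4 L_f), at an objective cost of at most ε/4 by the Lipschitz continuity of f. A minimal sketch of this wrapping (our own illustration; the approximating network and the constants ε, L_f are assumed to be given):

```python
def shift_a_output(net, eps, L_f):
    """Wrap net(x) -> (a, y) into x -> (a + eps / (4 * L_f), y), mirroring the definition of
    (tilde N_a, tilde N_y): the upward shift of the a-component restores feasibility of the
    constraints while increasing the objective value by at most eps / 4."""
    def wrapped(x):
        a, y = net(x)
        return a + eps / (4.0 * L_f), y
    return wrapped
```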
Therefore, we conclude by Lemma <ref>, (<ref>), (<ref>), (<ref>), and (<ref>) that for all x ∈_x f(x, _a(x),_y(x) ) -V(x) = f(x, _a(x),_y(x) ) -V_a(x) = (f(x, _a(x),_y(x) )-f(x, _a(x),_y(x) ) ) +(f(x, _a(x),_y(x) ) -f (x, a_δ_0^*,ε(x), y_δ_0^*,ε(x))) + (f (x, a_δ_0^*,ε(x), y_δ_0^*,ε(x))-f (x,a^*,ε(x), y^*,ε(x))) +(f (x,a^*,ε(x), y^*,ε(x))-V_a(x)) ≤ ε/4+ε/8+ε/8+ε/2 = ε. § ACKNOWLEDGMENTS Financial support by the Nanyang Assistant Professorship Grant (NAP Grant) Machine Learning based Algorithms in Finance and Insurance is gratefully acknowledged. ecta
http://arxiv.org/abs/2306.01416v1
20230602100520
Algorithmic realization of the solution to the sign conflict problem for hanging nodes on hp-hexahedral Nédélec elements
[ "Sebastian Kinnewig", "Thomas Wick", "Sven Beuchler" ]
math.NA
[ "math.NA", "cs.NA" ]
1,2]S. Kinnewig 1,2]T. Wick 1,2]S. Beuchler [1] Leibniz University Hannover, Institute of Applied Mathematics, Welfengarten 1, 30167 Hannover, Germany [2] Cluster of Excellence PhoenixD (Photonics, Optics, and Engineering - Innovation Across Disciplines), Leibniz University Hannover, Germany Algorithmic realization of the solution to the sign conflict problem for hanging nodes on hp-hexahedral Nédélec elements [ ============================================================================================================================== While working with Nédélec elements on adaptively refined meshes with hanging nodes, the orientation of the hanging edges and faces must be taken into account. Indeed, for non-orientable meshes, there was no solution and implementation available to date. The problem statement and corresponding algorithms are described in great detail. As a model problem, the time-harmonic Maxwell's equations are adopted because Nédélec elements constitute their natural discretization. The implementation is performed within the finite element library deal.II. The algorithms and implementation are demonstrated through four numerical examples on different uniformly and adaptively refined meshes. § INTRODUCTION The system of Maxwell's equations <cit.> are fundamental to many fields of research and have numerous practical applications, from Magnetic Induction Tomography (MIT) in medicine <cit.>, geoelectromagnetic modeling in geophysics <cit.> to quantum computing <cit.>, and quantum communication <cit.> in optics. As this work is part of the cluster of excellence PhoenixD[<https://www.phoenixd.uni-hannover.de/en/>], we consider applications from the area of photonics and optics. As the designing process of optical components can be challenging, simulations are necessary for support. This involves the simulation of electromagnetic waves within the components, which is done by solving Maxwell's problem for which Nédélec elements form the natural basis. As we consider the time harmonic indefinite Maxwell's problem in this work, specialized techniques are required to solve these kinds of problems. In the literature, several solution techniques are proposed. There are overlapping domain decomposition, see, e.g. the recent publication <cit.> and the references therein, and nonoverlapping domain decomposition methods, <cit.>, or ℋ-matrices <cit.> which are designed for the time-harmonic case. Note that the system is highly indefinite. Therefore, it becomes very challenging to develop an efficient solver, <cit.>. Alternatives in the positive definite case are multigrid techniques, <cit.>, <cit.>, or FETI-DP-like algorithms, <cit.>, <cit.>. Even with these methods, it remains computationally expensive to solve Maxwell's problems. Therefore adaptive strategies, such as local grid refinement, that can keep computational costs reasonable, while increasing the accuracy are highly desirable. This can be achieved with heuristic error indicators, geometry-oriented refinement, residual-based error control, or goal-oriented error control. With this, adaptive grid refinement is one key component in numerical simulations that enables us to handle more complex problems, for example, multi-scale problems for the simulations of integrated optical components. Our choice for a suitable programming platform is motivated by modern available FEM libraries that include support for high-order Nédélec elements. 
Various open-source finite element libraries allow the use of Nédélec elements of polynomial degree p≥2. The library <cit.> can handle unstructured grids with a maximum of p=2, while <cit.> can support a maximum of p=3. <cit.> utilized the basis functions introduced by Schöberl and Zaglmayr <cit.> to implement high polynomial functions on unstructured grids. <cit.> implements the Nédélec functions based on the hierarchical polynomial basis from Demkowicz <cit.>. Also, the following libraries implement high polynomial Nédélec elements, <cit.> (unstructured), <cit.>, and <cit.> (unstructured). And <cit.>, an extension of that implements optimized Schwarz domain decomposition methods, which is a well-established method for solving ill-posed Maxwell's problems. We select <cit.> as it offers high-polynomial Nédélec basis functions based on Schöberl and Zaglmayr's basis function sets for the complete De-Rham sequence <cit.>. Also, is well established with a large user basis and good accessibility thanks to its comprehensive documentation, which are essential for sustainable software development and uses tensor product elements. Additionally, it is designed with adaptive mesh refinement in mind, providing a range of functionalities for the computation of error estimators. Due to the use of quadrilateral and hexahedral elements, the local mesh refinement requires the usage of hanging nodes. As a starting point for our implementation of hanging nodes, we use the work of Ledger and Kynch <cit.> in two dimensions. The key objective of this work is to address a long-standing open problem that concerns the design of algorithms and corresponding implementation in three dimensions on non-orientable meshes of the Nédélec basis functions on locally refined grids. As previously mentioned, the authors <cit.> considered high-polynomial Nédélec basis functions to capture skin effects that appear in the MIT problem. Therefore they described a procedure to overcome the sign conflict on hp-Nédélec elements. In , prior work already utilized hanging nodes for Nédélec elements, for example, the work of Bürg <cit.>. But there, the old implementation was used, which can only be applied to oriented grids. In this work, we extend the class , which can also be applied to non-orientable grids. The extension to three dimensions is non-trivial, as we shall see, and particularly an open problem in . The main work here relies upon the high number of possible configurations we have to cope with. To overcome the sign conflict in the case of hanging edges and faces, we need to adapt the associated constraint matrix that restricts the additional Degrees of Freedom (DoFs) introduced by the hanging edges and faces accordingly. In the three-dimensional case, we have to consider hanging faces. One face has 2^3 possible orientations and is refined into four child faces. Consequently, we have to deal with 2^15 possible configurations. As dealing with every case individually would be even more cumbersome, we perform intelligent grid modifications to reduce the number of cases beforehand significantly. Our goal is to resolve sign conflicts regardless of the polynomial degree involved. To achieve this, we need to comprehend the structure of the constraint matrix so we can develop algorithms that can deal with any given polynomial degree. As one of our aims is to make these results accessible, we provide the most crucial steps as pseudo-code. 
These accomplishments are exemplarily applied to the time-harmonic Maxwell's equations, which are solved for four different configurations. Therein, our primary purpose is to show that our algorithms work and our implementation is correct. This is demonstrated through qualitative comparisons and some quantitative results in terms of a computational error analysis. The outline of this work is as follows. To start our discussion, we will describe the basic operators and polynomials required for the Nédélec basis. Then, we will introduce the 𝐇_curl conforming basis functions for both two-dimensional and three-dimensional cases, i.e., the Nédélec elements. In section <ref>, we explain the sign conflict that arises for the Nédélec elements in detail and explain how to overcome the sign conflict, along with some pseudo-code examples. In section <ref>, we start with the motivation for using non-uniform grids and present the sign conflict that arises in the context of non-uniform grids for Nédélec elements. We also provide a detailed explanation of how to overcome this sign conflict, with some examples of pseudo-code. Section <ref> discusses the time-harmonic Maxwell's equations. Section <ref> showcases our implementation by presenting the numerical results of some benchmark problems. § 𝐇_CURL-CONFORMING ELEMENT SPACE We start our problem discussion by comprehensively describing the underlying mathematical spaces to describe the sign conflict. For the discretization of 𝐇_curl, one must ensure tangential continuity. The De-Rham cohomology (see figure <ref>) <cit.> tells us that the space with the corresponding properties is the Nédélec space, <cit.>, which is introduced in the following. Our discussion begins with the introduction of the fundamental operators. After that, we make a definition by case, one for the two-dimensional Nédélec elements and one for the three-dimensional Nédélec elements. We like to point the reader to <cit.> for more details. §.§ Fundamental operators For a comprehensive description of the mathematical spaces, we start our discussion by introducing the necessary operators to describe 𝐇_curl. Therefore, let us assume a scalar ψ:ℝ→ℝ and a⃗, b⃗, c⃗, v⃗∈ℝ^d, d ∈{2,3} to be d-dimensional vectors. Then the gradient of ψ is given by ∇ψ = ( ∂ψ/∂ x_1, …, ∂ψ/∂ x_d), and the divergence of v⃗ is given by div(v⃗) ∇·v⃗∑_i = 1^d∂ v_i /∂ x_i. Next, a⃗·b⃗∑_i=1^d a_i b_i denotes the scalar product. For the description of the cross-product, we need to perform a case analysis, one for the two-dimensional case and one for the three-dimensional case. [ d = 2: d = 3:; ([ a_1; a_2 ]) ×([ b_1; b_2 ]) = a_1 b_2 - a_2 b_1. ([ a_1; a_2; a_3 ]) ×([ b_1; b_2; b_3 ]) = ( [ a_2 b_3 - a_3 b_1; a_3 b_1 - a_1 b_3; a_1 b_2 - a_2 b_1 ]) ] with this, we can furthermore write down the description of the curl operator [ d = 2: d = 3:; curl(v⃗) = ∇×v⃗ = ∂ v_2/∂ x_1 - ∂ b_1/∂ x_2 curl(v⃗) = ∇×v⃗ = ( [ ∂ v_3/∂ x_2 - ∂ v_2/∂ x_3; ∂ v_1/∂ x_3 - ∂ v_3/∂ x_1; ∂ v_2/∂ x_1 - ∂ v_1/∂ x_2 ]). ] The double cross-product between three vectors is the next operator we need to describe 𝐇_curl later. The Graßmann identity gives the cross-product between these three vectors a⃗× ( b⃗×c⃗ ) = ( a⃗·c⃗ ) b⃗ - (a⃗·b⃗ ) c⃗. The Graßmann identity is essential for defining the cross-product between three vectors in the two-dimensional case. Based on this definition, we extend the definition of the curl operator to apply to scalar functions in the two-dimensional case curl(ψ) = ( [ ∂ψ/∂ x_2; - ∂ψ/∂ x_1 ]). 
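These identities are easy to verify symbolically. The following small check (an illustration added here, not part of the original text) confirms the Graßmann identity and that the two-dimensional scalar curl of a gradient field vanishes, using sympy:

```python
import sympy as sp

# Grassmann identity: a x (b x c) = (a.c) b - (a.b) c
a = sp.Matrix(sp.symbols('a1 a2 a3'))
b = sp.Matrix(sp.symbols('b1 b2 b3'))
c = sp.Matrix(sp.symbols('c1 c2 c3'))
assert (a.cross(b.cross(c)) - (a.dot(c) * b - a.dot(b) * c)).expand() == sp.zeros(3, 1)

# 2D: curl(grad(psi)) = d/dx1 (d psi/dx2) - d/dx2 (d psi/dx1) = 0
x1, x2 = sp.symbols('x1 x2')
psi = sp.Function('psi')(x1, x2)
grad_psi = sp.Matrix([sp.diff(psi, x1), sp.diff(psi, x2)])
curl_grad = sp.diff(grad_psi[1], x1) - sp.diff(grad_psi[0], x2)   # scalar 2D curl of a gradient
assert sp.simplify(curl_grad) == 0
```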
§.§ Legendre and Integrated Legendre Polynomials We aim to construct linear independent curl-conforming shape functions with tensor products from one-dimensional orthogonal polynomials. For the polynomial basis, we choose Legendre <cit.> and integrated Legendre polynomials <cit.> as they will provide good sparsity in the involved element matrices <cit.>[Chapter 5.2.1]. In the following, we denote the Legendre polynomials by l_i(x)=1/2^i i!d^i/dx^i (x^2-1)^i ∈ℒ_2(-1,1), where i ∈{0,…,p} stands for the polynomial degree. With the following recursive formula, an efficient point evaluation of the Legendre polynomials is possible. For x ∈ [-1,1] let [ l_0(x) = 1; l_1(x) = x; (n + 1) l_n+1(x) = (2n + 1) l_n x - n l_n-1, for n ≥ 1. ] These polynomials span P^p([-1,1]), particularly because they fulfill the orthogonality property ∫_-1^1 l_i(x) l_j(x) x = 2/2 i + 1δ_ij. From the Legendre polynomials, we can define the integrated Legendre polynomials by L_n(x) ∫_-1^x l_n-1(ξ) ξ for x∈[-1,1]. Similar to before, we can define the integrated Legendre polynomials with a recursive formula, which allows for an efficient point evaluation. [ L_1(x) = x; L_2(x) = 1/2( x^2 - 1 ); (n + 1) L_n+1(x) = (2n - 1)x L_n(x) - (n - 2) L_n-1(x), for n ≥ 2 ] For the recursion formula to work, we included L_1 even though L_1(x) ≠∫_-1^x l_0(ξ) ξ. Above, we have gathered all the necessary tools to construct the Nédélec space. As the curl operator behaves quite differently between the two- and the three-dimensional case, we continue with a definition by cases. §.§ Two-dimensional Nédélec elements Based on the De-Rham cohomology, we must choose our basis functions out of the Nédélec space V_h. Therefore, we want to introduce the definition of the space V_h in the following. The concept to employ integrated Legendre polynomials as polynomial basis was introduced in <cit.>, for the notation we follow the work of S. Zaglmayr <cit.>. The enumeration of vertices and edges is based on the implementation in  <cit.>. We define the quadrilateral reference element as 𝒞^2 = [0, 1] × [0, 1] with the following parametrization of Figure <ref>. We continue by defining the set of all edges ℰ = { E_m }_0 ≤ m < 4 with local edge-ordering E_m = { v_i, v_j } where (i,j) ∈{(0,2), (1,3), (0,1), (2,3)}. We denote the cell itself with local vertex-ordering C = {v_0, v_1, v_2, v_3}. The polynomial order is given by p⃗ = ( { p_E }_E ∈ℰ, p_C ). Based on this, we construct the 𝐇_curl conforming basis function, where we choose a definition that will provide a good sparsity pattern for the resulting element matrices. 2|c| 𝐇_curl conforming basis function 2|l|Vertex-based shape functions 2|l|There are no DoFs on the vertices. 2|l|Edge-based shape functions 2|l|for 0 ≤ i < p_E, E ∈ℰ, where λ_α and σ_α, α∈{0,1,2,3} as defined in figure <ref> Lowest order φ_E_m^𝒩_0 = 1/2∇( σ_e_2 - σ_e_1) ( λ_e_1 + λ_e_2) Higher-order φ_i^E_m = ∇( L_i+2( σ_e_2 - σ_e_1) ( λ_e_1 + λ_e_2) ) 2|l|Cell-based functions 2|l|0 ≤ i,j < p_C, where e⃗_x and e⃗_y are the unit vectors in x- and y-direction correspondingly Type 1: φ_(i,j)^C,1 = ∇ ( L_i+2(ξ_F) L_j+2(η_F) ) Type 2: φ_(i,j)^C,2 = ∇( L_i+2(ξ_F) L_j+2(η_F) )   where ∇(a b) (a'   b - a   b') is the anti-gradient Type 3: φ_(0,j)^C,3 = L_j+2(2y-1) e⃗_x φ_(i,0)^C,3 = L_i+2(2x-1) e⃗_y With the help of the constructed 𝐇_curl conforming shape functions, we can define a local basis for the two-dimensional Nédélec space on the reference element. 
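Before writing out the resulting discrete space, a brief practical aside: the recurrences (<ref>) and (<ref>) give a cheap and stable way to evaluate the Legendre and integrated Legendre polynomials. The following sketch (our own illustration) implements them and checks that L_n vanishes at the interval endpoints for n ≥ 2:

```python
import numpy as np

def legendre(n_max, x):
    """l_0, ..., l_{n_max} via (n + 1) l_{n+1}(x) = (2n + 1) x l_n(x) - n l_{n-1}(x)."""
    x = np.asarray(x, dtype=float)
    l = [np.ones_like(x), x.copy()]
    for n in range(1, n_max):
        l.append(((2 * n + 1) * x * l[n] - n * l[n - 1]) / (n + 1))
    return l[:n_max + 1]

def integrated_legendre(n_max, x):
    """L_1, ..., L_{n_max} via (n + 1) L_{n+1}(x) = (2n - 1) x L_n(x) - (n - 2) L_{n-1}(x), n >= 2."""
    x = np.asarray(x, dtype=float)
    L = [None, x.copy(), 0.5 * (x ** 2 - 1.0)]          # placeholder so that L[n] is L_n
    for n in range(2, n_max):
        L.append(((2 * n - 1) * x * L[n] - (n - 2) * L[n - 1]) / (n + 1))
    return L[1:n_max + 1]

endpoints = np.array([-1.0, 1.0])
for L_n in integrated_legendre(6, endpoints)[1:]:       # L_2, ..., L_6 vanish at x = +-1
    assert np.allclose(L_n, 0.0)
```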
V_h(𝒞^2) V^𝒩_0_h(𝒞^2) ⊕_E ∈ℰ V^E_h(𝒞^2) ⊕ V^C_h(𝒞^2), with V^𝒩_0_h(𝒞^2) span{φ^𝒩_0_E : E ∈ℰ} V^E_h(𝒞^2) span{φ^E_i : 1 ≤ i ≤ p_E,  E ∈ℰ} V^C_h(𝒞^2) span{φ^C,t_(i,j) : 0 ≤ i,j < p_C,  1 ≤ t ≤ 2 } ⊕span{φ^C,3_(0,j) : 0 ≤ j < p_C }⊕span{φ^C,3_(i,0) : 0 ≤ i < p_C } where V^𝒩_0_h is the space of the lowest-order Nédélec function, V^E_h is the space of the edge bubbles, and V^C_h is the space of the cell bubbles. Visualizations of some edge-based basis functions are presented in Figure <ref>. For a discussion that focuses more on the two-dimensional case and provides additional visualizations of the two-dimensional base functions, we refer the reader to <cit.>. §.§ Three-dimensional Nédélec elements Similar to the previous case, our goal is to construct a basis for the three-dimensional Nédélec space that will lead to a good sparsity pattern of the resulting element matrices. We begin by defining the hexahedral reference element as 𝒞^3 = [0,1] × [0,1] × [0,1]. The enumeration of vertices, edges and faces is based on the implementation in <cit.>. The parameterization is defined as in Figure  <ref>. We continue by defining the set of all edges ℰ = { E_m }_0 ≤ m < 12 with local edge-ordering E_m = { v_i, v_j } as shown in figure <ref>. The local face order is given by [ ℱ = { F_m }_0 ≤ m < 6 = { {v_0, v_2, v_4, v_6 }, {v_1, v_3, v_5, v_7 }, {v_0, v_1, v_4, v_5 },; {v_2, v_3, v_6, v_7 }, {v_0, v_1, v_2, v_3 }, {v_4, v_5, v_6, v_7 } }. ] A more detailed description of the cell is given in the documentation of [<https://www.dealii.org/current/doxygen/deal.II/structGeometryInfo.html>]. The polynomial order is given by p⃗ = ( { p_E }_E ∈ℰ, { p_F }_F ∈ℱ, p_C ). 2|c| 𝐇_curl conforming basis function 2|l|Vertex-based shape functions 2|l|There are no DoFs on the vertices. 2|l|Edge-based shape functions 2|l| for 0 ≤ i < p_E, E ∈ℰ, where λ_α and σ_α, α∈{0, …, 7} as defined in figure <ref> Lowest order: φ_E^𝒩_0 = 1/2∇ ( σ_e_1 - σ_e_2 ) ( λ_e_1 + λ_e_2 ) Higher order: φ_i^E = ∇ ( L_i+2 (σ_e_1 - σ_e_2) ( λ_e_1 + λ_e_2 ) ) 2|l|Face-based 2|p13cm| For 0 ≤ i,j < p_F, F ∈ℱ as defined in equation (<ref>) 2|p13cm| we define λ_F ∑^7_α=0λ_f_α and (ξ_F, η_F) (σ_f_1 - σ_f_2, σ_f_1 - σ_f_4). Type 1: φ_(i,j)^F_m,1 = ∇ ( L_i+2(ξ_F) L_j+2(η_F) ) Type 2: φ_(i,j)^F_m,2 = ∇( L_i+2(ξ_F) L_j+2(η_F) )   where ∇(a b) (a'   b - a   b') is the anti-gradient Type 3: φ_(0,j)^F_m,3 = L_j+2(η_F) λ_F ∇ξ_F φ_(i,0)^F_m,3 = L_i+2(ξ_F) λ_F ∇η_F 2|l|Cell-based 2|l| 0 ≤ i,j,k < p_C, where e⃗_x, e⃗_y, e⃗_z are the basis vectors Type 1: φ_(i,j,k)^C,1 = ∇( L_i+2(2x-1) L_j+2(2y-1) L_k(2z-1) ) Type 2: φ_(i,j,k)^C,2 = diag(1,-1,1) φ_(i,j,k)^C,1 φ_(i,j,k)^C,2 = diag(1,-1,-1) φ_(i,j,k)^C,1 Type 3: φ_(0,j,k)^C,3 = L_j+2(2y - 1)L_k+2(2z - 1) e⃗_x φ_(i,0,k)^C,3 = L_i+2(2x - 1)L_k+2(2z - 1) e⃗_y φ_(i,j,0)^C,3 = L_i+2(2x - 1)L_j+2(2y - 1) e⃗_z With the help of the constructed 𝐇_curl conforming basis function, we can define a basis for the three-dimensional Nédélec space. The main difference compared to the two-dimensional is that cell bubbles from the two-dimensional case become the face bubbles in the three-dimensional case. Furthermore, we define an additional space for the cell bubbles. 
V_h(𝒞^3) V^𝒩_0_h(𝒞^3) ⊕⊕_E ∈ℰ V^E_h(𝒞^3) ⊕⊕_F ∈ℱ V^F_h(𝒞^3) ⊕ V^C_h(𝒞^3), with V^𝒩_0_h(𝒞^3) span{φ^𝒩_0_E : E ∈ℰ} V^E_h(𝒞^3) span{φ^E_i : 0 ≤ i ≤ p_E,  E ∈ℰ} V^F_h(𝒞^3) span{φ^F,t_(i,j) : 0 ≤ i,j ≤ p_F,  1≤ t ≤ 2,  F ∈ℱ} ⊕span{φ^F,3_(0,j) : 0 ≤ j ≤ p_F  F ∈ℱ}⊕span{φ^F,3_(i,0) : 0 ≤ i ≤ p_F  F ∈ℱ} V^C_h(𝒞^3) span{φ^C,t_(i,j,k) : 0 ≤ i,j,k ≤ p_C, 1≤ t ≤ 2 }⊕span{φ^C,3_(0,j,k) : 0 ≤ j,k ≤ p_C ⊕} ⊕span{φ^C,3_(i,0,k) : 0 ≤ i,k ≤ p_C }⊕span{φ^C,3_(i,j,0) : 0 ≤ i,j ≤ p_C } where V^𝒩_0_h is the space of the lowest-order Nédélec function, V^E_h is the space of the edge bubbles, V^F_h is the space of the face bubbles and V^C_h is the space of the cell bubbles. Visualizations of some edge-based basis functions are presented in Figure <ref>, and visualization of some face-based basis functions is presented in Figure <ref>. §.§ 𝐇_curl-conforming transformation In order to extend our definition from the reference element to the physical element, we introduce a 𝐇_curl-conforming transformation that maps the vectorial shape functions from the reference element 𝒞^d, d∈{2,3} onto the physical element C^d. The transformation has to preserve the degrees of freedom to be 𝐇_curl-conform. The transformation also has to map gradient fields from the reference element onto gradient fields on the physical element. In <cit.>, the Piola transformation is presented that satisfies these properties. Let us summarize this transformation shortly. Let Φ_C: 𝒞^d → C^d be a continuously differentiable, invertible and surjective map, û⃗∈𝐇_curl(𝒞^d). The transformation u⃗ F_C^-Tû⃗∘Φ_C^-1 implies u⃗∈𝐇_curl(C^d) with [ d = 2: d = 3:; curl_x u⃗ = J^-1_Ccurl_x̂û⃗∘Φ^-1_C curl_x u⃗ = J^-1_C F_Ccurl_x̂û⃗∘Φ^-1_C,; ] with J_C=det F_C. § PRINCIPAL PROBLEM OF THE SIGN-CONFLICT This section aims to construct the elements so that tangential continuity is ensured between elements. §.§ On the continuity requirements To ensure the continuity between two neighboring elements, the resulting polynomials on the edges in two dimensions and on the edges and the faces in three dimensions must match. In the previous section, we have defined local edge and face parametrizations. The parameterization we have chosen is either symmetric for even polynomial degrees or anti-symmetric for odd polynomial degrees, as visualized in Figure <ref>. To ensure that the polynomials between neighboring edges match, we need to ensure that the direction also matches. If the directions do not match, this results in the sign conflict as some polynomials are anti-symmetric see Figure <ref>. This problem does not arise for Lagrange-type elements, as, in that case, the degree of freedom belongs to point evaluations. §.§ Solutions and algorithms for treating the sign conflict The apparent solution for the sign conflict is to choose a particular direction for each edge and each face. For example, one could define each edge to point from left to right or correspondingly from bottom to top, but it is easy to find a counter-example where this approach will fail, and one will encounter the sign conflict. Therefore we consider the Algorithm <ref> and <ref>, which were proposed by Zaglmayr and Schöberl <cit.> and implemented into by Kynch and Ledger <cit.>. Algorithm <ref> is applicable for the two-dimensional case and computes a globally consistent orientation for all edges based on the global numeration of dofs. Algorithm <ref> computes a globally consistent orientation for all faces based on the global numeration of dofs. 
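A concrete way to realize such a globally consistent choice — consistent with the description above, although the authoritative procedure is the one given in Algorithm <ref> — is to direct every edge from its endpoint with the smaller global vertex index to the one with the larger index; each element then only stores whether its local edge parameterization agrees with this global direction, and anti-symmetric edge shape functions pick up the corresponding sign. A minimal sketch with hypothetical data structures:

```python
def edge_orientation_signs(local_edges, local_to_global_vertex):
    """Return +1 for each local edge (v0, v1) whose local direction already points from the
    smaller to the larger global vertex index, and -1 otherwise."""
    signs = []
    for v0, v1 in local_edges:
        g0, g1 = local_to_global_vertex[v0], local_to_global_vertex[v1]
        signs.append(1 if g0 < g1 else -1)
    return signs

# example: the quadrilateral reference cell with edges {(0,2), (1,3), (0,1), (2,3)} and a
# hypothetical global vertex numbering
print(edge_orientation_signs([(0, 2), (1, 3), (0, 1), (2, 3)], {0: 7, 1: 2, 2: 4, 3: 9}))  # [-1, 1, -1, 1]
```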
§ SIGN CONFLICT ON NON-UNIFORM GRIDS §.§ Motivation for the extension to non-uniform grids In the finite element method context, adaptive grid refinement has proven to be a powerful technique as it allows an adjustment of the resolution of the computational mesh in different simulation regions. The goal is to archive a good balance between accuracy and computational cost by focusing on the more complex parts of the simulation by using local grid refinement in these areas. To decide which parts of the simulation need to be refined can be done either user-defined or automated. For example, automatic, i.e., adaptive, selection can be performed via an error estimator based on the solution's local behavior. The discussion of error estimators is outside the scope of this work, but we refer the reader to <cit.>. When an element is locally refined in an unstructured mesh, the neighbor elements will be refined to eliminate hanging nodes. This approach is unsuitable for structured meshes since a single local refinement would lead to a uniform global refinement. Therefore, when a structured mesh is locally refined, hanging nodes, edges, and faces in the three-dimensional case are introduced. This leads to a mismatch between the refined and coarse element's number of dofs. §.§ Overview of the implementation of hanging-nodes Additional constraints must be implemented to overcome the mismatch between refined and coarse elements. These constraints are necessary to ensure that the resulting linear system can be solved numerically. In the case of Nédélec elements, the constraints for non-conforming meshes require that the tangential components of the basis function on the hanging edges and faces match those of the corresponding basis functions on the neighboring unrefined element. Constraints containing weights can be developed by considering a reference setting where we match the tangential constraints. These constraints can be applied to more general shapes with the help of an affine coordinate transformation <cit.>. The computation of the weights is not in the scope of the work. We refer the reader to <cit.> for the computation of the weights. The implementation presented here was created using as a programming platform that provides the functionality to compute the weights numerically. Therefore, we focus on modifying the given weights to match the grid's orientation, described in section <ref>. The hanging edge and face constraints depend on the refined element's orientation and its unrefined neighbor's orientation. Therefore the constraints have to be computed during the runtime. In the previous implementation of the Nédélec elements in [<https://www.dealii.org/current/doxygen/deal.II/classFE__Nedelec.html>], this problem was overcome by assuming pre-assigned edge and face parameterizations, allowing for pre-computed constraints. §.§ Preparation of the mesh To greatly simplify the computation of the constraints during the runtime, we extend Algorithm <ref> and Algorithm <ref> so that the exterior edges and faces match those of the parent's neighbors. §.§.§ Preparation of the mesh for the two-dimensional case To gain more insight into the Algorithm <ref>, we consider Figure <ref>, which compares the direction of the edges of the unrefined parent element with the direction of the edges of the refined child elements. To visually differentiate between those two, the parent element is depicted in black, while the hanging vertices and edges of the child elements are highlighted in blue. 
The parent element consists of the two vertices v_0 and v_1 and the edge E^P_0 that points from v_0 to v_1. The left child element consists of the vertices v_0 and v_1 and the edge E^C_0 between them, and the right child element consists of the vertices v_2 and v_1 and edge E^C_1. Suppose we apply Algorithm <ref> in to a refined element, the edges will always point to the hanging vertex, the hanging vertex as a higher global dof index as the outer vertices. As long as the neighbor element has the same refinement level, the global orientation stays consistent. However, when the neighbor is coarser, one has to apply the Algorithm <ref>; see Figure <ref>. §.§.§ Preparation of the mesh for the three-dimensional case In Algorithm <ref>, we have focused on the orientation of hanging edges, which applies to two-dimensional cases. However, we also have to deal with hanging faces in the three-dimensional case. Therefore we introduce the Algorithm <ref>. To gain more insight into the Algorithm <ref>, we visualize the orientation of the refined face in Figure <ref>. As before, the parent element is depicted in black, and the direction of its children is in blue. In addition, the face direction is indicated here. §.§.§ Challenging refinement cases In the three-dimensional case, there are specific configurations where the element has an edge that neighbors a coarser element, even though the neighbors of all faces of that element are of the same refinement level as the element itself. For an example of such a configuration, see Figure <ref>. To greatly simplify the computation of the hanging edges and hanging face constraints later on, we provide Algorithm <ref>, which deals with these specific configurations. §.§ Modification of the constraint matrix In the previous section, we introduced several algorithms to prepare the orientation of the grid to make it easier to adapt the hanging node constraints to general grids. Based on that work, we now modify the constraint matrix. Here we like to point out that the considered enumeration of DoFs is based on the ordering of the edges and vertices as in . The basic concept is the same to extend this work to other FEM software, but one must consider the edges and vertices enumeration of that specific software. §.§.§ Constraints for hanging edges The two most prominent approaches to deal with the additional DoFs originating from the hanging edges and faces are the following. First, one can apply suitable projections and use iterative solvers <cit.>. The second method on which we focus in this work is to impose constraints on the additional DoFs of the refined element by expressing them as a linear combination of the coarse's DoFs in the following way: φ_r = [α_i,j]_i,j^n,m·φ_c, where φ_r is the vector of the basis function on the refined element, φ_c is the vector of basis functions on the coarse element, and α_i,j are the weights between the corresponding basis functions. Figure <ref> considers the most simple example. Here one takes into account that the Nédélec functions are edge-based. Therefore we obtain two DoFs on the coarse element and four DoFs on the refinement element. §.§.§ Resolving the sign conflict on hanging edges We have just introduced the constraint matrix for oriented meshes so far. To extend the implementation of Nédélec elements in , for working with non-oriented meshes, we need to modify the constraint matrix accordingly to the orientation of the mesh. 
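To make the algebraic role of these weights concrete before discussing their signs: once every hanging DoF is expressed as φ_r = Σ_j α_i,j φ_c,j, the hanging DoFs can be eliminated from the linear system through a prolongation matrix built from the weights. The sketch below is a generic illustration of this standard condensation step (it is not the library's implementation, and it assumes that the masters of a hanging DoF are themselves unconstrained); the weights themselves are computed numerically as described above. What remains afterwards is precisely the orientation-dependent sign adjustment discussed next.

```python
import numpy as np

def condense(K, rhs, constraints, n_dofs):
    """Eliminate hanging DoFs from K u = rhs.

    constraints : dict {hanging_dof: [(master_dof, weight), ...]} encoding phi_r = [alpha] phi_c.
    Builds C with u_all = C u_master and returns the reduced system (C^T K C, C^T rhs)."""
    masters = [d for d in range(n_dofs) if d not in constraints]
    col = {d: j for j, d in enumerate(masters)}
    C = np.zeros((n_dofs, len(masters)))
    for d in masters:
        C[d, col[d]] = 1.0
    for d, terms in constraints.items():
        for m, w in terms:
            C[d, col[m]] += w                 # assumes m is itself unconstrained
    return C.T @ K @ C, C.T @ rhs
```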
Therefore we must compare the refined element's vertex order with the coarse neighbor's vertex order, similar to the Algorithm <ref>. If the vertex order between the refined and coarse neighbors does not match, we must adapt the constraint matrix accordingly. Furthermore, we need to consider to which underlying base function each entry of the constraint matrix belongs. We introduced the base function for the Nédélec elements in detail in section <ref>. Here we need to consider if the underlying basis function is symmetric or anti-symmetric. Since entries that map an anti-symmetric shape-function to a symmetric shape-function and vice versa are multiplied by -1. Entries that map from symmetric shape functions to symmetric shape functions do not change. Also, entries that map from anti-symmetric shape functions to anti-symmetric do not change, as both sign changes cancel each other out. This is summarised in the Tabular <ref>. Given this information, we can formulate the following Algorithm to resolve the sign conflict on hanging edges. Again we want to consider the smallest example possible. Here we have to choose polynomial degree p=2 for the underlying base function. As for p=1, the constraint matrix would not change at all. See Figure <ref>. §.§.§ Constraints for hanging faces So far, we have focused solely on the orientation of hanging edges, which applies to two-dimensional cases. For the extension to three dimensions, we need to consider hanging faces. Hanging faces consist of eight external and four internal lines and four faces, as visualized in Figure <ref>. The coarse element consists of four external lines and one face. Therefore the size of the constraint matrix increases accordingly. As the constraint matrix increases significantly in size for hanging faces, particularly in the first non-trivial case where the polynomial degree is p=2, we only visualize the structure of the constraint matrix in Figure <ref>. It is worth noting that for p=1, there are no DoFs on the faces, rendering this case unsuitable for our study on the structure of hanging faces. §.§.§ Resolving the sign conflict on hanging faces Due to the complexity of the structure of the constraint matrix, we consider the different sub-constraint matrices, i.e., the C_(i,j) in Figure <ref>, independently as this is the natural decomposition of the problem. Thereby we consider each hanging edge and face independently. Based on our prior study of the problem, we can determine which coarse edge and face directions we must consider depending on the current hanging edge or face we want to constrain. This information is also shown in Figure <ref>. Constraints for the outer edges We begin by adapting the signs of sub-constraint matrices that describe the map edges on the coarse element to outer edges, i.e., edges E_4,…, E_11 in Figure <ref>, on the refined element. This is most straightforward as it is analogous to the two-dimensional case discussed in section <ref>. Based on the vertex order, we determine the direction of the edges and then adapt the signs of the corresponding entries in the constraint matrix. Constraints for the faces Next, we discuss how to adapt the constraint matrix for that map to the refined faces F_0, …, F_3. For an edge, there are only two possible configurations (pointing from the left to the right or vice versa). However, in the three-dimensional case, we must consider the x-direction and the y-direction and which direction is prioritized. This results in 2^3=8 possible orientations. 
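For the edge case just described, the rule of Table <ref> translates directly into code: whenever the vertex order of the refined edge does not match that of its coarse neighbor, an entry of the sub-constraint matrix changes sign exactly if one of the two involved shape functions is anti-symmetric (odd polynomial degree) and the other is symmetric. A small sketch of this bookkeeping (our own illustration; the parity flags would be derived from the polynomial degrees of the basis in Section <ref>):

```python
import numpy as np

def adjust_hanging_edge_signs(C_sub, refined_antisym, coarse_antisym, orders_match):
    """Sign adjustment of a hanging-edge sub-constraint matrix (rows: refined edge DoFs,
    columns: coarse edge DoFs). If the vertex orders match, nothing changes; otherwise an
    entry alpha_ij is multiplied by -1 iff exactly one of the two shape functions is
    anti-symmetric, following Table <ref>."""
    C = np.array(C_sub, dtype=float, copy=True)
    if orders_match:
        return C
    for i, ra in enumerate(refined_antisym):
        for j, ca in enumerate(coarse_antisym):
            if ra != ca:
                C[i, j] *= -1.0
    return C
```

The face case additionally has to distinguish the eight possible orientations mentioned above and is treated next.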
These are visualized in Figure <ref>. The diagonal arrow denotes whether the x or the y-direction is prioritized. If the diagonal arrow points to the upper left vertex, the x-direction is prioritized. If the diagonal arrow points to the lower right vertex, the y-direction is prioritized. We must modify the constraint matrix accordingly based on the given configuration of the coarse and refined faces. We can geometrically interpret the necessary operations as x-axis inversion, y-axis inversion, and x- and y-axis exchange. These operations are visualized in Figure <ref>. 0.45 0.45 Because of the more complex nature of these operations, we provide them here as high-level pseudo-code. In Algorithm <ref>, we present how to perform an x-inversion on the constraint matrix for a given cell 𝒦. The y-inversion is analogous to the x-inversion. The Algorithm <ref> explains the x- and y-axis exchange. Given a particular configuration of the coarse face, we can now create a look-up table, which of these operations have to be applied to the sub-constraint matrix to map to a particular refined face correctly. For an example of such a look-up table, see Tabular <ref>. Constraints for the inner edges At last, we describe the process of adapting the constraint matrix for the inner edges E_0, …, E_3. We treat this case last, as this configuration is the most complex, requiring extensive modifications of the constraint matrix. As shown in Figure <ref>, the refined interior edges are constrained by all four coarse edges and the coarse face. For the sub-constraint matrices that map from the coarse edges parallel to the refined edge, we are considering. We employ the same approach for the outer edges. Next, we need to apply a similar approach as for the faces, taking into account the direction of the internal edge, which can be either in the x- or y-direction. We must apply the corresponding axis inversion as described above according to the orientation of the internal edge we are currently considering. However, we must deal with one additional case for the inner edges: the sub-constraint matrix mapping from the coarse edges orthogonal to the refined internal edge. This works again similarly to the case of the outer edges. The corresponding sub-constraint matrices must also be adapted according to the direction of the coarse edges parallel to the refined edge. This is shown in the Algorithm <ref>. § MODEL PROBLEM: TIME-HARMONIC MAXWELL'S EQUATIONS AND NUMERICAL SOLUTION Let Ω⊂ℝ^d, d∈{2,3} be a bounded modelling domain with sufficiently smooth boundary Γ = Γ^inc∪Γ^∞, where Γ^∞ is an absorbing boundary condition and Γ^inc is the boundary condition for some given incident electric field. Find the electric field u⃗∈𝐇_curl(Ω){v⃗∈ℒ^2(Ω),  curl(v⃗) ∈ℒ^2(Ω) } such that, {[ curl( μ^-1curl(u⃗) ) - εω^2 u⃗ = 0⃗ in Ω; μ^-1γ^t ( curl( u⃗) ) - i κωγ^T ( u⃗) = 0⃗ on Γ^∞; μ^-1γ^t ( curl( u⃗) ) - i κωγ^T ( u⃗) = 2 i ωγ^T ( u⃗^inc) on Γ^inc, ]. where u⃗^inc:ℝ^d→ℂ^d, d∈{2,3} is some given incident electric field, μ∈ℝ^+ is the relative magnetic permeability, κ = √(ε), ε∈ℂ relative permittivity, ω = 2 π/λ is the wavenumber and λ∈ℝ^+ is the wavelength. 
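The incident field u⃗^inc entering the boundary condition on Γ^inc is, in the experiments below, a linearly polarized plane wave. As an illustration (our own sketch; the exact form and conventions used in the implementation may differ), such a field with propagation direction d, polarization p ⊥ d and ω = 2π/λ can be evaluated as follows:

```python
import numpy as np

def plane_wave(points, wavelength, direction=(0.0, 0.0, 1.0), polarization=(1.0, 0.0, 0.0)):
    """Sample u_inc(x) = p exp(i omega d.x) with omega = 2 pi / wavelength at the given points
    (array of shape (N, 3)); the polarization is projected to be orthogonal to the direction."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    p = np.asarray(polarization, dtype=float)
    p = p - (p @ d) * d                       # enforce transversality p . d = 0
    p = p / np.linalg.norm(p)
    omega = 2.0 * np.pi / wavelength
    phase = np.exp(1j * omega * (np.asarray(points, dtype=float) @ d))
    return phase[:, None] * p[None, :]
```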
For the traces we define the space of well-defined surface divergence fields 𝐇_div^-1/2(Γ) {v⃗∈𝐇^-1/2(Γ) : v⃗·n⃗=0,  div_Γv⃗∈𝐇^-1/2(Γ) } and the space of well-defined surface curls 𝐇_curl^-1/2(Γ) {v⃗∈𝐇^-1/2 : v⃗·n⃗=0, curl_Γv⃗∈𝐇^-1/2(Γ) }, then the traces are given by [ γ^t: 𝐇_curl(Ω) →𝐇_div^-1/2(Γ), γ^t(v⃗) = n⃗×v⃗ and; γ^T: 𝐇_curl(Ω) →𝐇_curl^-1/2(Γ), γ^T(v⃗) = n⃗× (v⃗×n⃗) ] where n⃗ denotes the outward normal to Ω. System (<ref>) is called time-harmonic, because the time dependence can be expressed by e^i ωτ, where τ≥ 0 denotes the time. For the derivation of the time-harmonic Maxwell's equations we refer the reader to <cit.>. Before we derive the weak formulation let us recapitulate, that with integration by parts, we can reformulate an integral in the following way ∫_Ωcurl(v⃗) w⃗  x = ∫_Ωv⃗curl( w⃗)   x + ∫_∂Ω( v⃗×w⃗) w⃗  s. Next, we want to derive the weak formulation of equation (<ref>). ∫_Ωcurl(μ^-1curl( u⃗) ) φ⃗  x - εω^2 ∫_Ωu⃗φ⃗  x = 0⃗ (<ref>)⇒ ∫_Ωμ^-1curl( E⃗) curl( φ⃗)   x - εω^2 ∫_ΩE⃗φ⃗  x + ∫_∂Ωμ^-1γ^t ( curl( E⃗) ) φ⃗  s = 0⃗. Finally we apply the definition of the boundaries Γ^∞ and Γ^inc and obtain the weak form, which is given by: Find u⃗∈𝐇_curl(Ω) such that for all φ⃗∈𝐇_curl(Ω) ∫_Ω( μ^-1curl( u⃗) curl( φ⃗) - εω^2 u⃗φ⃗)   x + i κω∫_Γ^∞γ^T ( u⃗) γ^T ( φ⃗)   s = ∫_Γ^incγ^T ( u⃗^inc) γ^T ( φ⃗)   s. Notice that we have chosen the plane wave injection for the incident field <cit.>. The numerical solution of the resulting linear system is rather challenging, as it is ill-posed. So specialized methods have to be employed. A well-known approach to address the time-harmonic Maxwell's equation is based on combining direct solvers and domain decomposition methods <cit.>. Here the basic idea is to divide the problem into small enough sub-problem so that a direct solver can handle each sub-problem. Another approach is to find suitable preconditioners for iterative solvers, for example, with the help of H-matrices. As the computation of such preconditioners is quite challenging, these methods often have to be combined with a domain decomposition method <cit.>. § NUMERICAL TESTS In this section, we present some numerical examples, to demonstrate our implementation of hanging nodes for Nédélec elements especially on non-orientable geometries. Therefore, we consider four examples. §.§ Minimal test case: simple cube As a first proof of concept, we want to compare the results of the new implementation of hanging nodes for Nédélec elements with the existing implementation of hanging nodes for Nédélec elements. We have to consider that the existing implementation only works for orientable grids. We use a simple cube refined once globally as a minimal test case. So it consists of eight cells. Then, one of these cells is refined adaptively. The resulting grid is orientable. Therefore we can use this test case as a first proof of the concept and compare our results with the existing implementation of hanging nodes in . Comparing the results of the existing implementation and our new implementation of hanging nodes for Nédélec elements show no difference. It holds |E_existing - E_new|_∞ < 1e-16. Therefore, we continue with more complicated applications in the following. §.§ Quantitative computational analysis on a simple cube To further validate our new implementation of the hanging nodes, we consider different goal functionals on different (adaptive-)refinement levels, where we use the finest level with 2 080 944 DoFs as numerical reference. 
As a benchmark problem, we consider a cylindrical fiber made from SiO_2 with a refractive index of n_SiO_2=2.0257 surrounded by air n_air=1.0000 and an incident wave with a wavelength of λ=375 nm. We evaluate the following three goal functionals: the point value J_P(u) = u(x), the face integral J_F(u) = ∫_f u(s) s, and the domain integral J_D(u) = ∫_ω u(x) x. The results are presented in Table <ref>. In this test, we employ the polynomial degree of the underlying base functions high enough so that all features of the base functions are tested. Therefore, we choose a polynomial of degree p=3. The errors resulting from the sign conflict are visible in the intensity plot. Consequently, we compare in Figure <ref> the intensity plots resulting from the existing implementation of the Nédélec elements in . The intensity plot is computed in the first column with the FE_Nedelec[<https://www.dealii.org/current/doxygen/deal.II/classFE__Nedelec.html>] class, which does not support non-oriented meshes. The resulting intensity distribution differs from the correct solution. The results computed with the existing implementation of the FE_NedelecSZ[<https://www.dealii.org/current/doxygen/deal.II/classFE__NedelecSZ.html>] class are presented in the second column. Here the solution on the uniform refined grid is correct, but on the isotropic refined grid, the solution differs from the correct solution. The result from the here presented extension of the FE_NedelecSZ class is shown in the third column. Here is also the solution on the isotropic refined grid correct. §.§ Silver ball in vacuum To validate our implementation for non-orientable grids, we compute the scattering of a planar electromagnetic wave on a silver ball in a vacuum once via FEM and once with Mie's theory <cit.>, a well-established method for calculating the scattering of electromagnetic waves by spherical particles. For our simulation, we assume a silver ball with a radius of 100nm and a complex refractive index of r_Ag=0.0+4.0i that is hit by an incident planar wave with a wavelength of λ=500 nm and linear polarisation in the x-direction. To compute the scattering of the electric field in three dimensions via FEM, we use Nédélec elements with polynomial order p=2 as basis functions and solve the time-harmonic Maxwell's equations, as presented in chapter <ref>. Also, we employ a domain decomposition of the computational grid by decomposing the grid into four concentric shells. Each shell is further divided into two half-shells, resulting in eight subdomains. Here, using hanging nodes allows us to use adaptive mesh refinement around the silver ball, where the electric field is expected to vary significantly. Thereby we can increase the accuracy of our simulation without adding too many additional DoFs. For the computation of the electric field via Mie's theory, we employed the library scattnlay <cit.>. In Figure <ref>, we are comparing the result obtained by Mie's theory, and once obtained with FEM, we obtain a generally good agreement between the results. However, the computation of the scattered electric field of the silver ball with FEM proves quite challenging. Here we can observe some numerical artifacts at the north and south positions of the nano particle. The computation of the scattering field from nano particles provides a challenging benchmark for FEM, as the results can be validated by comparison with the results from the well-established Mie Theory. Therefore further studies of the nano particle are of interest. 
§.§ Laser-written waveguide To test our implementation of hanging nodes for Nédélec elements on non-orientable grids in a practical application in optics simulations, we consider a waveguide created by writing six modifications into a carrier substrate with a laser, causing the substrate to compress in the center. To simulate the behavior of a laser in that waveguide, we again use the FEM approach discussed above. The geometry is quite complex, so we employ domain decomposition and adaptive mesh refinement. For the simulation, we assume the carrier material to have a refractive index of n_cladding=1.48995 and the compressed center to have a refractive index of n_center=1.4906. The modifications are spaced 3 µm apart, and the incident laser light has a wavelength of λ=660 nm and is linearly polarised in the x-direction. § CONCLUSION In this work, we considered the sign conflict, specifically in scenarios where hanging nodes are present. We provide a comprehensive guide in terms of mathematical derivations and algorithmic designs for resolving this sign conflict. These concepts can be applied to any software package that supports Nédélec elements and locally refined meshes on quadrilaterals or hexahedra with hanging nodes. Our choice of deal.II as a programming platform makes the implementation highly accessible and user-friendly. The new implementation was demonstrated on four numerical experiments that include qualitative comparisons in two and three spatial dimensions as well as brief computational convergence studies. Finally, a current practical example from optics simulations showing a laser-written waveguide was presented. § ACKNOWLEDGMENTS This work is funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453). Furthermore, we would like to thank Tim Haubold and Philipp König for many fruitful discussions and Clemens Pechstein for tips on how to find mistakes in the implementation.
http://arxiv.org/abs/2306.02716v1
20230605090908
Current status and operation of the H.E.S.S. array of imaging atmospheric Cherenkov telescopes
[ "S. Ohm", "S. Wagner" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.HE" ]
1]Stefan Ohm [email protected] 2]Stefan Wagner [email protected] [1]organization=Deutsches Elektronen-Synchrotron DESY, addressline=Platanenallee 6, city=Zeuthen, postcode=15738, country=Germany [2]organization=Landessternwarte, Universität Heidelberg, addressline=Königstuhl, city=Heidelberg, postcode=69117, country=Germany The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes (IACTs) to study gamma-ray emission from astrophysical objects in the Southern hemisphere. It is the only hybrid array of IACTs, composed of telescopes with different collection area and footprint, individually optimised for a specific energy range. Collectively, the array is most sensitive to gamma rays in the range of 100 GeV to 100 TeV. The array has been in operation since 2002 and has been upgraded with new telescopes and cameras multiple times. Recent hardware upgrades and changes in the operational procedures increased the amount of observing time, which is of key importance for time-domain science. H.E.S.S. operations saw record data taking in 2020 and 2021 and we describe the current operations with specific emphasis on system performance, operational processes and workflows, quality control, and (near) real-time extraction of science results. In light of this, we will briefly discuss the early detection of gamma-ray emission from the recurrent nova RS Oph and alert distribution to the astrophysics community. Gamma rays Cherenkov telescopes Telescope operation § INTRODUCTION The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes (IACTs) operating in the Khomas Highland in Namibia. In October 2019, the H.E.S.S. collaboration entered a first extension phase that lasted until the end of September 2022. The main operations goals for this extension phase were an increase in telescope and instrumental reliability as well as a significant increase in total observing time. This contribution describes the main technical activities and changes in operation procedures that were implemented in the first extension phase. They led to a ∼50% increase in yearly total observation time as well as a significant improvement in telescope up-time, reaching 98% per telescope, and overall data quality. The three main elements to achieve these goals are: 1) Hardware upgrades and maintenance; 2) Observations under moderate moonlight and twilight; 3) Observation procedures and data quality monitoring. § HARDWARE UPGRADES AND AVAILABILITY Ever since the installation of the first telescope in 2002, the H.E.S.S. system underwent frequent hardware upgrades and maintenance efforts to improve or maintain telescope performance. Figure <ref> shows a timeline with major hardware upgrades conducted throughout the 20-year history of H.E.S.S. The most elaborate upgrade was the installation of the large CT 5 telescope in 2012, making H.E.S.S. the only operational hybrid IACT system to date. In 2015/2016 the four Cherenkov cameras of the smaller H.E.S.S. telescopes were upgraded (HESS-IU) <cit.>. At the beginning of the first extension phase, the camera of the CT 5 telescope was replaced with a Cherenkov Telescope Array (CTA) prototype FlashCam camera <cit.>. The installation and integration of the camera went very smoothly with detection of the Crab Nebula on the first night of observations <cit.>. Since installation end of 2019, FlashCam operation is very stable with an up-time exceeding 99% and fulfilling CTA requirements <cit.>. 
In preparation for the upcoming CT 5 camera upgrade, and expected longer-term H.E.S.S. operation, the data acquisition system (DAQ) as well as the on-site computing cluster underwent a major upgrade in early 2019. This upgrade was finalised within 3 months after the integration of the new CT 5 camera in the array <cit.>. The overall downtime due to DAQ problems could be halved to ∼0.5% in the first extension phase. The increased hardware availability of the cameras in particular led to a more stable overall system and reduced downtime of other components as well. With further improvements made to other sub-systems like the HESS-IU cameras, or the tracking control system, we estimate the improvement in additional observation time as 100-150 hours per year. Figure <ref> shows the long-term data-taking efficiency since the installation of CT 5 in 2012. A significant increase in average efficiency (on-target data taking, weighted by participating telescopes, excluding bad weather) from ∼(60-80)% before October 2019 to ∼90% after is clearly visible. Around (5-7)% of the total available observation time is spent on telescope slewing between different targets in the sky. § OBSERVATIONS UNDER MODERATE MOONLIGHT AND TWILIGHT Another major goal for the first H.E.S.S. extension phase was to increase total observation time by extending routine observations into periods of moderate amounts of moonlight. Observations with imaging atmospheric Cerenkov telescopes are conducted with very sensitive detectors that were traditionally only operated under dark-sky conditions. Initial tests of observations under moonlight were conducted in 2019 and motivated by the discovery of GRB 190114C with the MAGIC telescopes in moonlight observations <cit.>. The camera hardware settings were fixed in March 2020 and the system basically prepared for continuous moonlight observations. To increase the lifetime of the HESS-IU cameras, one single gain setting was defined for both, observation in astronomical darkness as well as under moderate moonlight <cit.>. The full implementation of regular moonlight observations, including adaptations in e.g. the scheduling, or transient follow-up system was finalised in January 2021. Moonlight observations are conducted up to a moon phase of 40%, a target-to-moon separation between 45^∘ and 145^∘, and a maximum predicted night-sky-background (NSB) level of 3.5 times the dark NSB of 100 MHz per pixel in the HESS-IU cameras. Observations with the H.E.S.S. array can now be conducted for around 250 hours extra each year without a significant loss in sensitivity or performance during periods of moderate moonlight. A further increase in available observation time resulted from a widening of the observing window with an earlier start and a later end of observations each night. Historically, H.E.S.S. observations were conducted in astronomical darkness, starting and ending observations at sun elevation angles of -18^∘. Careful testing of the system behaviour was performed for observations under astronomical twilight in mid and end of 2019. Figure <ref> shows the CT1-5 array trigger rates as a function of sun elevation angle for different telescope pointing directions with respect to the sun position. Only a slow increase in trigger rates can be seen with increasing sun elevation angles, which confirms that it is safe to operate the telescopes in astronomical twilight. 
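The acceptance criteria for moonlight and twilight observations described above can be condensed into a simple per-target check. The following Python sketch is purely illustrative; it is not part of the H.E.S.S. scheduling software, the function and argument names are hypothetical, and only the numerical limits are the ones quoted in the text.

# Illustrative only -- not H.E.S.S. scheduling software.
DARK_NSB_MHZ = 100.0  # dark-sky NSB level per pixel in the HESS-IU cameras

def observable_with_moon(moon_phase, moon_separation_deg, predicted_nsb_mhz,
                         sun_elevation_deg, max_sun_elevation_deg=-18.0):
    """Return True if a target passes the moonlight/twilight acceptance cuts.

    moon_phase            : illuminated moon fraction, 0..1
    moon_separation_deg   : angular distance between target and moon
    predicted_nsb_mhz     : predicted night-sky-background rate per pixel
    sun_elevation_deg     : sun elevation at the time of observation
    max_sun_elevation_deg : -18 deg for astronomical darkness; the relaxed
                            twilight limit is discussed below
    """
    return (moon_phase <= 0.40
            and 45.0 <= moon_separation_deg <= 145.0
            and predicted_nsb_mhz <= 3.5 * DARK_NSB_MHZ
            and sun_elevation_deg <= max_sun_elevation_deg)

# Example: a target 80 deg away from a 30% moon, predicted NSB of 250 MHz,
# observed at a sun elevation of -20 deg, would be accepted.
print(observable_with_moon(0.30, 80.0, 250.0, -20.0))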
The new setting of -16^∘ for the maximum sun elevation angle has been implemented in January 2021 and results in an additional observation time of about 70 hours per year for the above moonlight settings. § OBSERVATION PROCEDURES AND DATA QUALITY MONITORING With the outbreak of the Covid-19 pandemic, a significant change in the H.E.S.S. observation procedure had to be implemented. Before Corona, two on-site experts trained a team of off-site personnel from H.E.S.S. partner institutes that typically conducted one observing shift between two full-moon periods. While travel was severely impacted after March 2020, observations were performed by on-site shift experts, supported by newly hired local shifters, as well as students and members of the University of Namibia, who were still allowed to travel to/from the site. This strong support and change in shift operation allowed H.E.S.S. to conduct science observations throughout the entire pandemic. Since travel restrictions have been eased, H.E.S.S. is operating in a mode where at least one professional local shifter conducts observations throughout an observing shift and is supported by shifters from H.E.S.S. partner institutes. This mode of operation guarantees reliable and stable operation through experienced local shift personnel while being able to train junior researchers from abroad in the operation of the H.E.S.S. telescopes. Furthermore, a remote H.E.S.S. operations room has been established at DESY in Zeuthen and is shown in Fig. <ref>[The remote control room was featured in the H.E.S.S. Source-of-the-Month of https://www.mpi-hd.mpg.de/hfm/HESS/pages/home/som/2022/10/October 2022.]. The remote control room increases flexibility in telescope operation, reduces CO_2 footprint stemming from international travel, as well as allows for training of technical experts before e.g. conducting upgrade or maintenance campaigns on site. The remote control room has been successfully used for all these purposes and further remote control rooms are currently being set up at other H.E.S.S. member institutes. Another major effort went into the documentation of telescope operation, a compilation of How-To's, and troubleshooting guidelines for system or sub-system errors and failures. This activity was particularly important given the changes in operational procedures and extended observations under moonlight and twilight. In particular, the problem troubleshooting guidelines are continuously updated and assure that known problems are identified and resolved as fast as possible during data taking. A newly established off-site data quality team, which rotates and typically consists of two day shifters per observing period monitors the quality of the data taken in the previous night. Data quality from the lowest (e.g. camera, trigger) to the highest (shower images, real-time-analysis sky maps) are checked and errors are flagged. The data quality team also monitors the long-term (sub-)system behaviour such as the muon efficiency, or the number of broken/deactivated pixels in the Cherenkov cameras. Problems encountered during the night and discovered the next day are documented in Shift Workbooks that serve as a central hub to collect the monthly shift schedule as well as target-of-opportunity (ToO) observations. Discussions between on-site and remote shift crew, day shifters, and sub-system experts are mainly conducted via Slack messenger. 
Weekly virtual meetings between sub-system experts and the shift crews summarise data taking and provide a forum for more detailed discussions. Monthly Operations calls are held in between observation periods and discuss longer-term operational activities like the implementation of new observing modes or prepare and inform about maintenance campaigns. These platforms have been found to be critical for information exchange and troubleshooting of issues. We estimate that through the implementation of these procedures and additional 50 hours of observations per year could be achieved. A revision of the calibration strategy resulted in another 15 hours of extra observation time per year. Monthly summary mails about technical activities on- and off-site, telescope operations, and data-taking efficiency are prepared and sent to the collaboration. § H.E.S.S. OPERATION AND OPTIMISATION FOR TRANSIENT SCIENCE The efforts described above resulted in a significant increase in operational efficiency and record-breaking data-taking in 2020 and 2021. In particular, 2021 saw a total observation time exceeding 1500 hours (or 17% duty cycle) also thanks to favourable weather conditions. That 2021 was not an exception can also be seen in Fig. <ref>. Compared to previous years more than 300 hours of extra time are now available for science observations. While downtime due to weather and transitions can hardly be reduced, the downtime due to hardware problems was reduced considerably to the few percent level. Another advantage of the much more stable operation is the homogeneity of the data that is taken. Since 2020, the vast majority of data is taken with the full 5-telescope array (cf. Fig. <ref>). Throughout the first extension phase, H.E.S.S. operations have been optimised to maximise telescope availability for ToO observations. Continued improvements have been made to the H.E.S.S. transients alert system, real-time-analysis, and next-day on-site analysis capabilities <cit.> as well as data transfer off-site for final calibration and data analysis. All these activities have ultimately allowed H.E.S.S. to reduce the time between data taking and final analysis to <2 days. Furthermore, continued improvements in the atmosphere monitoring and treatment of observation conditions in Monte Carlo simulations and instrument response functions are being implemented. The discovery of the first Galactic transient in very high energy gamma rays, the recurrent nova RS Oph was the culmination of all these activities <cit.>. Data were taken during moon time, astronomical twilight, and under strongly varying atmospheric conditions (which were corrected for in the final analysis). Furthermore, the fast real-time and next-day analysis informed shifters and experts early about the detection and source properties, which allowed H.E.S.S. to inform the community via Astronomers Telegrams <cit.>. This is just an example demonstrating how the developments implemented in the first H.E.S.S. extension phase and before put the experiment in an ideal role for time-domain multi-messenger and multi-wavelength astronomy. H.E.S.S. and its various sub-systems are undergoing a phase to prepare a low-maintenance mode in which no further major upgrades to hardware components or changes in settings are envisaged. This is aimed at minimising the maintenance effort and provide stability in data-taking efficiency during the transition period towards CTA installation and buildup. 
Recognition of the (mostly) junior researchers working on technical tasks and maintaining the various sub-systems is elevated through newly established public and citeable internal notes that are published via Zenodo, and on the official H.E.S.S. webpages, including appropriate advertisement through the various social media channels. Lessons learnt are communicated through these notes internally and to the community to maximise knowledge transfer from H.E.S.S. as the only hybrid IACT system to CTA. § ACKNOWLEDGEMENT Full H.E.S.S. acknowledgements can be found https://www.mpi-hd.mpg.de/hfm/HESS/pages/publications/auxiliary/HESS-Acknowledgements-2021.htmlhere. elsarticle-num
http://arxiv.org/abs/2306.12090v1
20230621080936
Stochastic fluctuations of diluted pedestrian dynamics along curved paths
[ "Geert G. M. van der Vleuten", "Federico Toschi", "Wil H. A. Schilders", "Alessandro Corbetta" ]
physics.soc-ph
[ "physics.soc-ph", "physics.data-an" ]
Department of Applied Physics and Science Education, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands Department of Applied Physics and Science Education, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands Consiglio Nazionale delle Ricerche-IAC, Rome, Italy Department of Mathematics and Computer Science, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands [email protected] Department of Applied Physics and Science Education, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands As we walk towards our destinations, our trajectories are constantly influenced by the presence of obstacles and infrastructural elements: even in the absence of crowding, our paths are often curved. Over the last two decades pedestrian dynamics have been extensively studied, aiming at quantitative models with both fundamental and technological relevance. Walking kinematics along straight paths have been experimentally investigated and quantitatively modeled in the diluted limit (i.e. in the absence of pedestrian-pedestrian interactions). It is natural to expect that models for straight paths may be an accurate approximation of the dynamics even for paths with curvature radii much larger than the size of a single person. Conversely, as path curvature increases, one may expect larger and larger deviations. As no clear experimental consensus has been reached yet in the literature, here we accurately and systematically investigate the effect of path curvature on diluted pedestrian dynamics. Thanks to an extensive and highly accurate set of real-life measurement campaigns, we derive and validate a Langevin-like social-force model capable of quantitatively describing both averages and fluctuations. Leveraging the differential-geometric notion of covariant derivative, we generalize previous work by some of the authors, effectively casting a Langevin social-force model for the straight walking dynamics in a curved geometric setting. We deem this the necessary first step to understand and model the more general and ubiquitous case of pedestrians following curved paths in the presence of crowd traffic. Stochastic fluctuations of diluted pedestrian dynamics along curved paths Alessandro Corbetta July 31, 2023 § INTRODUCTION As we walk towards our destinations, indoors or in open spaces, we typically prefer to follow the most direct (typically straight) path. Yet, obstacles, infrastructural elements, or crowd traffic <cit.> make our preferred paths unavoidably curved (cf. Fig. <ref>). Additionally, trajectories invariably exhibit fluctuations associated with sway and inter-subject variability. Over the last two decades, pedestrian kinematics has been extensively investigated experimentally <cit.>, and the motion of pedestrians walking along straight paths has been thoroughly analyzed and modeled (e.g. <cit.>). Especially in diluted conditions, i.e. in the absence of pedestrian-pedestrian interactions, these analyses were capable of successfully modeling the dynamics, including the stochastic fluctuations around average motions <cit.>. In the case of paths having curvature radii much larger than the scale of a single person, we expect models for straight dynamics to hold locally. In fact, under these conditions, paths can be reasonably well approximated as being locally straight. 
One may thus wonder under which conditions and how the known model for straight paths can be adapted to generic curved paths. Indeed, as paths curvatures increase, one may expect larger and larger deviations from the the assumption of a locally straight dynamics. No experimental consensus has yet been reached on how paths curvature affect pedestrians dynamics. Only few, and partially contradictory, studies are available on the topic. These report anti-correlation between velocity and curvature (with linear <cit.> or power law trend <cit.>) or, even, an apparent absence of curvature effects <cit.>. The aim of this work is to understand and to quantitatively model the dynamics of pedestrians walking along curved preferred paths, including averages and stochastic fluctuations, considering a broad spectrum of curvature radii even as small as few pedestrian diameters. We opt to address this outstanding issue restricting to crowd scenarios in the diluted limit. Thus, the environment is the only reason pedestrians opt for curved paths. We deem this setting the necessary first step towards the goal of understanding the generic case in which curved paths appear in combination with and as a consequence of the overall crowd traffic. Understanding the kinematics of pedestrians is part of a challenging and broad multidisciplinary scientific effort with outstanding societal importance due to implications in crowd management <cit.> and urban design <cit.>, and sharing deep fundamental challenges connected with active flowing matter and statistical physics <cit.>. One of the main obstacles in fully understanding crowd flows is the inherent technical challenge of obtaining measurements with sufficient spatio-temporal accuracy and statistical resolution, fully capturing the large variability and complexity of pedestrian kinematics. Over the past years, experimental evidence on pedestrian behavior has been collected mostly in laboratory scenarios, allowing to probe average behavior, typically studied as a function of the pedestrian density (e.g.  <cit.>). Average behavior are usually encoded in so-called fundamental diagrams, connecting, e.g., pedestrian density with average velocity or fluxes <cit.>. Only more recently, accurate and privacy-respectful large-scale measurements in real-life conditions have become a possibility, either via custom setups developed in research environments <cit.> and via commercial products <cit.>. Key have been three-dimensional computer vision approaches based on stereoscopic vision or LiDar-like approaches <cit.>. Data acquisition with a 24/7 schedule in public locations has enabled the collection of highly-resolved, high statistics datasets (millions of trajectories), allowing statistical analyses up to rare events and opening new possibilities of model validation <cit.>. In this paper, we use for the first time high-resolution tracking to collect wide trajectory datasets to investigate the diluted dynamics of pedestrians walking along curved paths. We have performed large-scale data acquisition campaigns in Dutch train stations (Eindhoven, Amsterdam South) and laboratory experiments (in the Eindhoven University of Technology campus, NL). On these bases, we identify the effect of increasing curvature levels on walking velocities, presenting a curvature-velocity fundamental diagram, which we enrich with measurements of the typical fluctuations. 
This enables us to present a Langevin-like model reproducing quantitatively the complete statistics of position and velocity as curvature changes. Our work generalizes the social force-like <cit.> model presented in <cit.>, which quantitatively reproduces the diluted walking dynamics along straight paths. We effectively cast such a model to a curved geometry: even in absence of (social)-forces, pedestrians could follow curved trajectories. For this, we employ the language of differential geometry (in particular, through the notion of covariant derivative). On the basis of our data analysis, we extend the social-force terms to integrate curvature-dependent effects (with radii down to 0.6 m). This paper is structured as follows: in Sect. <ref> we introduce the geometric context of tubular neighborhoods of trajectories, central for the forthcoming analyses. In Sect. <ref>, we present the experimental data that we collected for our analyses, together with relevant technical references on data acquisition. Based on the measurements, in Sect. <ref>, we present a curvature-velocity fundamental diagram, comparing a simple analytic model with measurements. In Sect. <ref>, we present our quantitative Langevin-like model, whose comparison with measurements is reported in the results Sect. <ref>. A final discussion closes the paper. We opt to postpone most of the technical and formal details connected with differential geometry to the appendices. § KINEMATICS OF CURVED WALKING PATHS IN TUBULAR NEIGHBORHOODS We focus on bundles (i.e. sets containing similarly shaped trajectories) of real-life pedestrian trajectories on the plane x=(x,y): {t↦x_ν(t) = x_ν(t)_x + y_ν(t)_y, ν=1,2,…}, where ν=1,2,… serves as a trajectory index, x_ν(t), y_ν(t) are the horizontal and vertical components of trajectory ν at time t, and (_x,_y) is the (fixed) orthonormal base associated with the (x,y) coordinates (cf. examples in Fig. <ref>). These trajectories connect predefined origin and destination, which are separated by, e.g., obstacles or architectural fixtures. The need of bypassing these elements makes typical trajectories, and thus the whole bundle, non-rectilinear. Due to sway and inter-subject variability, trajectories exhibit fluctuations. We analyze such fluctuations in reference with the average path of the bundle, = = (s)_x+ (s)_y, where the variable s denotes a smooth monotonic parametrization. We identify with the individual preferred path, i.e. the trajectory that each pedestrian aims at following. Examples of such average paths are reported as thick lines in Fig. <ref>. We postpone the technicalities of the formal definition of the average path, (Eq. (<ref>)), as a function of the trajectory bundle to Appendix <ref>. We study fluctuations around considering its neighborhood. We employ coordinate lines parallel and normal to (Fig. <ref>), parametrized by the variables s and h, respectively. As mentioned, s increases as we move along , whereas h increases as we move in the orthogonal direction (towards the local curvature center). We name (_, _⊥) the local orthonormal base parallel to these directions. Note that curves defined by h=const wrap around while remaining, in a sense, parallel to it. As such the (s,h) parametrization of the neighborhood is usually named tubular. For smooth and limited h, (s,h) uniquely parameterize the tubular neighborhood (e.g. <cit.>). 
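To make the tubular parametrization concrete, the following minimal Python sketch recovers the coordinates (s, h) of a measured position with respect to a discretised average path by projecting onto the nearest path segment. It is an illustrative reconstruction consistent with the construction detailed in the appendices, not the authors' code; in particular, the sign convention for h (increasing towards the local curvature centre) may require flipping the normal, and all names are ours.

import numpy as np

def tubular_coordinates(path_xy, point_xy):
    """Project point_xy onto a polyline sampling the average path.

    path_xy  : (N, 2) array of points along the average (preferred) path
    point_xy : (2,)  array, measured pedestrian position
    Returns (s, h): arc length along the path and signed normal deviation.
    """
    segs = np.diff(path_xy, axis=0)
    seg_len = np.linalg.norm(segs, axis=1)
    arclen = np.concatenate([[0.0], np.cumsum(seg_len)])

    # Orthogonal projection of the point onto each segment, clipped to [0, 1].
    rel = point_xy - path_xy[:-1]
    t = np.clip(np.einsum('ij,ij->i', rel, segs) / seg_len**2, 0.0, 1.0)
    foot = path_xy[:-1] + t[:, None] * segs
    i = np.argmin(np.sum((point_xy - foot)**2, axis=1))   # closest segment

    e_par = segs[i] / seg_len[i]               # local longitudinal direction
    e_perp = np.array([e_par[1], -e_par[0]])   # 90-degree rotation of e_par;
                                               # flip sign if h must point
                                               # towards the curvature centre
    s = arclen[i] + t[i] * seg_len[i]
    h = float(np.dot(point_xy - foot[i], e_perp))
    return s, h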
We unambiguously decompose velocities, = ẋ_x + ẏ_y, applied at a point in the neighborhood of , in a transversal, v^⊥, and a longitudinal component, v^, respectively perpendicular and parallel to a local coordinate line (h=const). In formulas = v^_ +v^⊥_⊥. Further details on the parametrization of the tubular neighborhood are given in Appendix <ref>. Our analysis targets kinematic implications on pedestrian trajectories of the curvature of the preferred path. We consider the local curvature of , k(s). By definition, k(s) is the reciprocal of the radius of the circle osculating and reads (e.g. <cit.>) k(s) = ^'(s) ^''(s)-^''(s) ^'(s)/[(^'(s))^2+(^'(s))^2]^3 / 2, where ^' denotes the first derivative of the x component of with respect to s (the second derivative and operations on the y component are written accordingly). § MEASUREMENTS Our study leverages on trajectory datasets acquired via three large-scale pedestrian tracking campaigns all performed in The Netherlands. Our campaigns specifically took place in Amsterdam south train station (AMS), Eindhoven train station (EHV) and on the university campus in Eindhoven (TUE). All our data has been acquired in naturalistic condition (with the exception of the TUE campaign in which pedestrians have been instructed to roughly follow a given path) and in a fully privacy respectful manner. Commercial or research-grade overhead tracking sensors have been employed. Since we are interested in the dynamics of undisturbed pedestrians, we consider trajectories in low density conditions (i.e. in absence of other neighboring pedestrians). In the following we provide a brief description of the datasets (for technical details about the average paths and the selection procedures, see Appendices <ref>-<ref>). Amsterdam south train station (AMS). At this measurement location on platform 2.1, we consider high-resolution data in the vicinity of the staircase (Fig. <ref>(a)) for the period spanning April 2020 to December 2020 (196 days). Pedestrians arriving by train normally leave the platform via the staircase depicted in the middle. Thus we select some of the many trajectories of pedestrians turning from the platform towards the staircase. The strict selection criteria (Appendix <ref>) result in a selection of 2,700 measured trajectories in Amsterdam south train station. The average path has a gradually increasing curvature and consequently a broad curvature spectrum with a radius of curvature ranging from 5 to 0.9 meters. The length of the average path is approximately 2 meters. Eindhoven train station (EHV). At the measurement domain within Eindhoven train station platform 2.1, measurements have been performed between April 2021 and September 2021 with a sample frequency of 10 Hz. We have chosen five winding paths in this train station as preferred paths as these are walked by many pedestrians. Additionally, all paths span wide curvature ranges. A top view of the platform with three preferred paths is shown in Fig. <ref>(c). Totally 2,700 measured trajectories are selected in Eindhoven train station. The average paths in Eindhoven train station have lengths ranging from 4 to 10 meters. The minimum radius of curvature reached by the preferred paths in this station is 2.1 meters. Eindhoven University of Technology (TUE). This measurement campaign is conducted as an experiment at a large public area within the University campus in Eindhoven, the Netherlands in February 2019. 
During one minute, seven participants were asked to walk around two traffic cones, 3 meters apart, resulting in elliptical-like trajectories (Fig. <ref>(b)). The pedestrians kept their distance to create diluted conditions. The average path has a broad curvature spectrum with a minimal radius of curvature around 0.6 meters. The measured trajectories are sampled with a frequency of 30 Hz (further technical information on this experimental setup based on overhead depth sensors are in <cit.>). § CURVATURE-VELOCITY FUNDAMENTAL DIAGRAM AND FLUCTUATIONS We report here on the effect of the preferred path curvature on the average velocity in the diluted flow limit. We compare a closed-form theoretical model with high statistics measurements. These enable us to derive a fundamental diagram-like relation for average velocity and path curvature. Consistently with previous research <cit.>, we observe that the walking velocity decreases with the curvature of the path. We assume that body rotation, necessary to adopt a curved trajectory, is the key reason for velocity reduction. Let denote the velocity pedestrians adopt when walking along straight paths (also Straight-Path Velocity, SPV, henceforth). In our datasets ∈ [1.10,1.36]m/s holds, in agreement with literature velocity measurements in the diluted limit (e.g. <cit.>). Suppose a pedestrian with body radius δ (half body width) walking along a curved path with radius R=1/k (as in Fig. <ref>). We expect the velocity of the body parts following the outer bend to remain equal to the straight-path velocity. Assuming a rigid body with shoulder line directed toward the curvature center, the Body Center Velocity (BCV), v_ BC, satisfies v_ BC < v_ SP, and, the following relation between v_ SP, v_ BC, δ and R hold v_ BC/R = v_ SP/R+δ. Eq. (<ref>) expresses the physical consequence that under our rigid body and shoulder alignment assumptions, the angular velocity is constant. Linearizing Eq. (<ref>) around k=0 returns a more familiar fundamental diagram-like expression v_ BC(k)=v_ SP(1-kδ). In Fig. <ref> we compare our model with our experimental measurements. We factor out the context-dependency of the velocity, by scaling the BCV to the SPV, i.e. we consider the following dimensionless longitudinal velocity at varying curvature v̂^(k) = ⟨v_ BC(k)/v_ SP⟩_k, where the average is taken among measurements having the same k value (where a binning in k is considered). For each measurement domain, the SPV is determined separately by extrapolating the longitudinal velocity versus curvature relation towards k=0. We report the relation in Eq. (<ref>) with a solid blue line, with body radius δ fitted to δ≈ 0.23 m. The pink area represents a margin of error obtained by fitting Eq. (<ref>) with 100 random partitions of the data, which are compatible with body radii δ∈[19,27]cm consistently with expectations. We report in solid red the linearized relation in Eq. (<ref>) (δ=0.19). Within the curvature range explored (k∈ [0,1.6]m^-1), the complete (Eq. (<ref>)) and linearized relation (Eq. (<ref>)) appear equally compatible with the data. For technical simplicity, in our Langevin-like model proposed in Section <ref> we will employ the linearized model. Velocity fluctuations. We conclude this section reporting on fluctuations beside the curvature-dependent averages (Eq. (<ref>)). Due to statistics reasons we focus on our richest dataset, AMS. In Fig. <ref> we report the probability density function of longitudinal (v^) and transversal (v^⊥) velocity fluctuations. 
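A minimal sketch of how the body radius δ could be estimated from such data is given below. It is not the authors' fitting pipeline: we assume arrays of instantaneous path curvature and longitudinal body-centre velocity extracted from the trajectories, together with a known straight-path velocity v_SP, and we fit both the full relation v_BC = v_SP/(1 + kδ) and its linearization v_BC = v_SP(1 - kδ) to the binned, dimensionless data.

import numpy as np
from scipy.optimize import curve_fit

def fit_body_radius(curvature, v_long, v_sp, n_bins=20):
    """curvature, v_long: 1D arrays of instantaneous k and body-centre speed."""
    v_hat = v_long / v_sp                               # dimensionless velocity
    bins = np.linspace(0.0, curvature.max(), n_bins + 1)
    idx = np.digitize(curvature, bins) - 1
    k_bin = np.array([curvature[idx == b].mean() for b in range(n_bins)])
    v_bin = np.array([v_hat[idx == b].mean() for b in range(n_bins)])
    good = ~np.isnan(v_bin)                             # drop empty bins

    full = lambda k, delta: 1.0 / (1.0 + k * delta)     # v_BC = v_SP R/(R+delta)
    lin = lambda k, delta: 1.0 - k * delta              # first-order expansion
    d_full, _ = curve_fit(full, k_bin[good], v_bin[good], p0=[0.2])
    d_lin, _ = curve_fit(lin, k_bin[good], v_bin[good], p0=[0.2])
    # The text reports values of roughly 0.23 m and 0.19 m, respectively.
    return d_full[0], d_lin[0]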
In line with the fundamental diagram (Fig. <ref>), the means of the longitudinal velocity decrease for higher curvature levels. Compensating for this shift considering v^_ shifted:=v^-v_ BC(k) with δ=19 cm (cf. Fig. <ref>), it can be seen that fluctuations in the (shifted) longitudinal velocity are curvature independent and have a Gaussian fluctuation structure with standard deviation σ_v^ = 0.19 m/s. Similarly, fluctuations in transversal velocity do not depend on the curvature and have Gaussian fluctuation, σ_v^⊥ = 0.15 m/s. These measurements, after velocity shifts are compatible with experimental campaigns focusing on straight paths <cit.>. Similarly to <cit.>, the Gaussian behavior of velocity fluctuations will be crucial in modeling perspective, posing the bases to our Langevin-like structure. § LANGEVIN-LIKE MODEL FOR CURVED TUBULAR NEIGHBORHOOD In this section we show that the walking dynamics around preferred paths can be modeled quantitatively with a Langevin-like model defined on the tubular neighborhood of . Fluctuations around straight paths. The model introduced here extends the Langevin-like model previously proposed by some of the authors and that addresses the case in which is a straight trajectory <cit.>. In <cit.>, the fluctuating motions of pedestrians have been modeled as a superposition of social forces determining the individual acceleration, ẍ. Assuming for simplicity a coordinate system (x, y) in which is the path y=0, thus x identifies the position along , and y is the transversal coordinate (i.e. = (s, 0), (_, _⊥) ≡(_x, _y), (ẋ, ẏ) = (v^, v^⊥)), individual accelerations read ẍ = f(ẋ,)𝐞_x + (-2 β y -2μẏ)𝐞_y + σẆ = f(v^,)𝐞_ + (-2 β y -2μ v^⊥)𝐞_⊥ + σẆ. The previous equation models the following effects E1 - self-propulsion along driven by f(v^). At first-order Taylor expansion f(v^, ) is a relaxation term towards a desired walking speed for the body center , i.e. f(v^, ) = -2α(v^- ), where α is inversely proportional to the time-scale τ = (2α)^-1 for relaxation towards the desired velocity. Note that this term can be interpreted as an active viscous term with quadratic velocity potential Φ_(v^, ) = α(v^ - )^2. E2 - transversal confinement in the neighborhood, and transversal velocity damping, which is modeled as a damped harmonic oscillator. This is parametrized by a linear stiffness coefficient β and a linear friction coefficient μ. E3 - random noise, Ẇ := (Ẇ^, Ẇ^⊥), to generate fluctuations and recover randomness in behavior. For simplicity, this is assumed to be δ-correlated in time, isotropic, with components mutually uncorrelated Gaussian distributed (σ is a scale parameter). This hypothesis quantitatively agrees with the observed fluctuations in terms of correlation structure and probability density of velocities and positions. Note that in <cit.>, pair-wise interactions to reproduce the statistics of the avoidance behavior have been included in this model. Parallel dynamics in a tubular neighborhood: geometric setting. Here we extend model in Eq. (<ref>) to include curvature effect. When pedestrians follow a path with small curvature, we do not expect effects due to curvature: path appears locally straight. Pedestrians in these conditions would walk following their curved, preferred path. We incorporate this aspect in the left-hand-side of the equation of motion (<ref>). Heuristically, we opt to vary the underlying geometry. First, in absence of forces and noise, Eq. (<ref>) describes a pedestrian conserving their initial momentum: ẍ = 0 ⇒ ẋ = const. 
This translates into a rectilinear motion (depicted by the black arrows in Fig. <ref>). We generalize the left-hand-side of Eq. (<ref>), considering broader possibilities of force-free curves (typically addressed as geodesic curves) as solutions of := x + C( x, ẋ, ) = 0. Here, we adopt the notation for the covariant derivative of ẋ. In the field of differential geometry, the covariant derivative is commonly used to express the change of vectors when transporting them in a (curved) geometry <cit.>. Additionally, the correction term, C( x, ẋ, ), is usually expressed by so-called Christoffel symbols of the second kind: C( x, ẋ, ) := ∑_i,j,k=1,2Γ^i_k jẋ^kẋ^j_i (where the indexed notation satisfies (x^1,x^2):=(x,y), (𝐞_1,𝐞_2):= (𝐞_x,𝐞_y)). Technical properties of the covariant derivative and Christoffel symbols are postponed to Appendix <ref>. Our tailored correction term is constructed such that geodesic curves (i.e. solutions of Eq. (<ref>), cf. example blue arrows in Fig. <ref>) respect the following physical properties: * geodesic curves conserve the (Euclidean) kinetic energy, i.e. = 0 ⇒d/dtẋ^2 = d/dt(ẋ^2+ẏ^2) = 0. * geodesic curves initially parallel to , i.e. with zero initial orthogonal velocity, remain parallel to at all times. In formulas v^⊥(t = 0) = 0 = 0 ⇒ v^⊥(t) = 0, ∀ t > 0 or, equivalently, in components ḣ(t=0) = 0 = 0 ⇒ḣ(t) = 0, ∀ t > 0. This means that if is not straight, also geodesics will not be. Two examples of geodesics are shown in Fig. <ref> (curve 𝐚 and 𝐛). It can be seen that the properties of remaining parallel and conserving mechanical energy are satisfied, which is ensured by the centripetal-like acceleration C(, , ) - (note that this is not a covariant derivative of Levi-Civita type for the Euclidean metric). In our forthcoming simulations we opt to generate trajectories in the physical (x,y) coordinates. This allows to easily account for forcing terms and possibly generalize our work to include interactions. On the other hand, the correction term remain defined via an implicit system of equations. To prevent this section from becoming needlessly technical, we opt to postpone our derivation of the expression of the correction term following the two hypotheses above as well as their transformation in (x,y) coordinates in Appendix <ref>. Pedestrian fluctuations in a tubular neighborhood. To model the fluctuating behavior of pedestrians walking along curved paths, we perturb the force-free dynamics described by Eq. (<ref>), including counterparts of the effects E1-E3. We additionally hypothesize, consistently with the fundamental diagram in Sect. <ref>, that the body center velocity depends on the instantaneous curvature following Eq. (<ref>). We assume that pedestrians (in absence of stochastic fluctuations) can adjust instantaneously to such velocity as the curvature changes along the path (i.e. when k̇≠ 0). In other terms, the combination of Eq. (<ref>) and Eq. (<ref>) would provide a curvature dependent propulsion force f(v^,(k)). Yet, a propulsion force f(v^,(k)) built by bare combination of the two terms, would take a time τ >0 to relax to changes in desired velocity due to changed curvatures. We instead assume that pedestrians are instantaneously capable of adjusting to variations in curvature. This can be modeled by correcting the propulsion term including a contribution of the curvature time gradient k̇. This yields a corrected propulsion f̂(v^,(k),k̇) which reads f̂(v^,(k),k̇) = f(v^,(k)) - δk̇ = -2α(v^ - (1-δ k)+δ/2αk̇). 
Note that f̂≡ f whenever the curvature gradient is zero, e.g. on straight paths. Combining our geodesic flow parallel to the curved preferred path (Eq. (<ref>)), the effects E1-E3, abd the corrected propulsion term in Eq. (<ref>) yields the following force balance ∇_ x x = f̂(v^)𝐞_ + (-2 β h -2μ v^⊥)𝐞_⊥ + σẆ. An example of a trajectory generated by this model is in Fig. <ref> (curve 𝐜), where the modeling forces confine the trajectory around the preferred path . In the next section we show that Eq. (<ref>) describes quantitatively the statistics of the fluctuations of pedestrians walking about curved paths. § RESULTS In this section we compare the stochastic dynamics modeled by Eq. (<ref>) with experimental data. We focus on trajectories following the curved path at Amsterdam South station (AMS dataset), as it is the richest in amount of trajectories allowing to fully resolve and compare statistical fluctuations. We consider the SPV, , and body size radius δ determined in Appendix <ref>. We estimate the scale parameters (α, β, μ, σ) by considering Langevin potentials in longitudinal velocity (shifted as in Sect. <ref>) Φ_v^_ shifted∼ -log (v^_ shifted), (v^_ shifted) here indicates the probability density of v^_ shifted, and lateral deviation Φ_h and transversal velocity Φ_v^⊥. The fitting procedure follows the approach in <cit.>, and technical details are in Appendix <ref>. We report the values of the model parameters in <ref>. With the estimated parameters from <ref> and the simulation procedure explained in Appendix <ref>, we perform simulations of 2,700 trajectories with a discretization step size of 0.1 seconds, comparable to dataset AMS. Fig. <ref> displays a collection of simulated trajectories, qualitatively indistinguishable from the measurements. Next, we consider stochastic properties by comparing the empiric and simulated probability distributions of the fluctuations in three observables: shifted longitudinal velocity, transversal velocity and lateral deviation. The empiric probability distribution functions, as well as the ones obtained from the simulations, are shown in Fig. <ref>-<ref>. It can be seen that the stochastic properties of the velocity fluctuations are captured by the model. The simulated fluctuations in transversal position are also in good agreement. However, for lateral deviations larger than 10 cm (|h|>0.1 m), we observe that the empirical fluctuations deviate from the Gaussian behavior. This could potentially be attributed to architectural constraints within the station (e.g. the entrance of the staircase) which could impede inward (h<0) and facilitate outward fluctuations (h>0). Another important statistical property, also used in the model calibration, is the correlation of the shifted longitudinal velocity. In Fig. <ref>, it can be seen that the empiric v_ shifted^-correlation is recovered reasonably well by the model. § DISCUSSION We have investigated the fluctuating dynamics of undisturbed pedestrians walking along curved paths with high statistical, space- and time-accuracy. Our analysis hinged on large trajectory datasets acquired in both real-life conditions and in a experimental set-up. The trajectories in the datasets cover a broad range of curvature radii. Thanks to these, we have shown that in the diluted limit a fundamental diagram-like relation between the average longitudinal walking velocity and path curvature exists. Specifically, the average longitudinal velocity decreases for increasing curvature. 
Notably this reduction is quantitatively compatible with a basic rigid-body-like kinematic model. A first-order expansion of such a model, yield a fundamental diagram-like relation. Based on the large datasets, we have analyzed pedestrian motion beyond averages targeting fluctuations in velocity and lateral deviation. These display Gaussian statistics. Besides, the amplitude of the velocity fluctuations (variance) is independent on the curvature level, at for the range of curvatures observed (k∈ [0,1]m^-1). Based on these findings, we have extended the quantitative Langevin-like model by Corbetta et al. <cit.> to reproduce, in a statistically quantitative way, the walking dynamics of pedestrians along generic, curved, average paths. In our model, we have considered pedestrians as particle moving according to a custom geodesic flow shaped after the average path. The geodesics we consider are characterized by the conservation of kinetic energy and by the fact that they remain parallel to the average path (when the initial velocity is). We have modeled pedestrian dynamics by perturbing this geodesic flow by (social-like) forces representing (lateral) path adherence, longitudinal propulsion, and random noisy fluctuations. We have validated the model by comparing the probability density functions and the correlation functions generated by repeated model simulations with our measurements at Amsterdam South station. Our model successfully captures the stochastic features of the motion in terms of fluctuations in velocity and position. We have opted to operate in Cartesian coordinates within a curved geometry, embedding curvature effects in a custom covariant derivative. We believe this choice is instrumental towards further generalization of the model to include, e.g., interactions with other pedestrians and/or different types of forces or noise. All these are typically addressed in Cartesian coordinates. Within the geometric framework we propose, in fact, no coordinate transformations of the forces are required, but only a computation of a correction term (i.e. a Christoffel symbol). Mapping interaction forces in the local coordinate system of each pedestrian would rapidly turn prohibitively complex and computationally expensive. § DEFINITION OF PREFERRED PATH For the definition of the preferred path, we consider a bundle with N trajectories, {_ν(t)|ν=1,2,…,N}. Due to the variability in the velocity of pedestrians, we parameterize each trajectory by the relative time, s := t-t_1/t_2-t_1, where t_1 and t_2 are the times that a trajectory enters and leaves the measurement site, respectively. The preferred path, , is defined as an ensemble average over the bundle at each relative time instance s∈[0,1]: =⟨_ν(s)⟩_ν = 1/N∑_ν=1^N_ν(s). § DATA SELECTION PROCEDURE Trajectory selection AMS. To ensure that the data only contains trajectories under diluted conditions, we restrict to trajectories tracked when no other pedestrian is tracked on the platform. We, furthermore, restrict to walking-speed by removing trajectories with average velocity outside [0.5, 2.5] ms^-1. We consider the bundle with trajectories starting near the railroad (i.e. (x,y) ∈ [0.6, 1.3] × [1.6, 2.3] m^2) and finishing at the staircase (i.e. (x,y) ∈ [-3.0, -0.2] × [0.7, 3.5] m^2) depicted by the two rectangles in Fig. <ref>. We determine an average path, , according to Eq. (<ref>) and parameterize its tubular neighborhood with coordinates s and h as in Appendix <ref>. Note that h represents the normal deviation from . 
For a trajectory (t), we use the evolution of its h-coordinate, h(t), to determine the distance from : ‖- ‖ := 1/t_2-t_1∫_t_1^t_2|h(t)| dt, where the trajectory is defined for time t∈[t_1,t_2]. We improve the bundle by filtering out the 5% most deviating trajectories. That is ‖- ‖>24.6 cm, as depicted in Fig. <ref>-<ref>. Trajectory selection EHV. In contrast with the measurements at Amsterdam train station, nearly always more than one pedestrian is measured at the measurement domain in Eindhoven train station. Therefore we employ a rectangular grid consisting of 3 m× 3 m cells. We define the local density as the number of pedestrians in a grid cell. To ensure diluted conditions, we only consider trajectories where the local density does not exceed one during their course. Furthermore, we ensure walking trajectories by applying the same velocity restriction as in AMS trajectory selection. We group trajectories that originate and terminate in the same areas of the train station into bundles. Five bundles are suited for our analysis as they contain many (curved) paths. Average paths are determined as before. In a similar fashion to AMS trajectory selection, we improve each bundle by discarding the most deviating 10%. The average paths of three bundles are displayed in Fig. <ref> (paths in red, blue, green correspond to bundle 1, 2 and 3 respectively). The average paths of bundle 4 and 5 are displayed in Fig. <ref>. § CONSTRUCTION OF TUBULAR NEIGHBORHOOD AND DERIVATION OF THE COVARIANT DERIVATIVE §.§ Covariant derivative A covariant derivative (a.k.a. affine connection) is a mapping that describes how vectors change when transporting them in a smooth collection of tangent spaces. The concept of covariant derivative can be understood as an generalization of the ordinary derivative towards curved surfaces. For 𝐮 and 𝐯 vectors in a tangent space of a curved surface, the covariant derivative of 𝐮 along 𝐯 is denoted as ∇_𝐯𝐮 and respects the following properties (e.g. <cit.>): (i) ∇_f_1𝐯_1+f_2𝐯_2𝐮 = f_1∇_𝐯_1𝐮 + f_2∇_𝐯_2𝐮, (ii) ∇_𝐯(𝐮_1 + 𝐮_2) = ∇_𝐯𝐮_1+∇_𝐯𝐮_2, (iii) ∇_𝐯(f𝐮) = f∇_𝐯𝐮 + 𝐯(f)·𝐮, for 𝐮,𝐮_1,𝐮_2, 𝐯,𝐯_1,𝐯_2 in a tangent space and f,f_1,f_2 smooth functions. We can define the covariant derivative by defining Christoffel symbols of the second kind, Γ_i j^k. These coefficients determine how basis vectors in different spaces are connected via Γ_i j^k𝐞_k := ∇_𝐞_j𝐞_i. Note that from now on, we will use the Einstein summation convention (e.g. Γ_i j^k_k≡∑_k Γ_i j^k_k). Using the properties above, we could write the covariant derivative in terms of Christoffel symbols: ∇_𝐯𝐮=∂𝐮/∂𝐯+u^k u^jΓ_k j^i𝐞_i, where 𝐮=u^i_i. The covariant derivative can be pushed forward to other coordinate charts using the coordinate transformation ϕ = ψ_β∘ψ_α^-1, which maps from chart ψ_α to chart ψ_β. This induces a relation between Christoffel symbols in different coordinate charts: Γ_i j^k=T_ℓ^k(S_j^m S_i^nΓ̅_n m^ℓ+∂_j S_i^ℓ), with T=J_ϕ and S=J_ϕ^-1=[J_ϕ]^-1 the (inverse) Jacobian of ϕ and Γ_i j^k and Γ̅_i j^k the Christoffel symbols in the coordinate charts ψ_α and ψ_β respectively. §.§ Tubular neighborhood We construct a coordinate chart, ψ_, that covers the tubular neighborhood of a generic curve :ℝ→ℝ^2:s↦(x,y) by using the tangent and normal vectors, _=^'(s)/|^'(s)|and_⊥= ([ 0 1; -1 0 ]) _, as basis vectors. The coordinate lines are parallel and normal to with coordinates s and h representing the parallel and transversal direction respectively. 
The coordinate transformation form ψ_ to the Cartesian coordinates is given by ϕ(s, h)=+h _⊥(s). §.§ Energy conserving connection Geodesics are generally defined as parallel transport of velocity vectors in their own direction <cit.>, = x+Γ^i_k jx^kx^j𝐞_i= 0, analogously to Eq. (<ref>) with correction term C(, , )=Γ^i_k jx^kx^j𝐞_i. We derive our affine connection (i.e. derive the Christoffel symbols) such that geodesics respect the physical properties: * geodesic curves conserve kinetic energy; * geodesic curves initially parallel to remain parallel to at all times, as explained in Sect. <ref>. These properties fully describe geodesics in flat space nearby straight paths as =0 (Γ^i_k j=0 ∀_i,j,k). However, this simple connection does not hold for curved paths or curvilinear coordinates. Energy conservation is ensured by conserving the physical velocity, ‖ v ‖^2 = g_ijq̇^̇i̇q̇^̇j̇, where trajectory (q^1(t), q^2(t)) is in generic coordinate chart ψ_q with metric g. By defining the metric tensor in the Cartesian coordinate chart as g_ij=δ_ij, we define the physical velocity to be the Euclidean velocity (|| v||^2=ẋ^2+ẏ^2). We define coordinate chart ψ_ as in Appendix <ref>. Then the metric in ψ_ is given by <cit.>: ĝ_k q=g_i j∂ϕ^i/∂ s^k∂ϕ^j/∂ s^q, where ϕ denotes the coordinate transformation to the Cartesian coordinates (Eq. (<ref>)) and (s^1,s^2)=(s,h). Note that ĝ_sh=ĝ_hs=⟨_,_⊥⟩=0 since _⊥_⊥. Furthermore, ĝ_hh=‖_⊥‖^2=1 by definition. Therefore the metric in coordinate chart ψ_ can be written as ĝ_ij = ([ (∂_s ϕ_x)^2+(∂_s ϕ_y)^2 0; 0 1 ]). Because the metric is diagonal, the physical velocity can be separated into two orthogonal parts, v^ = √(ĝ_ss)ṡ and v^⊥ = √(ĝ_hh)ḣ, which are the longitudinal and transversal velocity components respectively. To meet the properties, both velocity components need to be conserved, meaning {[ d/d tv^=d/d t(√(ĝ_ss)ṡ)=0; d/d tv^⊥=ḧ=0 ]., which can be elaborated to {[ s̈ + ∂_s ĝ_ss/2ĝ_ssṡ^2+∂_h ĝ_ss/2ĝ_ssḣṡ=0; ḧ=0 ]. Using Eq. (<ref>), the Christoffel symbols in ψ_ can be determined such that Eq. (<ref>) is respected: Γ̅_i j^s=[[ ∂_s ĝ_ss/2ĝ_ss ∂_h ĝ_ss/2ĝ_ss; ∂_h ĝ_ss/2ĝ_ss 0 ]], and Γ̅_i j^h=0. Note that we can obtain the Christoffel symbols in the Cartesian coordinate chart by applying Eq. (<ref>). § NUMERICAL SIMULATIONS We integrate Eq (<ref>) by using the Runge-Kutta SRI2 algorithm <cit.> (via the PyPI library  <cit.>). We choose a discretization step size of 0.1 s, similar to the sampling frequency of our measurements. We initialize our simulations at the beginning of our preferred path (s) with s(t=0)=0 and h(0), v^⊥(0) and (0) distributed according to the Fokker-Planck equilibrium distributions (see Appendix <ref>). The Christoffel symbols, needed every time step during the integration of Eq. (<ref>), are obtained by the computational steps shown in Fig. <ref>. For step (1), the computation of the tubular coordinates, we use the two-dimensional Newton-Raphson method <cit.>. This iterative method solves equations of the form 𝐟(s)=0. If s_0 is an approximate solution, then the sequence s_p+1=s_p-J^-1(s_p)𝐟(s_p) for p=1,2,... and J Jacobian of 𝐟, converges to a solution. Given , the tubular coordinates are represented by the roots of function 𝐟(𝐬)=ϕ(𝐬)-x̂. The roots of 𝐟 are estimated with the Newton-Raphson method with the tubular coordinates of the previous time step as an approximated solution. 
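A compact illustration of this projection step is given below: a fixed, small number of Newton-Raphson iterations on f(s, h) = ϕ(s, h) - x̂, warm-started from the tubular coordinates of the previous time step. The mapping ϕ and its Jacobian are supplied by the caller (here the Jacobian is simply approximated by finite differences); this is a sketch with illustrative names, not the authors' implementation.

import numpy as np

def jac_phi(phi, s, h, eps=1e-6):
    """Finite-difference 2x2 Jacobian of phi(s, h) -> (x, y)."""
    f0 = phi(s, h)
    return np.column_stack([(phi(s + eps, h) - f0) / eps,
                            (phi(s, h + eps) - f0) / eps])

def tubular_newton(phi, x_hat, s0, h0, iterations=2):
    """Solve phi(s, h) = x_hat starting from the previous-step coordinates."""
    s, h = s0, h0
    for _ in range(iterations):          # a couple of iterations suffice here
        residual = phi(s, h) - x_hat
        step = np.linalg.solve(jac_phi(phi, s, h), residual)
        s, h = s - step[0], h - step[1]
    return s, h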
With our typical simulation duration and discretization step size, two iterations of the Newton-Raphson method give a sufficient accurate estimation of coordinates s and h. In step (2), we use Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>) to calculate the metric in tubular coordinates, ĝ_ss, and the derivatives with respect to s and h (∂_s ĝ_ss and ∂_h ĝ_ss). We compute the Christoffel symbols in the tubular coordinate chart in step (3) using Eq. (<ref>)-(<ref>). Finally, in step (4), we push the Christoffel symbols to the Cartesian coordinate chart using Eq. (<ref>). § MODEL CALIBRATION The model is calibrated by estimating the model parameters, {α, β, μ, , δ, σ}. We use the Amsterdam train station measurements to estimate the parameters. The ‘straight-path velocity’ is estimated by linearly extrapolating the v^ - k relation towards k=0. For the Amsterdam train station this results in =1.33 ms^-1. The body size radius, δ, represent the slope of the fundamental diagram (Fig. <ref>). The estimation of δ is obtained by a linear fit: δ=0.19 m. The remaining four parameters are estimated by applying fits to empirical Langevin potentials and a correlation function. The first fit is applied to the transversal velocity potential. In the stationary regime, the model produces probability distribution of the transversal velocity and lateral deviation from the preferred path, (h,v^⊥), according the well-known Fokker-Planck equation <cit.> with solutions (h, v^⊥)=𝒩exp[-2 μ/σ^2(v^⊥)^2-4 βμ/σ^2 h^2] where 𝒩 denotes a normalization constant. A Langevin potential can be constructed according to Φ(·)=-ln((·)). The analytical potentials of the transversal dynamics should agree with the empiric potentials such that -ln(_exp(v^⊥))≈2μ/σ^2(v^⊥)^2 + K_1 and -ln(_exp(h))≈4βμ/σ^2h^2 + K_2. The constants K_1 and K_2 are normalization constants and _exp(·) denotes the empiric probability distribution function. The fitting can be observed in Fig. <ref>-<ref> where the resulting estimated ratios are given by 2μ/σ^2≈ 21.77 and 4βμ/σ^2≈ 51.08. The same can be done for the longitudinal dynamics. In the stationary regime, the probability of the shifted longitudinal velocity is distributed according to ( )=𝒩exp[2 α/σ^2()^2] where 𝒩 is a normalization constant. The ratio 2α/σ^2 is compared to the empirical distribution function of the shifted longitudinal velocity according to -ln(_exp( ))≈2α/σ^2()^2 +K_3. Constant K_3 again represents normalization. The fit (Fig. <ref>) results in the estimation of the ratio: 2α/σ^2≈ 14.36. To complete the parameter estimation, a time correlation function of the shifted longitudinal velocity is used. Using Eq. (<ref>) and the definition of , the deterministic shifted longitudinal dynamics can be described by d/dt=-2α. Therefore, the time correlation of should decay as exp(-2α t). An estimated value of α follows from the fit (Fig. <ref>): -2α≈ -0.51 The estimates obtained by the fitted values result in the parameter values reported in <ref>. To determine uncertainty intervals for our estimates, we repeat the fitting procedure five times using randomly selected, equally-sized partitions of the data. We then use the fitted values from each of the five partitions to estimate the minimum and maximum values for each parameter. We set these as the lower and upper bounds of the respective intervals. apsrev4-1
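For illustration, the four scale parameters can be recovered from the fitted quantities above with a few lines of algebra. The sketch below assumes the three parabola coefficients and the correlation decay rate quoted in this appendix; it is a worked example of the inversion, not the authors' calibration script, and the resulting numbers are only indicative.

import numpy as np

ratio_vperp = 21.77   # fit of -ln P(v_perp)     ~ (2 mu / sigma^2) v_perp^2
ratio_h     = 51.08   # fit of -ln P(h)          ~ (4 beta mu / sigma^2) h^2
ratio_vpar  = 14.36   # fit of -ln P(v_shifted)  ~ (2 alpha / sigma^2) v^2
decay_rate  = 0.51    # fit of the v_shifted autocorrelation ~ exp(-2 alpha t)

alpha = decay_rate / 2.0
sigma = np.sqrt(2.0 * alpha / ratio_vpar)
mu    = ratio_vperp * sigma**2 / 2.0
beta  = ratio_h * sigma**2 / (4.0 * mu)
print(dict(alpha=alpha, beta=beta, mu=mu, sigma=sigma))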
http://arxiv.org/abs/2306.10832v2
20230619103432
Pneumatic bellows actuated parallel platform control with adjustable stiffness using a hybrid feed-forward and variable gain I-controller
[ "Martin Varga", "Ivan Virgala", "Michal Kelemen", "Lubica Mikova", "Zdenko Bobovsky", "Peter Jan Sincak", "Tomas Merva" ]
cs.RO
[ "cs.RO" ]
1]Martin Varga 1]Ivan Virgala 1]Michal Kelemen 1]Ľubica Miková 2]Zdenko Bobovský 1]Peter Ján Sinčák 1]Tomáš Merva [1]Faculty of Mechanical Engineering, Technical University of Košice, Slovakia [2]Faculty of Mechanical Engineering, Technical University of Ostrava, Czech Republic Pneumatic bellows actuated parallel platform control with adjustable stiffness using a hybrid feed-forward and variable gain I-controller Received July 31, 2023 Redundant cascade manipulators actuated by pneumatic bellows actuators are passively compliant, rugged, and dexterous, qualities that make them exceptionally well suited for applications in agriculture. Unfortunately, bellows actuators are notoriously difficult to position precisely. This paper presents a novel algorithm for controlling a parallel platform actuated by pneumatic bellows, which serves as one module of a cascade manipulator. The algorithm combines a feed-forward controller and a variable gain I-controller. The feed-forward controller was designed using experimental data and two regression steps to create a mathematical representation of the data. The gain of the I-controller depends linearly on the total reference error, which allows the I-controller to work in concert with the feed-forward part of the controller. The presented algorithm was experimentally verified and its performance was compared with two controllers, an ANFIS controller and a constant gain PID controller, with satisfactory results. The controller was also tested under dynamic loading conditions, showing promising results. pneumatic bellows, parallel platform, feed-forward controller, variable PID § INTRODUCTION Industrial robots are an indispensable part of the manufacturing process in many parts of the industry, where their traits, i.e., precision, speed, and the ability to work basically nonstop, help increase productivity and decrease cost. It is therefore understandable that there is a strong incentive to use industrial robots in other fields, like agriculture and medicine, to name a few. The most common industrial robots are serial link 6R robots, 2R1T SCARA robots or parallel Delta robots driven by, most commonly, electric actuators, or in some cases hydraulic actuators <cit.>. These industrial robots have been developed for many decades, their design is standardized, and their mathematical description and control design is fairly well researched. Unfortunately, these robots lack some key features needed in the aforementioned new fields of application, for example compliance, agility, and complex modes of motion <cit.>. These requirements are fulfilled by newly emerging classes of robots, i.e., redundant cascade and continuum robots <cit.>. Redundant robots are all those that have more degrees of freedom than necessary to perform a certain task. Development of these robots gathered pace from the year 2000 onward. Redundant robots in general, but especially cascade and continuum robots, have unique characteristics, for example, according to <cit.> and <cit.>, compliance, a good reach-to-weight ratio, modularity, and others. These characteristics arise from their specific design.
In general, continuum and cascade robots consist of several in series connected parallel modules that, if underactuated, form a continuum robot and if fully actuated form a cascade robot. To describe the motion of a redundant robot and design a control algorithm, it is first necessary to focus on its individual modules and their properties. The chosen actuator type influences the achievable properties of a module. Electrical linear servo motors as used by <cit.> within the structure of a module or outside as done by <cit.> provide high force, stiffness and are usually equipped with position sensors simplifying control. Hydraulic actuators as can be seen in <cit.> provide high forces and can be precisely positioned, jet are slow. Nonstandard actuators for such applications can also be used, for example SMA springs as seen in the work of <cit.> or dielectric materials as described by <cit.>. Pneumatic actuators are a popular class of actuators that are used in these applications. They provide high power density, relatively low weight, can be easily manufactured to custom specification, as described in <cit.> and <cit.>, or bought off the shelve in a variety of types and sizes <cit.>, <cit.>, <cit.> and the compressibility of air give them a natural level of compliance. This ability makes them the actuator of choice for medical applications, like rehabilitation equipment <cit.>, in flexible endoscopes <cit.>, <cit.>, in agriculture <cit.>, as parts of mobile robots <cit.>, <cit.>, as the actuator for a high precision positioning system <cit.> or a stiffness regulating element in hybrid actuation schemes for continuum tendon driven robots as presented by paper <cit.>. Control of a parallel platform module actuated by three or four pneumatic actuators is a challenging task. One approach to control of these modules is to use a feed-forward controller alone or in combination with other types. <cit.> presented a modelling framework to design a model from which a feed-forward controller with satisfactory performance was developed. The work <cit.> developed a custom bellows type actuator and applied it in a parallel platform with three degrees of freedom. The control algorithm was a feed-forward controller based on experimental mapping between pose, external forces and input pressure. In <cit.> authors used a feed-forward controller based on a mathematical model in combination with a variable P gain PI controller to combat the effect of hysteresis in the system. To positioning of a soft robot, the paper <cit.> uses a model based controller using both feed-forward and feedback components with a structure similar to a PD controller. A planar platform actuated by pneumatic muscles is controlled by <cit.> using three fuzzy controllers synchronized through an ANFIS (Adaptive neuro fuzzy inference system) based controller. The paper <cit.> applied a simple constant gain PID controller for positioning of a parallel platform actuated by four pneumatic muscles. <cit.> have demonstrated a nonlinear SMC (sliding mode controller) based on a PID type sliding surface combined with a lumped element model-based controllers to control a soft pneumatically actuated robot. In the paper <cit.> authors uses a fourier series-based adaptive sliding-mode controller with H∞ tracking performance to solve the high non-linearity and time-varying problem for a parallel platform actuated by rod-less pneumatic cylinders. 
While <cit.> dealing with a tendon driven redundant manipulator proposes a population-based model-free control method that could be applied to pneumatically actuated manipulators as well. Based on previous works we have set up to develop the mechanical design and control system for a rugged redundant cascade manipulator driven by pneumatic bellows intended for both research and agricultural use. The most difficult part of the development is the controller design for one separate module, as noted in previously mentioned articles, this task is notoriously difficult due to the inherent nonlinear hysteresis behaviour of the chosen actuator type and MIMO system as a whole. A similar route to previous research in pneumatic parallel platform module control was taken by relying on experimental data, but this concept was expanded upon by applying two regression steps to the data to get a mathematical module model, which is taking the place of a feed-forward controller. This feed-forward controller was later supplemented by a variable gain I-controller that facilitates disturbance rejection. The controller design allows for a on demand change in stiffness of the system during operation and lends itself well to be a part of complete control system for the whole cascade manipulator. Based on previous papers survey, the novelty of the paper can be defined as follows: * Development of a novel hybrid FFvI controller * Establishment of controller design methodology for cascade redundant robots * Experimental positioning analyses under dynamic disturbance effects Proposed FFvI controller features are: * real-time stiffness change during operation * simple implementation as a lower level part of a larger control algorithm * easy and fast automatic controller update with the use of real data during use * the controller is fast and has easily predictable behaviour * computationally non demanding This paper focuses on the development of a novel controller for a 2 DOF pneumatic parallel platform that represents one module of the pneumatic manipulator PneuTrunk (see Fig. <ref>), developed by our Cognitics Lab. The first part of the paper presents the design and kinematic model of one module of PneuTrunk. In the second part a feed-forward controller based on an experimentally identified system with stiffness regulation capabilities combined with a variable gain I controller for disturbance rejection is presented. In the third part, the proposed controller is compared with a simple PID controller and ANFIS controller. § DESIGN OF PNEUTRUNK MODULE As can be seen in Fig. <ref>, the redundant manipulator PneuTrunk is a cascade type manipulator constructed out of parallel platform modules ordered in series. The number of modules depends on the required degrees of freedom. One module, shown in Fig. <ref>, consists of two duraluminium plates connected by an universal joint and three evenly spaced pneumatic bellows. The tilt angles between the top and the bottom plate are measured by two potentiometric rotation sensors placed in such a way that the axis of the universal joint are colinear with both axes of the sensors. The pneumatic actuators are of the shelve Dunlop 2 3/4 x 3 bellows. The pressure in the pneumatic bellows is controlled by three separate electropneumatic converters SMC ITV1050-31F20. All tubing is of inner diameter 6 mm to eliminate the effects of tubing diameter on the dynamic behavior of the bellows. The module is controlled by a B&R PLC type 4PPC70-0702-20B. 
The maximum operating pressure for one bellow is 7 bar, but to prevent damage to the system and especially the universal joint, the allowable pressure range is set to 0-5 bar. The tilt around the x-axis and y-axis at this pressure range is written in Tab. <ref>. The x-axis is oriented towards the center of one bellows. It is expected that the module will be driven only by positive pressure. This has important implications when designing the control algorithm for such a device. While extension of one bellows is facilitated by simply supplying pressure, compression is achieved by applying external forces coming predominantly from the extension of one, or both remaining bellows. This fact is also the reason why the minimum number of pneumatic bellows is three. Coincidentally, because the module only has two degrees of freedom, this design is inherently overactuated. This causes one posture of the module to be reachable by an infinite number of bellows input pressure combinations and, in theory, giving the system the ability to change its stiffness without changing the posture. This was taken into account when designing the control algorithm for one module. The flow of information and energy is visualized in Fig. <ref>. The whole system is simple and contains only necessary components. It could be argued that adding a center closed 2/2 valve between each bellows and its corresponding electropneumatic converter could give the system the ability to pneumatically lock the bellows extension, improving the systems positioning performance. Unfortunately, this would also complicate the system and its regulation, introducing other challenges and distracting from the aim of this paper. § MATHEMATICAL MODELING §.§ Mathematical model of one module An important step before attempting to design a controller for a module is, first to create an inverse kinematic model of the module. In other words, finding a way to map the desired output parameters, here the tilting angles, to the input parameters, in this case the extension/contraction of the bellows, see Fig. <ref>. Inspiration is taken from the work of <cit.>, where bellows type actuators are represented by two elements connected by a translational joints and connected to the bottom and top plate by universal joints. This approach greatly simplifies kinematic modelling. For the purpose of modelling the dynamics of an actuator, the model needs to be augmented by adding torque on both universal joints that represents resistance to bending of the actuator. For our design, there exists a closed form solution to inverse kinematics in the form of l_i = |a_i - H_tb_i(α_x, α_y)| H_t=T_z34×R_x23×R_y12×T_z01 where i∈{1, 2, 3} denotes the bellows, a_i∈ℝ^3 is coordinates of the center of the bellows on the bottom plate, b_i∈ℝ^3 is coordinates of the center of the bellows on the top plate, l_i is distance between point a_i and b_i. Matrix H_t∈ℝ^4 × 4 represents the transformation matrix between fixed coordinate frame x_ay_az_a and top plate coordinate frame x_by_bz_b. T_z01, T_z34, R_x23, R_y12∈ℝ^4 × 4 where T_z01 is the translation matrix between the base frame and a parallel but offset frame x'y'z', R_y12 is the rotation matrix rotating frame x'y'z' around its y axis by α_y into x”y”z”, R_x23 is the rotation matrix rotating frame x”y”z” around its x axis by α_x into x”'y”'z”' and T_z34 is the translation matrix between the frame x”'y”'z”' and a parallel but offset top plate frame x_by_bz_b, see Fig. <ref>. 
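A minimal sketch of this closed-form inverse kinematics is given below; the helper functions, argument names, and the plate offsets z01 and z34 are illustrative assumptions consistent with the transformation chain H_t = T_z34 R_x23 R_y12 T_z01 above, not code by the authors.

import numpy as np

def bellows_lengths(alpha_x, alpha_y, a_pts, b_pts, z01, z34):
    # l_i = |a_i - H_t(alpha_x, alpha_y) b_i| for the three bellows (i = 1, 2, 3);
    # a_pts and b_pts are lists of 3-vectors on the bottom and top plate, respectively.
    def T(dz):                                   # translation along z
        M = np.eye(4); M[2, 3] = dz; return M
    def Rx(t):                                   # rotation about the x axis
        c, s = np.cos(t), np.sin(t)
        return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])
    def Ry(t):                                   # rotation about the y axis
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])
    H_t = T(z34) @ Rx(alpha_x) @ Ry(alpha_y) @ T(z01)
    return [np.linalg.norm(np.asarray(a_i) - (H_t @ np.append(b_i, 1.0))[:3])
            for a_i, b_i in zip(a_pts, b_pts)]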
Angle α_x is tilt angle of the top plate around axis x and α_y is tilt angle of top plate around axis y. The role of the established kinematic model in relation to the tilt control of the module described in later chapters is a central one. The end goal of the control is to control the tilt, but this is achieved indirectly by controlling the extension and total pressure in the respective bellows. The presented inverse kinematic model converts the reference tilt and actual tilt sensed by rotation sensors to the required extension/contraction and actual deformation of the bellows. §.§ Model of pneumatic bellows A pneumatic bellows is a linear pneumatic actuator consisting of a bellows type body and mounting flanges whose free length is dependent on the difference between the ambient pressure and the pressure inside the bellows. From a physical point of view, the pneumatic bellows is a pneumatic spring with variable equilibrium length, dependent on the passive properties of the bellows and the internal pressure within the bellows Fig. <ref>. The bellows can be modelled using the standard mass, spring, damper model represented by eq. <ref>, see Fig. <ref>. The dominant effect on the system has spring force and it in turn is dependent on the pneumatic spring stiffness and the equilibrium height of the bellows at the given internal pressure. The eq. <ref> shows this relationship. The bellow without being pressurized behaves like a spring whose stiffness depends on the shape of the bellow, the current material properties that are also dependent on other factors like ambient temperature. Therefore, if the bellows is deformed a spring force appears in the direction of free height. The equilibrium length is the length of the bellows at which the deformation force from the internal pressure is at equilibrium with the spring force. It can be seen that, the equilibrium height is a nonlinear parameter that depends on multiple other coupled parameters. Therefore, instead of a physical modelling approach, the model for the equilibrium height was derived from experimental data by measuring the equilibrium height at different internal pressures, see Fig. <ref>. The data was then interpolated by a third order polynomial function resulting in eq. <ref>. F_M + F_b + F_k(k_p,P_b) + F_o = 0 F_k = k_p(z_e(F_k, P_b) - z_r) z_e = 0.45P_b^3 + 5.6P_b^2 + 23P_b +1200 where F_M is inertial force, F_b is damping force, F_k is pneumatic spring force, F_o is outside disturbance force, k_p is pneumatic spring stiffness, P_b is internal pressure, z_e is equilibrium length, z_r is actual length and F_m is material spring force. To create a simulation model of a pneumatic spring the stiffness of the spring is necessary. This parameter can be derived from the eq. <ref>, <ref> and <ref>. k_p = dF_k/dz where F_k = (P_0 - P_A)A Assuming that the change in bellows internal volume is polytropic we get P_0V^2 = constant Combining the above equations k_p = P_0nA^2/V + P_BdA/dz where V is total volume of air within the bellows and corresponding pneumatic tube,P_0 is the absolute pressure inside the bellows,A is an effective surface of the bellow, n is polytropic constant (for this process n = 1). According to eq. <ref> - <ref> a simulation model in MATLAB was developed. The results of this model were compared with experimental data, where the bellow was pressurized to different pressures, a positive extension force was applied to the bellow and the total extension was measured. The results are in Fig. <ref>. 
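The bellows relations above can be summarized in a small helper; the naming and the assumption of consistent units (pressure in bar and length in mm for the empirical fit, consistent quantities for the stiffness formula) are ours.

def equilibrium_length(P_b):
    # Empirical fit z_e(P_b) = 0.45*P_b**3 + 5.6*P_b**2 + 23*P_b + 1200 (P_b in bar).
    return 0.45 * P_b**3 + 5.6 * P_b**2 + 23.0 * P_b + 1200.0

def pneumatic_stiffness(P0, A, V, P_b, dA_dz, n=1.0):
    # Pneumatic spring stiffness k_p = P0*n*A**2/V + P_b*dA/dz with polytropic constant n.
    return P0 * n * A**2 / V + P_b * dA_dz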
The model of pneumatic bellow gives satisfactory results. The maximum deviation for the pressures 1 bar to the pressure 6 bar does not exceed 1 mm while for pressure 0 bar the deviation is nearly 4 mm, which points to either to a measurement error or to some unknown effect that is much less pronounced in higher pressures. This model represents the static behavior of an air bellow performing linear deformation. It does not capture its bending behavior or its dynamics. To be able to design a controller for one module of the manipulator PneuTrunk, it is necessary to also have a basic understanding of the dynamic behavior of one bellow. This can be seen in the step response of one bellow to an input pressure step of 5 bar, shown in Fig. <ref>. There is no discernible overshoot and the rise time from 0 s to maximum value is about 0.4 s. This means that the system is overdamped and 0.4 s represents the maximum possible regulation speed. It is also important to note the behavior of the bellows when going back from 5 bar to 0 bar, where the actuator is passively returning to the original length. Here, not even after 8 s does the actuator reach the original length. One important property of a pneumatic bellow, as noted by <cit.>, is its hysteresis behavior, where inflating and deflating a bellow results in a different free length at zero internal pressure. In our experiments, this behavior resulted in a deviation of ± 2mm. To combat this effect, the bellow was forced by an external stop to always be extended at zero internal pressure securing a stable free length. Creating a comprehensive bellow model falls outside the scope of this paper and will be a topic of further research. Non the less, it gives important insights into the behavior of one bellow regarding controller development. § CONTROLLER DESIGN Controlling the posture of one module requires the combined effort of all three of its bellows actuators. The presented controller is designed to deal both with the non-linearity of the actuators and the over-actuation of the system. We define the controller consisting of two parts, a feed-forward controller and a variable gain I-controller (FFvI). The feed-forward control is widely used in research for these applications, for example <cit.> and <cit.>. It uses an inverse model of a controlled system without a feedback loop. For this application, it will provide the rough estimate input. This leverages the lack of overshoot of the actuators even at large input pressure steps, as can be seen in Fig. <ref>, it maximizes the controller speed, and it is generally easy to design and implement. On the other hand, because of its lack of a feedback loop, as seen in <cit.>, it is unable to compensate for disturbance forces and system-model deviation. These are the reasons why the feed-forward controller is supplemented by a variable gain I-controller designed to complement the feed-forward controller and dynamically react to any differences between the reference values and actual values of the controlled variables. §.§ Feed-forward controller design The feed-forward controller developed in this paper was designed using experimental data. Various pressure combinations were supplied to each bellows and the resulting tilt was measured. The supplied pressures ranged from 0 bar to 5 bar with a 0.2 bar increment. This results in 18275 different pressure combinations and their corresponding tilting angles. The module workspace can be seen on Fig. <ref>. The pointcloud matrix structure is organized as seen in eq. 
<ref> P_PC = [α_x, α_y, P_1C, P_2C, P_3C] where α_x and α_y are the measured stable tilt angles which are the result of corresponding input pressures for the respective bellows P_1C, P_2C and P_3C. Because of over actuation and the parallel nature of the module design, one orientation of the module is achievable by an infinite combination of input pressures. This can be seen on Fig. <ref>. Here the x-axis and y-axis are the tilt around the respective axis in degrees and the z-axis is the aggregate pressure, which is the sum of all bellows input pressures in bars. A higher aggregate pressure corresponds to a higher mechanical stiffness of the system. The control algorithm needs to take this into account. The experimentally measured data represent points in aggregate pressure augmented workspace. To find the inlet pressures from the measured point-cloud, Algorithm 1 was applied. Region- matrix of measured input pressures and corresponding tilts that will be used to calculate the ended input pressures to reach desired tilt; anT- maximum Euclidean distance of a measured point from the reference point in the augmented workspace in the α_x α_y plane to be eligible for inclusion in Region; aggrT- maximum Euclidean distance of a measured point from the reference point in the augmented workspace along the aggregate pressure axis to be eligible for inclusion in Region; incrementAngle- increment to expand anT in case the previous search yealdet empty Region; incrementPressure- increment to expand aggrT in case the previous search yealdet empty Region Alg. <ref> will already supply a set of usable input pressures. Unfortunately, the results are influenced by errors in measurement and effects of hysteresis. In a smooth trajectory tracking task this can produce erratic, non-smooth input pressures. To solve these issues, the above Alg. <ref> was supplied with a set of reference angles ranging from -10° to 10° with an increment of 0.05° for both α_x and α_y and a constant aggregate pressure P_agr. The result are three 3D meshes representing the relationship between the reference angles and the three input pressures separately. These meshes were then separately interpolated as a surface using a second order x and second order y surface plot. The result for input pressure 1 can be seen in Fig. <ref> and the equation describing this surface is eq. <ref> P_1 = 2.964 - 0.1113α_x + 0.000344α_y + 0.000726α_x^2 + 0.00407α_xα_y - 0.00123α_y^2 This process was repeated for aggregate pressures between 3.6 bar and 15 bar with increment of 0.6 bar for all three bellows. The result is a set of smooth surfaces representing the complete augmented workspace, see Fig. <ref> for bellows 1. The coefficients describing all the surfaces can be further interpolated to get six equations approximating the complete aggregate pressure augmented workspace. The coefficients were interpolated by a 7-th order polynomial. Fig. <ref> compares the output of Alg. <ref> and the interpolated feed-forward controller. The output of Alg. <ref> follows the smooth reference signal, but it shows non-smooth, erratic step behavior, while the output of the interpolated feed-forward controller is smooth. §.§ Variable gain I-controller design The goal of adding a I-part to the controller design is to facilitate disturbance rejection by integrating the reference error over time and scaling it by using a gain. In a constant gain I-controller the gain is tuned to and fixed at a value dependent on the controlled plant. 
It is simple, easy to implement and does not necessarily require the plant model for correct design and unlike a proportional controller it allows for complete error compensation in a step response. The problem with using a constant gain I-controller is as follows. The feed-forward controller can immediately supply a rough estimate for input pressures, but it is never clear before the movement ends how good this estimation is. Therefore, if the estimate is optimal, a constant gain I-controller would cause overshoot, requiring the controller to be slow. On the other hand, if the estimation is sub-optimal, an aggressive constant gain I-controller is needed to quickly compensate the error. This contradiction in requirements can be solved by applying a variable gain I-controller, with the gain dependent on the error, see Eq. <ref> u(t) = K_i(e(t)) ∫_0^t e(t) dt where K_i(e(t)) is controller gain, e(t) is tilt error, t is time and u(t) is controller output. The relationship between I-controller gain and tilt error can be described by different types of smooth monotonic functions, like linear, exponential etc. For this controller, as a proof of concept a linear relationship was chosen, see Eq. <ref>. K_i = ae_t(t) + b e_t(t) = √(e_x^2(t) + e_y^2(t)) where e_t(t) is total tilt error, e_x(t) is tilt error around x-axis and e_y(t) is tilt error around y-axis. Parameters a and b were calculated from experimental data, where K_i = 350 was found to work well for small total error values below 1° and K_i = 75 was found to not cause significant overshoot at error values bellow 5°. This relationship is described by eq. <ref> and can be seen on Fig. <ref> K_i = -68.75e_t + 418.75 The comparison between the performance of a constant gain I-controller and our variable gain controller in combination with our feed-forward controller is depicted in Fig. <ref>. One can see, that in cases a) and b) a gain of 350 results in a significant overshoot, but for small changes in tilt and a large residual error after feed-forward controller action, like in case c), the controller is fast and has acceptable overshoot. On the other hand, a gain of 75 has no overshoot in any case but is slow and has best performance if the residual error from feed-forward controller action is small, like in case b). The performance of both fixed gain controllers in case c) is not satisfactory. Our variable gain controller performs satisfactory in all cases. As mentioned above, for our controller, we have chosen a linear relationship to govern the variable gain. A linear relationship is simple, is easy to tune and should result in predictable behaviour of the module. Nevertheless, other types of functions like exponential, polynomial or even logarithmic can also be applied and could result in better controller performance than with a linear equation. The purpose of this article is to show the viability of the by us presented controller and using a linear relationship for this purpose is sufficient. The complexity of comparing different types of governing equations and the required tuning methods warrant its own article. § EXPERIMENTAL VERIFICATION OF FFVI CONTROLLER This section will focus on experimental comparison between FFvI controller and controllers designed according to established algorithms. First of all, the performance of the feed-forward part of the FFvI algorithm will be compared with a ANFIS controller designed using the same dataset as our feed-forward controller. 
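Before turning to these comparisons, the control law developed in the previous section can be recapitulated in a compact sketch combining the interpolated feed-forward surface for bellows 1 (Eq. for P_1) with the variable gain I-controller; the sampling time, the clamping of the gain at large errors, and the way the I-correction is distributed over the three bellows pressures are not specified above and are therefore left as assumptions.

import numpy as np

def feedforward_P1(alpha_x_ref, alpha_y_ref):
    # Interpolated feed-forward surface for bellows 1 (Eq. for P_1) at the corresponding aggregate pressure.
    return (2.964 - 0.1113 * alpha_x_ref + 0.000344 * alpha_y_ref
            + 0.000726 * alpha_x_ref**2 + 0.00407 * alpha_x_ref * alpha_y_ref
            - 0.00123 * alpha_y_ref**2)

class VariableGainI:
    # u(t) = K_i(e_t) * integral(e), with K_i = -68.75*e_t + 418.75 and e_t = sqrt(e_x**2 + e_y**2).
    def __init__(self, a=-68.75, b=418.75, dt=0.1, K_min=0.0):
        self.a, self.b, self.dt, self.K_min = a, b, dt, K_min
        self.integral = np.zeros(2)                     # integrated tilt errors around x and y

    def correction(self, e_x, e_y):
        e_t = np.hypot(e_x, e_y)                        # total tilt error
        K_i = max(self.a * e_t + self.b, self.K_min)    # linear gain law, clamped (our assumption)
        self.integral += self.dt * np.array([e_x, e_y])
        return K_i * self.integral                      # correction added to the feed-forward output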
In the second part the complete FFvI controller will be compared to a constant gain PID controller. The ANFIS controller was designed using the MATLAB neuro-fuzzy designer. Three controllers for each bellows separately were created. The input data are tilt angles α_x and α_y and the output is the corresponding pressure. The teaching data are picked from the same data, that is used to design the feed-forward controller, but is limited to having an aggregate pressure of 9±1.5 bar. The feed-forward controller is also set to the same level of aggregate pressure. This will decrease the teaching time and ensure more reliable results. The neural network used is a Sugeno type network <cit.> with 10 linear generalized bell-shaped membership functions for each input. The minimum achieved teaching error is in Tab. <ref>. Both controllers were supplied with the sets of reference tilts and their outputs were compared to input pressures creating the reference tilts. The resulting comparison between the outputs of the feed-forward controller and the ANFIS controller are shown in Fig. <ref>. The minimum error and mean error are in Tab. <ref>. The ANFIS controller has a consistently lower error and the mean error is also lower. Newer the less, as can be seen from the above results, both controllers perform well, with the maximum error of the feed-forward controller not exceeding 0.15°. The magnitude of this error is still well within what would be considerate acceptable for this application. Considering that the feed-forward controller can approximate the whole pressure augmented workspace while the ANFIS controller is specifically designed to work in the tested aggregate pressure region, these results are encouraging. When assessing the complete FFvI controller, one must keep in mind that the physical module is meant to be part of a robot. Therefore, we have specified that the overshoot, when performing a movement, should not exceed 2° on α_x and α_y separately. This number, although arbitrary, should be a good controller benchmark for this case and the controller is expected to be further tuned after being installed in the complete robot control system. As a comparison to our FFvI controller a PID-based controller was used. The controller consists of three identical constant gain PID controller controlling all the bellows separately. The PID controllers were tuned using the P-I-D tuning approach described in <cit.>. The relevant constants can be seen in Tab. <ref>. Although the values of the tuned constants are not optimal, the authors believe that they are close enough to the optimal values combining adequate speed while not exceeding our criterion of 2° overshoot on α_x and α_y. The performance of both controllers can be seen on Fig. <ref>. The reference signal is sequentially alternating between α_xref =8^∘, α_yref =-10^∘, α_xref = -8^∘ and α_yref = 10^∘. These values were chosen because all bellows need to be engaged simultaneously to different degrees, they represent values in the middle part of the module workspace and it is expected that most movement will be in this area and they also demonstrate the asymmetric behavior of the platform. The performance of both controllers types is shown in Tab. <ref>. It needs to be noted that, this comparison deviates from a standard comparison of step responses by not having the platform at zero tilt and testing a combined rotation around α_x and α_y. 
Nevertheless, this comparison is much closer to comparing the controller performance under more realistic conditions. It can be seen, that both controllers fulfill the condition of not having more than 2^∘ overshoot. From Tab. <ref> it can be seen, that the FFvI controller is faster in all categories and has more overshoot only on when going from -8^∘→ 8^∘. The second comparison between both controllers is in following a sine reference signal with parameters written in Tab. <ref>, as can be seen in Fig. <ref>. Here the FFvI controller can tightly and smoothly follow the sinus reference signal while overshooting at the maximum and minimum of the reference signal. The overshoot, again, does not exceed 2° for either tilt angles. The PID controller, while not overshooting nearly as much, lags constantly behind the reference signal by about 0.7 s and the plot is in some parts jittery. The overall tracking error is much smaller for FFvI than for PID controller. It can be said that both controllers perform acceptably, while the FFvI controller is faster while still passing the overshoot criteria. One ability that the FFvI controller has is to change the stiffness of the system while maintaining reference tilt. This is shown in Fig. <ref>. The module is first tilted to α_xref = 10^∘, α_yref = 5^∘ at a requested aggregate pressure of 3 bar. Then, the aggregate pressure is changed stepwise 6 bar, 9 bar, 12 bar and 15 bar, respectively. A change in aggregate pressure corresponds to a change in stiffness of the system. As can be seen from the Fig. <ref> pressure and therefore stiffness can be changed online. This change results in a momentous destabilization of the system resulting in slight position loss. The maximum error for our test was at the transition between aggregate pressure 12 bar and 15 bar with maximum error for α_xref = 1.3^∘, α_yref = 1.65^∘. This can be again, attributed to nonsynchronous pressure change between the bellows and possible measurement errors in the original feed-forward controller input data. Therefore, instead of a sharp aggregate pressure step a smooth aggregate pressure transition should decrease this issue. § DYNAMIC TEST OF FFVI TILT PLATFORM CONTROLLER To be able to apply the proposed controller as part of the control system of the whole manipulator it is necessary to test the controller under dynamic conditions. For this purpose a two axis loading mechanism was developed, see Fig. <ref>. It consists of two linear motion axis stacked perpendicular on top of each other and a attached to the top axis. This mechanism is mounted on top of a tilt module with the axis axis of the tilt platform aligned with the linear motion axis. It can be seen that the potential loading momentum will be different for both axis because the bottom axis is loaded not only by the loading weight but also the top axis itself. The possible motion of one linear axis is 0mm to 330mm and is centered on the central axis of the tilt module. The weight of the axis and loading weight is in Tab. <ref> .This mechanism is used to generate dynamic loading forces to study the capability of the FFvI controller to reject dynamic disturbances. The experiment seen in Fig. <ref> is done for reference tilts α_x=0° and α_y=0°. Error rejection is facilitated by the I controller part of the algorithm, hence the test will be performed with the I controller active and, for comparison, with the I controller inactive. 
Also, the experiment will be done for different values of aggregate pressure and for different speeds of movement of the loading weight. The movement of the load was converted to loading momentum around axis x and y (see Fig. <ref>). Fig. <ref> and Fig. <ref> show only the the extremes of the test, different combinations of load speed and aggregate pressure were also tested. The disturbance momentum was indirectly established from the measured movement of the load and the geometry of the mechanism. The shown time window is the same for all movement speeds. The maximum total error for all measurements is in Tab. <ref>. From the above figures one can deduce important findings regarding the behaviour of the regulator under dynamic changing load. The controller greatly benefits the precision of the control by rejecting the residual error created by feed-forward error imperfection where the maximum total error with the I part active was 1.52° and the mean error was 0.86° and the maximum total error with the I part inactive was 6.85° and the mean error was 3.85°. Apart from that a well tuned variable gain I-controller has a significant stabilisation effect at higher dynamic loads. It can also be seen, especially at higher speeds of the load, that a higher aggregate pressure has a positive effect on the rejection of dynamic loads. It can be concluded that his controller in its current state is able to control the prototype manipulator, especially at lower speeds and higher aggregate pressure settings. § CONCLUSION In this paper a new type of controller for the control of the tilt of a pneumatic bellows actuated module of the cascade robot PneuTrunk was presented. The controller consists of a feed-forward controller designed using experimental data and a variable gain I-controller. The feed-forward controller is created by fitting the data at a certain pressure level using a polynomial function and subsequently, again fitting the resulting set of polynomial function constants by another set of polynomials. This allows for a simple and fast controller, that not only allows to control the tilt of the module, but also its stiffness on demand. The variable gain I-controller supplements the feed-forward controller by adding a feedback loop, hence facilitating disturbance rejection and correcting for feed-forward controller imperfections. The variable gain allows for fast error correction while limiting overshoot and windup at the same time. This hybridisation approach allows for a simple controller design for a complex MIMO systems that can be easily adjusted and updated. This controller was compared to other established controllers. On the feed-forward level, the controller was compared with an ANFIS controller, delivering comparable results. Comparing the complete controller to a tuned PID controller showed that our FFvI controller is faster, can reliably follow a harmonic reference signal while maintaining required performance parameters. This controller was also testet under dynamic load to satisfactory results. The maximum FFvI controller error during the positioning of the module with consideration of dynamic disturbance was only 1.52°. To further improve the performance of the controller, it is necessary to create a comprehensive mathematical model of the module, mainly to combat the detrimental effects of the hysteretic behavior of the bellows. It is also appropriate to compare different types of functions driving the variable gain of the I-controller. 
In the future, this controller will be applied as a part of a larger control system controlling the pneumatic cascade robot PneuTrunk. § ACKNOWLEDGMENTS This research was funded by Slovak Grant Agency VEGA 1/0436/22 Research on modeling methods and control algorithms of kinematically redundant mechanisms and VEGA 1/0201/21 Mobile mechatronic assistant. This research has also been elaborated with the support of the project Research Centre of Advanced Mechatronic Systems, reg. No. CZ.02.1.01/0.0/0.0/16_019/0000867, in the frame of the Operational Program Research, Development and Education.
http://arxiv.org/abs/2306.04725v1
20230607184147
Nonlinear Evolution of Quadratic Gravity in 3+1 Dimensions
[ "Aaron Held", "Hyun Lim" ]
gr-qc
[ "gr-qc" ]
[email protected] Theoretisch-Physikalisches Institut, Friedrich-Schiller-Universität Jena, Max-Wien-Platz 1, 07743 Jena, Germany The Princeton Gravity Initiative, Jadwin Hall, Princeton University, Princeton, New Jersey 08544, U.S. [email protected] Both authors contributed equally. The names are listed alphabetically. Computational Physics and Methods (CCS-2), Los Alamos National Laboratory, Los Alamos, NM 87545 USA Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM 87545 USA We present a numerically stable system of (3+1) evolution equations for the nonlinear gravitational dynamics of quadratic-curvature corrections to General Relativity (Quadratic Gravity). We also report on the numerical implementation of these evolution equations. We recover a well-known linear instability and gather evidence that – aside from said instability – Quadratic Gravity exhibits a physically stable Ricci-flat subsector. In particular, we demonstrate that Teukolsky-wave perturbations of a Schwarzschild black hole as well as a full binary inspiral (evolved up to merger) remain Ricci flat throughout evolution. This suggests that, at least in vacuum, classical Quadratic Gravity can mimic General Relativity, even in the fully nonlinear strong-gravity regime. Nonlinear Evolution of Quadratic Gravity in 3+1 Dimensions Hyun Lim July 31, 2023 ========================================================== § MOTIVATION The dynamics of General Relativity (GR) is governed by terms at linear order in (Riemann) curvature. As we gain access to the strong gravity regime <cit.>, we probe potential new physics which becomes relevant at higher order in curvature. Such new physics is suggested by the cosmological riddles of dark matter <cit.> and dark energy <cit.>. Moreover, GR predicts its own breakdown, as singularity theorems <cit.> imply geodesic incompleteness in the interior of black holes. In the context of new physics at strong curvature, such a breakdown is not surprising: Close to the formation of a singularity, curvature scales grow (arbitrarily) large, hence, potential higher-order curvature corrections are no longer negligible, and the dynamics of GR needs to be modified to account for these corrections. If curvature corrections are present, the respective new-physics scale may occur anywhere between the largest currently accessible curvature scales and the Planck scale. Taking a step beyond GR, we focus on dynamics at quadratic order in curvature. Such quadratic-curvature corrections are widely expected to arise from quantum fluctuations, see <cit.> for perturbative quantum gravity, <cit.> for lattice approaches to quantum gravity, <cit.> for loop-quantum gravity, <cit.> for string theory, and <cit.> for asymptotically safe gravity. Quadratic curvature corrections occur in the form of gravitational self-interactions <cit.> and in the form of non-minimal couplings of curvature to other fields <cit.>. Both sectors can be unified in the context of an effective field theory of gravity and matter, see, e.g., <cit.>. Field redefinitions can mix between the pure-gravity and the non-minimal sector and, moreover, between different orders in curvature, see, e.g., <cit.>. Several different terms may thus be physically equivalent if the field redefinitions do not impact physical conclusions. In the following, we will focus on gravitational self-interactions, we will not perform field redefinitions, and we will neglect any potential non-minimal couplings of curvature to other fields. 
We abbreviate the respective theory as Quadratic Gravity (QG) – sometimes also called Stelle-gravity <cit.>. General Relativity tends to hide singularities, and thus regions of diverging curvature, behind horizons <cit.>, see <cit.> for the potential exception of critical collapse. Experimental probes of horizon-scale physics <cit.> thus provide the most promising way to constrain potential new physics at large curvature. Here, we are motivated, in particular, by the rapidly growing catalog of gravitational-wave events <cit.>. Utilizing said data to constrain new physics <cit.> will eventually require predictions for gravitational wave forms in theories beyond GR. The key tool to predict the respective nonlinear dynamics close to merger is well-posed numerical evolution, see <cit.> for pioneering work in numerical relativity and <cit.> for reviews of the well-posed initial value problem in GR. Beyond GR, numerical evolution in the presence of non-minimally coupled scalar degrees of freedom has received much attention <cit.> and (for a specified set of theories) well-posedness has been established at weak non-minimal coupling <cit.>. See also <cit.> for evolution including pure-gravity operators at quartic order <cit.> and by means of damped high-frequency modes <cit.>. In previous work <cit.>, we verified stable numerical evolution in the spherically-symmetric sector of Quadratic Gravity. Here, we report on an extension of the evolution equations to (3+1) dimensions, following, in particular, the pioneering work of Noakes <cit.>, see also <cit.>. In <ref>, we start by reviewing QG, its equations of motion, and the propagating degrees of freedom. In <ref>, we perform a (3+1) decomposition and derive our key analytical result: a set of 1^st-order evolution equations. In <ref>, we describe our specific numerical implementation and verify numerical stability. In <ref>, we present first physical results which suggest that QG exhibits a nonlinearly stable Ricci-flat subsector which is fully equivalent to GR. In <ref>, we conclude with a discussion and an outlook on future work. Several technical details are relegated into appendices. We use the (-,+,+,+) signature and use Latin letters as spacetime indices. Moreover, we work in Planck units, i.e., setting the speed of light c=1. For clarity, we keep Newton's constant G explicit. Round (square) brackets denote full of the enclosed indices. § SETUP: QUADRATIC GRAVITY The action of Quadratic Gravity (QG) is given by S_QG = ∫_x [ ℒ_mat[Φ] +1/16π GR +α R_abR^ab -β R^2 ], where ∫_x is shorthand notation for ∫ d^4x √(det(-g)). In the following, the first term is taken to be independent of the curvature and depends solely on minimally coupled matter fields (and on the cosmological constant). The matter fields are collectively denoted by Φ. The second term is linear in the curvature and corresponds to GR, parameterized by Newton's constant G=1/(8π) (or, equivalently, by the Planck mass ). The third and fourth term are quadratic in the curvature and are parameterized by couplings α and β. In four dimensions, α and β are dimensionless and all other (vacuum) terms at quadratic order in the curvature can be rewritten into linear combinations of the included ones by means of the Gauss-Bonnet identity. We neglect boundary terms and non-minimal couplings between matter and curvature. The theory of QG, as defined in <ref>, propagates (i) the usual graviton, i.e., a massless spin-2 mode; (ii) a massive spin-0 mode; and (iii) a massive spin-2 mode. 
The massive spin-2 mode has an opposite-sign kinetic term (in comparison to the other two modes) and is thus an Ostrogradski ghost. The massive spin-0 and spin-2 mode have respective masses m_0^2 = -1/32π G(3β - α) , m_2^2 = -1/16π Gα . In the following, we express the dimensionless couplings α and β in terms of the masses m_0 and m_2. Due to the inclusion of quadratic-curvature terms, the dynamics of QG is governed by fourth-order equations of motion. Nevertheless, the full theory can be described in terms of the same degrees of freedom <cit.> as the linearized theory. To make this explicit, the Ricci scalar ℛ and the traceless Ricci tensor ℛ_ab=R_ab - 1/4g_abR can be promoted to independent evolution variables, as indicated by the calligraphic notation. This allows to write the equations of motion, obtained by varying the action in <ref>, as follows[While <cit.> use different definitions of the couplings (related by the Gauss-Bonnet identity), the respective equations of motion are all equivalent. Some signs in <cit.> differ which, however, does not affect conclusions about a well-posedness. ] <cit.>: massless spin-2: G_ab(g) = ℛ_ab - 1/4 g_abℛ≡1/^2T_ab , massive spin-0: ℛ = m_0^2 ℛ +m_0^2/^2 T^c_cc , massive spin-2: ℛ_ab = m_2^2 ℛ_ab - m_2^2/^2 T^(TL)_ab + 2 ℛ_a^acℛ_bc - 1/2g_abℛ^cdℛ_cd + 1/3(m_2^2/m_0^2+1)ℛ ℛ_ab - 1/3( m_2^2/m_0^2-1 )[ ∇_a∇_b ℛ - 1/4g_ab( m_0^2 ℛ + m_0^2/^2 T^c_cc ) ] - 2 ℛ^cdC_acbd . For reasons detailed below, we will refer to these equations as the metric equation, the trace equation, and the traceless equation, respectively. The metric equation, i.e., <ref>, is nothing but the definition of the Einstein tensor: in terms of the metric on the left-hand side (LHS); and in terms of the fiducial variables on the right-hand side (RHS). It provides a second-order evolution equation for the metric. The fiducial variables ℛ and ℛ_ab, appearing on the RHS, are effectively equivalent to matter source terms, for which we have defined a fiducial stress-energy tensor T_ab≡^2(ℛ_ab - 1/4g_abℛ). Hence, the metric equation can be treated as in GR. For instance, one can make use of harmonic gauge to diagonalize the metric equation <cit.>. Alternatively, one may use the BSSN formalism <cit.>, as we do in <ref>. The trace equation, i.e., <ref>, provides a 2^nd-order evolution equation for ℛ. The traceless equation, i.e., <ref>, provides a 2^nd-order evolution equation for ℛ_ab. Herein, we split the actual matter sources into a trace (T^c_cc) and a traceless (T^(TL)_ab) part which, in turn, source the respective fiducial variables. To keep the equations as concise as possible, we have also introduced the Weyl-tensor C_acbd. The latter can be expressed in terms of R_abcd, ℛ_ab, and ℛ as C_acbd = R_acbd + g_b[cℛ_a]d + g_d[aℛ_c]b +1/6 g_b[a g_c]dℛ . In the evolution equations of ℛ (<ref>) and ℛ_ab (<ref>), derivatives of the metric only enter via double covariant derivatives as well as in R_abcd. § DERIVATION: (3+1)-DECOMPOSITION OF THE EVOLUTION EQUATIONS The evolution system, as given in <ref>, is a good starting point to perform the (3+1)-decomposition. Herein, we decompose the metric, i.e., g_ab = γ_ab - n_an_b into the spatial metric γ_ab and the normal vector n^a orthogonal to the spatial hypersurface. (The normal vector is chosen such that n^an_a = -1.) Covariant derivatives ∇_a are projected onto spatial and normal part via ∇_a = (γ^b_a - n_a n^b)∇_b ≡ D_a - n_a n^b ∇_b , where we have defined the usual spatial covariant derivative D_a ≡γ_a^b∇_b. 
Moreover, we introduce the usual geometric definition[The extrinsic curvature can be defined as the symmetric part of the spatial projection of the gradient of the normal vector, i.e., as K_ij≡ - γ_i^aγ_j^b∇_a n_b, but if the normal vector is rotation free, the antisymmetric part vanishes and the strict definition reduces to the one in <ref>] of the extrinsic curvature K_ij and the acceleration a_i, respectively, as the mixed and the spatial projection of the gradient of the normal vector, i.e., a_i ≡γ_i^b n^a ∇_a n_b , K_ij ≡ - γ_i^aγ_j^b∇_a n_b . The purely temporal projection of ∇_a n_b vanishes such that one may abuse notation and also write a_b = n^a ∇_a n_b. In this case, n^ba_b=0. In complete equivalence to the above geometric definition, one can give a dynamical definition of the extrinsic curvature as a 1st-order variable for the metric, i.e., as K_ij≡-1/2ℒ_nγ_ij, where ℒ_n denotes the Lie derivative along n^a. Both definitions are fully equivalent and imply each other. In the following, we reduce the remaining 2^nd-order derivatives in the time-direction, i.e., along n^a, to 1^st-order derivatives. In anticipation of that, we define additional 1^st-order variables V_ab≡ -n^c∇_cℛ_ab , ℛ̂≡ -n^c∇_cℛ , for the fiducial Ricci variables. Furthermore, we decompose the fiducial traceless-Ricci tensor ℛ_ab and its 1^st-order variable V_ab such that 𝒜 ≡γ^cdℛ_cd , ℬ ≡γ^cdV_cd , 𝒜_ab ≡γ_a^cγ_b^dℛ_cd - 1/3γ_ab𝒜 , ℬ_ab ≡γ_a^cγ_b^dV_cd - 1/3γ_abℬ , 𝒞_a ≡ n^cγ_a^dℛ_cd , ℰ_a ≡ n^cγ_a^dV_cd , ⇒ 𝒜 = n^an^bℛ_ab , ℬ = n^an^bV_ab , where the last two relations are enforced by the tracelessness of ℛ_ab and V_ab. Equivalently, one may write this (3+1) split as ℛ_ab = 𝒜_ab + 1/3 γ_ab 𝒜 - 2 n_(a𝒞_b) +n_an_b 𝒜 , V_ab = ℬ_ab + 1/3 γ_ab ℬ - 2 n_(aℰ_b) +n_an_b ℬ . The remaining metric-dependent quantities can be decomposed using the conventional Gauss-Codazzi and Ricci equations, as collected in <ref>. We decompose the actual matter sources following the usual convention, i.e., ρ = n_a n_b T^ab , S_i = - γ_ia n_b T^ab , S_ij = γ_iaγ_jb T^ab . Similarly, we decompose the fiducial matter sources, i.e., ρ = n_a n_b T^ab = ^2(𝒜 + 1/4ℛ) , S_i = - γ_ia n_b T^ab= -^2 𝒞_i , S_ij = γ_iaγ_jb T^ab= ^2( 𝒜_ij +1/3γ_ij𝒜-1/4γ_ijℛ) . For the actual matter sources, we note that T^(TL)_ab, appearing in <ref>, is traceless in 4D but the 3D projections do not vanish, i.e., n^an^b T^(TL)_ab = 1/4(S+3ρ) , γ^ab T^(TL)_ab = 1/4(S+3ρ) , γ_i^aγ_j^b T^(TL)_ab = S_ij - 1/4γ_ij(S-ρ) . With these definitions at hand, the decomposition of the three evolution equations (metric equation, trace equation, and traceless equation, cf. <ref>) is tedious but essentially straightforward. After the decomposition, we also identify which of the decomposed equations correspond to constraints, constraint evolution, or physical evolution equations. The busy reader may skip to <ref> where we summarize the result. §.§ (3+1) decomposition of the metric equation <ref>, determines the evolution of the metric g_ab. The fiducial variables ℛ and ℛ_ab can be treated as fiducial matter sources. The actual matter sources T^ab do not appear in the metric evolution. They will only affect the other evolution equations. 
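For bookkeeping, the fiducial sources that enter the metric sector can be assembled directly from the decomposed variables defined above, as in the following minimal sketch; the prefactor Mpl2 stands for the squared Planck mass multiplying the fiducial stress-energy tensor, and the per-grid-point array layout is an implementation assumption.

import numpy as np

def fiducial_sources(A, A_ij, C_i, R, gamma_ij, Mpl2=1.0):
    # rho  = Mpl2 * (A + R/4)
    # S_i  = -Mpl2 * C_i
    # S_ij = Mpl2 * (A_ij + gamma_ij * A/3 - gamma_ij * R/4)
    # A, R are scalars; C_i is a 3-vector; A_ij, gamma_ij are 3x3 arrays at one grid point.
    rho = Mpl2 * (A + 0.25 * R)
    S_i = -Mpl2 * np.asarray(C_i)
    S_ij = Mpl2 * (np.asarray(A_ij) + np.asarray(gamma_ij) * (A / 3.0 - 0.25 * R))
    return rho, S_i, S_ij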
As for most numerical efforts in GR, our starting point for the metric sector is the York-variant of the ADM equations <cit.>, i.e., (n^c∇_cγ_ij) = -2 D_(in_j) -2 K_ij , (n^c∇_c K_ij) = - a_ia_j -2 D_(ia_j) - 2 K_m(iD_j)n^m - 2K_imK^m_j + K K_ij +^(3)R_ij - 1/^2( S_ij - 1/2γ_ij(S - ρ) ) , 0 = D_jK^j_i - D_i K - 1/^2S_i , 0 = ^(3)R - K_ijK^ij + K^2 - 2/^2ρ . The first equation (evolution of the spatial metric) is a definition, used to reduce the equations from 2^nd-order to 1^st-order in time. It is the metric equivalent of our definitions in <ref>. However, the fiducial variable, that one has introduced in the metric sector, i.e., K_ij, also carries direct geometric meaning – it is the extrinsic curvature of the spatial hypersurface. For the second equation (evolution of the extrinsic curvature), one has used the lapse constraint in <ref> to simplify the evolution equation. Hence the appearance of ρ in <ref>. The 3^rd and 4^th equation correspond to the momentum and Hamiltonian constraint, respectively. In summary, including fiducial matter sources, the metric equation decomposes in complete equivalence to GR. The spatial projections result in evolution equations for γ_ij and K_ij, i.e., for 12 pieces of initial data[Here, we already assume that hypersurfaces are chosen such as to fix g_00 and g_0i by an appropriate choice of lapse and shift as well as n^c∇_cg_00 and n^c∇_cg_0i such as to obey a specified gauge choice, e.g., harmonic gauge. This choice of gauge/coordinates already fixes 8 out of 20 pieces of initial data in the second-order evolution of g_μν.]. The mixed and the temporal projections of the metric equation result in 4 constraints – the Hamiltonian and the momentum constraint. Moreover, there remains coordinate freedom within the spatial hypersurface: We are free to choose the spatial coordinates as well as the initial time, hence removing 4 further pieced of initial data. Overall, as in GR, one finds 12-4-4=4 independent pieces of initial data, i.e., 2 degrees of freedom, in the metric sector. We will come back to the overall counting of degrees of freedom in <ref>. §.§ (3+1) decomposition of the trace equation <ref> determines the evolution of the fiducial Ricci scalar ℛ. Since the only derivatives appear in ℛ, it is of quasi-linear form. We (3+1)-decompose the covariant derivatives on the left-hand side (LHS) as ℛ = n^a ∇_a ℛ̂ +(D_i + a_i)D^iℛ - K ℛ̂ . Combining the above result with the RHS of <ref> provides two 1^st-order (in time) equations for ℛ and ℛ̂, i.e., n^a ∇_a ℛ = - ℛ̂ , n^a ∇_a ℛ̂ = - (D_i + a_i)D^iℛ + K ℛ̂ + m_0^2 ℛ+ m_0^2/^2(S - ρ) , where ρ and S = γ^abT_ab = γ^abS_ab correspond to the trace of the actual matter source terms, decomposed analogously to <ref>. In summary, we find two evolution equations and no constraints in the trace sector. §.§ (3+1) decomposition of the traceless equation <ref> evolves the traceless fiducial Ricci tensor ℛ_ab. Without recasting, this equation is not of quasi-linear form. Therefore, while performing the (3+1) decomposition, we can expect to have to use the previous evolution equations to remove all second order time derivatives on the RHS. This procedure is reminiscent of the order reduction in <cit.>. Before doing so, we consider the (3+1) decomposition of the LHS, i.e., ℛ_ab = n^c ∇_c V_ab + (D_c + a_c)D^cℛ_ab - K V_ab , where, as for the fiducial Ricci scalar, we have introduced the first-order fiducial variable V_ab = - n^c ∇_c ℛ_ab, cf. <ref>. 
Herein, the spatial covariant derivatives should strictly be understood as a shorthand notation, i.e., D_cD^cℛ_ab≡γ^d_c∇_d (γ^ce∇_e ℛ_ab) and a^c D_cℛ_ab≡ a^eγ_e^c∇_cℛ_ab. This subtlety is important since ℛ_ab is not yet projected and thus contains temporal components. The derivation is made explicit in <ref>. Overall, this renders the LHS manifestly 1^st-order in time. On the RHS of <ref>, the only derivative terms are contained in ∇_a∇_b ℛ and in the Riemann tensor R_acbd. The Riemann tensor can be decomposed in the usual way, cf. <ref>. Regarding ∇_a∇_b ℛ, we find ∇_a∇_b ℛ = D_a D_b ℛ + 2 n_(aD_b)ℛ̂ - 2 K_abℛ̂ - n_a n_b (n^c ∇_c ℛ̂) + n_a n_b a_c D^cℛ , which is, as expected, symmetric in (a,b). Here, we have used (i) the projection of the covariant derivative, cf. <ref>; (ii) the geometric definitions of acceleration and extrinsic curvature, i.e., <ref>; and (iii) the identity 0=∇_d g^c_a = ∇_d(γ^c_a - n_an^c) = ∇_dγ^c_a - n_a∇_dn^c -n^c∇_dn_a. The calculation is made fully explicit in <ref>. Collecting everything, we find 1^st-order evolution equations for the fiducial variables ℛ_ab and V_ab, i.e., n^c ∇_c ℛ_ab = -V_ab , n^c∇_cV_ab = - (D_c + a_c)D^cℛ_ab + K V_ab + m_2^2ℛ_ab - m_2^2/^2 T^(TL)_ab +2 ℛ_a^acℛ_bc -1/2g_abℛ^cdℛ_cd +1/3(m_2^2/m_0^2+1)ℛ ℛ_ab - 1/3(m_2^2/m_0^2-1) [ ( D_a D_b + n_a n_b a_c D^c -1/4 g_ab m_0^2 )ℛ -1/4m_0^2/^2 g_ab(S-ρ) + 2( n_(aD_b) - K_ab)ℛ̂ - n_a n_b (n^c ∇_c ℛ̂) ] - 2 ℛ^cd[ g_b[cℛ_a]d + g_d[aℛ_c]b +1/6g_b[ag_c]dℛ +^(3)R_acbd +2 K_a[bK_d]c + 4 a_[an_c]a_[bn_d] - 4 n_[b(D_d]a_[a)n_c] +4(D_[aK_c][b)n_d] +4(D_[bK_d][a)n_c] +4 n_[a K^e_c] K_e[bn_d] +4 ( γ^f_[an_c]γ^g_[bn_d]) ( n^e∇_eK_fg) ] . The terms involving (n^c ∇_c ℛ̂) and (n^e∇_eK_fg) can be writted in terms of the other evolution equations such that no time derivatives remain on the RHS. It remains to project all the non-derivative terms onto spatial and temporal parts and thereby decompose the above two 1^st-order traceless equations into spatial and temporal parts. §.§ Projection of the traceless equations We can explicitly project the traceless equations in order to separate constraint data from initial data. In all cases, we will obtain four different projections, i.e., we can obtain (i) the spatial trace with γ^ab; (ii) the spatial projection with γ^a_cγ^b_d; (iii) the temporal projection with n^an^b; and (iv) the mixed projection with n^aγ^b_c (or equivalently n^bγ^a_c). To obtain these, we have to commute the projection operators through the covariant derivative on the left-hand side of each respective equation which generates further terms. We present the explicit derivation in <ref> and find γ^ab(n^c∇_cℛ_ab) = (n^c∇_c𝒜) - 2 a^c𝒞_c , γ_i^aγ_j^b(n^c∇_cℛ_ab) = (n^c∇_c𝒜_ij) + 1/3γ_ij(n^c∇_c𝒜) - 2/3𝒜(D_(in_j) + K_ij) - 2a_(i𝒞_j) - 2 a^c(𝒜_c(in_j) + 1/3γ_c(in_j)𝒜) , n^an^b(n^c∇_cℛ_ab) = (n^c∇_c𝒜) - 2 a^c𝒞_c , n^aγ^b_d(n^c∇_cℛ_ab) = (n^c∇_c 𝒞_d) - n_d a^c 𝒞_c - a^a( 𝒜_ad + 2/3γ_ad𝒜) . The analogous projections for the fiducial first-order variables are obtained by the replacements ℛ_ab→V_ab, 𝒜_ij→ℬ_ij, 𝒜→ℬ, and 𝒞_a→ℰ_a. These left-hand-side projections separate the covariant equations into evolution equations (<ref>) and constraint evolution (<ref>). In line with the (3+1) conventions chosen in <ref>, the temporal projection in <ref> is redundant with the spatial trace projection in <ref>. The RHS projections are tedious, and we check them in the ancillary files[See the repository (<https://github.com/aaron-hd/QG-sphSymm-ancillary>). 
Parts of the derivation make use of the package <cit.> (<http://www.xact.es/>).]. Crucially, the RHS terms do not impact the character of the respective projections since they only involve spatial derivatives. The trace and spatial projection result in the following set of evolution equations for the spacial variables 𝒜, 𝒜_ij, ℬ, and ℬ_ij: n^c∇_c𝒜_ij = 2 a^k𝒞_k -ℬ . n^c∇_c𝒜_ij = 2/3 𝒜( D_(in_j) + K_ij) +2 a^c( 𝒜_c(in_j) +1/3γ_c(in_j)𝒜 +γ_c(i𝒞_j)) - ℬ_ij - 2/3γ_ija^k𝒞_k , n^c∇_cℬ_ij = 2 a^kℰ_k - 1/4m_2^2/^2(S+3ρ) - ( D_iD^i + a_iD^i - m_2^2 +1/6 ℛ)𝒜 + K ℬ +1/3(m_2^2/m_0^2+1)ℛ 𝒜 -1/3(m_2^2/m_0^2-1)[ ( D_iD^i - 3/4m_0^2 )ℛ -3/4m_0^2/^2(S-ρ) -2 K ℛ̂] +3/2( 𝒜_ij𝒜^ij + 4/3 𝒜^2 - 2 𝒞^i𝒞_i ) - 2 𝒞^i( D^j K_ij + a^j K_ij) - 4 K^ijD_i𝒞_j - 2( 𝒜^ij +1/3γ^ij𝒜)( ^(3)R_ij + 2 K_i[jK^k_k]) + 4 𝒞^j( D_jK - D^iK_ij) -2 𝒜( a_i a^i + D_i a^i - K^ij K_ij + γ^ij(n^c∇_c K_ij) ) , n^c∇_cℬ_ij = 2/3 ℬ( D_(in_j) + K_ij) +2 a^c( ℬ_c(in_j) +1/3γ_c(in_j)ℬ +γ_c(iℰ_j)) - m_2^2/^2( S_ij - 1/4γ_ij(S-ρ) ) - 1/3γ_ij(n^c∇_cℬ) -( D_kD^k + a_kD^k - m_2^2 + 1/6ℛ)( 𝒜_ij + 1/3γ_ij𝒜) +K( ℬ_ij +1/3γ_ijℬ) +1/3(m_2^2/m_0^2+1) ℛ( 𝒜_ij +1/3γ_ij𝒜) -1/3(m_2^2/m_0^2-1)[ ( D_iD_j -1/4γ_ijm_0^2 )ℛ -1/4m_0^2/^2γ_ij(S-ρ) -2 K_ijℛ̂] +1/2 γ_ij( 𝒜^kl𝒜_kl +4/3𝒜^2 -2 𝒞^k𝒞_k )- 2 𝒞_(i( D^k K_j)k + a^k K_j)k) - 4 K_k(i D^k 𝒞_j) -2( 𝒜^kl +1/3γ^kl𝒜)( ^(3)R_ikjl +K_i[jK_l]k) +4 𝒞^k( D_k K_ij - D_(iK_j)k) -2 𝒜( a_i a_j + D_(i a_j) - K_i^k K_kj + γ_i^k γ_j^l (n^c∇_c K_kl) ) . Here, we have used the evolution equation for (n^c∇_c𝒜) (cf. <ref>) on the RHS of the evolution equation for (n^c∇_c𝒜_ij) (cf. <ref>). It can be verified explicitly that the latter equation is (spatially) traceless. Analogously, the last term in the first line of <ref> ensures that the evolution equation for ℬ_ij is (spatially) traceless. We refrain from plugging in <ref> (as well as the evolution equation for K_ij) explicitly to keep the expressions as concise as possible. §.§ Bianchi constraints The fiducial variables ℛ and ℛ_ab are not physical. Their only purpose is to reduce the order of the system. Naturally, in order for the reduced evolution to capture the physics of the original evolution, and the correct degrees of freedom in particular, we have to ensure that the fiducial variables evaluate to the proper metric quantities <cit.>, i.e., that 0=Δ_ab≡ G_ab(g) - ℛ_ab +1/4g_abℛ . However, this equation is nothing but the metric equation itself which we already added to the system of evolution equations. Hence, projection will only reproduce the Hamiltonian constraint (temporal), the momentum constraint (mixed), and the metric evolution equation (spatial). It seems like there are no novel constraints. Crucially, since we have been replacing 2^nd-order variables, we also have to ensure that their 1^st and 2^nd derivatives match the original metric quantities. The simplest such constraint is nothing but the Bianchi identity expressed in terms of the fiducial variables, i.e., 0= ∇_bΔ_a^b = [ - ℰ_a - K_a^b𝒞_b - K𝒞_a - D^b𝒜_ab - 1/3D_a𝒜 + 1/4D_aℛ]_spatial + n_a [ ℬ + D^b𝒞_b + 1/4ℛ̂ + K_bc𝒜^bc + 4/3K𝒜]_temporal . Following Noakes <cit.>, we refer to these 4 constraints as “Bianchi constraints”. Similarly, the normal derivative of the Bianchi constraint 0 = n_c∇^c(∇_bΔ_a^b) generates 4 further constraints which we refer to as “Bianchi-dot constraints” but which need not be written explicitly for our purposes. 
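As a practical aside, the remark that the evolution equations for 𝒜_ij and ℬ_ij are spatially traceless translates into a simple numerical monitor: any trace generated by truncation error can be measured and, if desired, projected out, in analogy to how BSSN codes treat the conformal traceless extrinsic curvature. The sketch below is illustrative only (array layouts and names are ours, not those of the production code), assuming the fields are stored as per-point 3×3 symmetric matrices.

```python
import numpy as np

def spatial_trace(gamma_inv, A):
    # spatial trace gamma^{ij} A_{ij} at every grid point
    return np.einsum('...ij,...ij->...', gamma_inv, A)

def make_traceless(gamma, gamma_inv, A):
    # remove a numerically generated trace: A_ij -> A_ij - (1/3) gamma_ij (gamma^{kl} A_kl)
    tr = spatial_trace(gamma_inv, A)
    return A - gamma * tr[..., None, None] / 3.0

# toy data on a tiny grid: flat spatial metric, random symmetric field standing in for A_ij
rng = np.random.default_rng(1)
shape = (4, 4, 4)
gamma = np.broadcast_to(np.eye(3), shape + (3, 3)).copy()
gamma_inv = np.linalg.inv(gamma)
A = rng.normal(size=shape + (3, 3))
A = 0.5 * (A + np.swapaxes(A, -1, -2))

A_tl = make_traceless(gamma, gamma_inv, A)
print(np.max(np.abs(spatial_trace(gamma_inv, A_tl))))   # ~1e-16: trace removed
```

In an actual evolution, the same trace diagnostic can be logged alongside the Hamiltonian and momentum constraints as an inexpensive consistency check.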
§.§ Constraint evolution We recall that, in the ADM formalism, the purely temporal and the mixed projection of the Einstein equations result in the Hamiltonian and the momentum constraint. Similarly, the temporal and mixed projection of the higher-derivative equations are not propagating physical degrees of freedom. As mentioned before, the temporal projections (cf. <ref>) are fully redundant and merely reproduce the spatial trace projections. The mixed projections correspond to evolution equations for 𝒞_i and ℰ_i. For instance, the mixed projection of (n^c∇_cℛ_ab)= -V_ab (cf. <ref>) with n^aγ^b_i results in n^c∇_c𝒞_i = a^k[ 𝒜_ki + 2/3γ_ki𝒜] + n_ia^k𝒞_k - ℰ_i , which corresponds to an evolution equation for 𝒞_i. Similarly, the mixed projection of <ref> corresponds to evolution of ℰ_i. We refrain from showing the full expansion of the latter projection since (see <ref>) we can remove 𝒞_i and ℰ_i by use of the momentum constraint and the spatial projection of the Bianchi constraint, respectively. Hence, there is no need to explicitly evolve the variables 𝒞_i and ℰ_i. Instead, 𝒞_i and ℰ_i can be understood as constraint variables and the mixed projections can be interpreted as constraint evolution. §.§ Summary of evolution equations and constraints Overall, the (3+1) decomposition is now phrased in terms of 32 free functions of initial data[ The components of n^a and its derivatives in the time direction can be seen as the remaining 8 free functions of initial data in order to match with the 40 free functions of initial data expected from the reduction of a 4^th-order evolution of a symmetric tensor in 4D. However, here, and in GR, they are fully constrained/determined by gauge/coordinate choice. For instance, one can choose harmonic gauge, in which case F^a=0 and (n^c∇_c F^a)=0 give the respective 8 harmonic constraints. ], i.e., the spatial metric γ_ij as well as its 1^st-order variable K_ij; the (fiducial) Ricci scalar ℛ as well as its 1^st-order variable ℛ̂; and the (3+1) components of the (fiducial) traceless Ricci tensor 𝒜, 𝒜_ij, and 𝒞_i as well as their 1^st-order variables ℬ, ℬ_ij, and ℰ_i. For these 32 free functions of initial data, only 16 correspond to physical initial data: In the metric sector, the Hamiltonian and momentum constraints in <ref> as well as 4 coordinate choices in the initial-data surface reduce from 12 to 4 pieces of initial data. Hence, the metric sector still propagates the expected 2 degrees of freedom of a massless spin-2 mode. While the constraints are modified, the constraint structure remains as in GR. In the fiducial sector, the 4 Bianchi (cf. <ref>) and the 4 Bianchi-dot (cf. <ref>) constraints reduce from 20 to 12 pieces of initial data. Hence, the fiducial sector contains 6 propagating degrees of freedom, corresponding to one massive spin-0 and one massive spin-2 mode. While not all of the constraints are algebraic, it is convenient that there are sufficiently many algebraic constraints in order to fully determine and thus remove the initial data for 𝒞_i (by use of the momentum constraint in <ref>) and ℰ_i (by use of the spatial projection of the Bianchi constraint in <ref>). In practice, we thus only need to evolve γ_ij and K_ij (see <ref>), ℛ and ℛ̂ (see <ref>, as well as 𝒜, 𝒜_ij, ℬ, and ℬ_ij (see <ref>), i.e., only 26 variables. 
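The bookkeeping of the preceding paragraph can be made concrete in a few lines; the field names below are illustrative placeholders, not the variable names of any particular code.

```python
# Evolved fields of the reduced system and their independent components per grid point.
evolved_fields = {
    "gamma_ij": 6,   # symmetric spatial metric
    "K_ij":     6,   # extrinsic curvature
    "R":        1,   # fiducial Ricci scalar
    "Rhat":     1,   # its first-order variable
    "A":        1,   # 3-trace of the spatial traceless Ricci tensor
    "A_ij":     5,   # symmetric, spatially traceless part
    "B":        1,   # first-order variable of A
    "B_ij":     5,   # first-order variable of A_ij
}
assert sum(evolved_fields.values()) == 26
# C_i and E_i (3 + 3 further components) are fixed algebraically by the momentum
# constraint and the spatial Bianchi constraint, so they are not evolved.
```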
§ METHOD: NUMERICAL EVOLUTION The (3+1) evolution equations derived in the previous section are fully general: we expect them to be compatible with all the state-of-the-art evolution schemes <cit.> and numerical code frameworks <cit.>. In the following, we will focus on the vacuum case, specify to the BSSN formulation <cit.>, and numerically evolve the system using the  <cit.> code. The purpose of our numerical efforts is twofold. The first purpose is of technical nature: We demonstrate that the evolution system is numerically stable, even in the nonlinear regime. The second purpose is physical: Given numerical stability, we then use the numerical evolution to investigate stability of the Ricci-flat subsector of QG. From here on, we specify to the usual (3+1) coordinate conventions, in which β^a = (0, β^i) , n^a = (1/α, -β^i/α) , ds^2 = -α^2 dt^2 +γ_ij( dx^i + β^i dt )( dx^j + β^j dt ) , with the lapse function α and the shift vector β^i. For the evolution of α and β^i we choose a standard (1+log) slicing and a Γ-driver, respectively <cit.>. Regarding the metric dynamics, the BSSN formulation <cit.> (see also <cit.> for subsequent proof of its strong hyperbolicity), proceeds exactly as in GR. For completeness, we summarize the BSSN evolution equations in <ref>. Together with the evolution equations for ℛ, ℛ̂, 𝒜, 𝒜_ij, ℬ, and ℬ_ij (see <ref>), these form the system of partial differential equations (PDEs) that we implement numerically. §.§ Numerical setup We implement the evolution equations in the  <cit.> framework. combines a parallel octree-refined adaptive mesh with a wavelet adaptive multiresolution. An additional Quadratic-Gravity module is built on top of this framework[See the repository <https://github.com/lanl/Dendro-GRCA>.]. We use a fourth-order finite-difference scheme to evaluate spatial derivatives and a fourth-order Runge-Kutta method to evolve in time. The Courant–Friedrichs–Lewy condition <cit.> which relates the temporal and spatial disretization is set to 0.25. Therefore, as we increase N_x,y,z (or decrease Δ (x,y,z)), the time discretization Δ t decreases. Other conditions are varied with respect to the test problems. §.§ Numerical stability To confirm numerical stability, we evolve a single Kerr black hole, perturbed only by numerical noise. We test with puncture initial data <cit.> for a Kerr black hole. Note that the additional QG variables are vanishing since Kerr is a Ricci-flat vacuum solution. The Kerr black hole is expressed in Kerr-Schild coordinates such that ds^2 = (η_ab + 2H k_a k_b) dx^a dx^b where η_ab is usual Minkowski spacetime and H = G M r/r^2 + a^2 (z/r)^2 , k_a dx^a = -dt - r(xdx + ydy) - a(xdy -ydx)/r^2+a^2 - zdz/r . Here, M is the black-hole mass and a is the spin parameter. In 3+1 form, we also have α =1/√(1+2Hk_0 k_0) , β_i = 2H k_0 k_i , γ_ij = δ_ij + 2H k_i k_j , and the extrinsic curvature can be obtained as K_ij = D_i β_j + D_j β_i/2α . The Kerr-Schild form is a horizon penetrating coordinate system such that there are no coordinate singularities in γ_ij and K_ij at the horizon. Kerr-Schild coordinates cover both the outside and the inside of the black hole. Since Kerr spacetime is Ricci flat, ℛ, ℛ̂, 𝒜, 𝒜_ij, ℬ, and ℬ_ij are initialized as zero. We aim to test for numerically (un)stable behavior of the time evolution. In anticipation of the presence of a linear instability in part of the parameter space, cf. <ref>, we choose mass values for which the instability is not relevant. 
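The Kerr–Schild expressions quoted above translate directly into code. The following sketch evaluates the ADM data (α, β_i, γ_ij) at a single point; it is illustrative and not taken from the evolution code. The relation between the Cartesian coordinates and the Kerr–Schild radius r, which the text leaves implicit, is taken from the standard quartic r^4 - r^2(R^2 - a^2) - a^2 z^2 = 0 with R^2 = x^2 + y^2 + z^2; K_ij would then follow from the quoted formula via derivatives of the shift (e.g., by finite differencing) and is omitted here.

```python
import numpy as np

def kerr_schild_adm(x, y, z, M=1.0, a=0.0, G=1.0):
    """Lapse, shift (lower index) and spatial metric of Kerr in Kerr-Schild form at one point."""
    R2 = x * x + y * y + z * z
    r2 = 0.5 * (R2 - a * a + np.sqrt((R2 - a * a) ** 2 + 4.0 * a * a * z * z))
    r = np.sqrt(r2)

    H = G * M * r / (r2 + a * a * (z / r) ** 2)
    k_t = -1.0
    k = np.array([-(r * x + a * y) / (r2 + a * a),     # k_x
                  -(r * y - a * x) / (r2 + a * a),     # k_y
                  -z / r])                             # k_z

    alpha = 1.0 / np.sqrt(1.0 + 2.0 * H * k_t * k_t)   # lapse
    beta_lo = 2.0 * H * k_t * k                        # shift beta_i
    gamma = np.eye(3) + 2.0 * H * np.outer(k, k)       # spatial metric gamma_ij
    return alpha, beta_lo, gamma

# example: a point outside the horizon of a Schwarzschild (a = 0) black hole
alpha, beta, gamma = kerr_schild_adm(3.0, 0.0, 0.0, M=1.0, a=0.0)
print(alpha, np.diag(gamma))
```

Since the fiducial variables vanish identically for this Ricci-flat data, only the metric sector needs to be initialized nontrivially.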
To perform a numerical stability test, we add random noise to all components of the initial data such that 𝐮(t=0) = 𝐮_0 + A_noiseRAND(x) where 𝐮=(γ_ij,K_ij,ℛ,ℛ̂,𝒜,𝒜_ij,ℬ,ℬ_ij) is the state vector for all the evolution variables, A_noise is a noise amplitude which we vary from 10^-10 to 10^-5, and RAND(x) is a random function that generates random values between -1 and 1. The result is summarized in <ref>. We find no indication for numerical instability in our evolution scheme. The same holds for all subsequent simulations. The respective constraint plots are presented in <ref>. § RESULTS: STABILITY OF THE RICCI-FLAT SUBSECTOR OF QUADRATIC GRAVITY In this section, we present our results on the Ricci-flat subsector of Quadratic Gravity. The physical upshot is twofold: first, we recover a well-known linear instability associated to massive spin-2 excitations; second, we demonstrate that – aside from this linear instability – even fully dynamical, Ricci-flat solutions like a binary merger seem to be nonlinearly stable. §.§ Recovering the linear instability in nonlinear evolution It is known from the linearized dynamics that a single Schwarzschild black hole can be subject to a linear instability <cit.>, akin to (i.e., linearly equivalent with) the long-wavelength Gregory-Laflamme instability of higher-dimensional black strings <cit.>. In QG, the onset and the timescale of this instability are determined by the mass m_2 of the massive spin-2 degree of freedom and by the gravitational radius r_g=2 GM of the Schwarzschild black hole <cit.>. In particular, the instability occurs whenever 2 GM m_2 ≡ p < p_crit≈ 0.87 . If this inequality is fulfilled, then there exists a linear mode which grows like ∼ e^Im(ω) t. The exponential growth rate is set by Im(ω) = q(p)/2 GM = m_2 q(p)/p , where q(p) is a concave function which has been determined numerically (see, e.g., <cit.>) and is bounded by q(p) < q_max = q(p_max) ≈ 0.1 , with p_max≈0.4. Moreover, lim_p→ p_critq(p) = 0 and the numerical results indicate that also lim_p→ 0q(p) = 0. Equivalently, the instability timescale in units of the black-hole mass is given by t_GL/GM∼1/GM Im(ω) = 2/q(p)≳ 20 . This means that, with regards to the linear instability, there are three different regimes: * If 2 GM m_2 > p_crit, no linear instability is present. * If 2 GM m_2 ≪ p_max, the single Schwazschild black hole exhibits a linear instability but the exponential growth rate is comparatively slow. * At 2 GM m_2 ≈ p_max, the exponential growth rate of the linear instability is maximized, growing e-fold roughly every t_GL≈ 20 GM. We probe and recover this instability within our numerical evolution. As in <ref>, we initialize a single Schwarzschild black hole. We detect the instability by calculating the spatially averaged Ricci scalar ⟨ℛ⟩_ζ where the spatial average is taken over a cube with x, y, z∈[-ζ,+ζ] and ζ=200 GM extends across the full computational domain. If the instability is present, a non-vanishing ⟨ℛ⟩_ζ is excited by the numerical noise floor in the initial data. We probe the three different regimes identified above, cf. <ref>, and find agreement with the expectation from the linear analysis. In particular, at 2 GM m_2 ≈ p_max, we recover the expected timescale of the linear instability. This also means that we can exclude the presence of further growth modes with a faster timescale. We thus conclude that the unstable monopole mode identified in the linear analysis is indeed the dominant unstable mode. 
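To make the diagnostics of this subsection concrete, the sketch below classifies the linear regime from the quoted criterion 2GM m_2 < p_crit and estimates the growth rate from a time series of the cube-averaged Ricci scalar. It is a rough stand-in, not the analysis code: since q(p) is only known numerically, we merely bound the e-folding time from below with q_max ≈ 0.1 (attained near p_max ≈ 0.4), and the growth-rate fit simply exploits the fact that exponential growth is a straight line in log space.

```python
import numpy as np

def classify_instability(M, m2, G=1.0, p_crit=0.87, q_max=0.1):
    """Return (p, shortest possible e-folding time) or (p, None) if linearly stable."""
    p = 2.0 * G * M * m2
    if p >= p_crit:
        return p, None                        # no linear instability
    return p, 2.0 * G * M / q_max             # t_GL >= 2 GM / q_max ~ 20 GM

def average_ricci(R_field, x, y, z, zeta):
    # <R>_zeta: average of the Ricci scalar over the cube x, y, z in [-zeta, +zeta]
    mask = ((np.abs(x)[:, None, None] <= zeta) &
            (np.abs(y)[None, :, None] <= zeta) &
            (np.abs(z)[None, None, :] <= zeta))
    return R_field[mask].mean()

def fitted_growth_rate(t, avg_R, t_min):
    # exponential growth ~ exp(Im(omega) t) appears as a straight line in log space
    sel = (t >= t_min) & (avg_R > 0)
    slope, _ = np.polyfit(t[sel], np.log(avg_R[sel]), 1)
    return slope

print(classify_instability(M=1.0, m2=0.2))        # p = 0.4: unstable, t_GL >= 20 GM
xs = np.linspace(-300.0, 300.0, 11)
print(average_ricci(np.full((11, 11, 11), 2.0e-9), xs, xs, xs, zeta=200.0))
# synthetic <R>(t): a noise floor plus a mode growing with Im(omega) = 0.05
t = np.linspace(0.0, 400.0, 200)
avg_R = 1.0e-10 * np.exp(0.05 * t) + 1.0e-10
print(fitted_growth_rate(t, avg_R, t_min=150.0))   # ~0.05
```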
As we demonstrate in <ref>, the linear instability breaks Ricci flatness. Nevertheless, we find that the evolution remains numerically stable, cf. <ref> in <ref>. In particular, the constraint violations remain small, even in the presence of a substantial breaking of Ricci flatness. We thus find no indication that well-posed evolution is restricted to the Ricci-flat sector. We also note that exponential growth – as expected from the linear analysis – corresponds to straight lines, given the log-scale in <ref>. Hence, our numerical simulations are in agreement with the linear analysis. Prolonged nonlinear evolution will allow us to clarify the nonlinear fate of the instability. We plan to report on this in future work. Moreover, the numerical evolution can straightforwardly be extended to rotating Kerr initial data. This allows to numerically explore a potential onset of physical instability for spinning black holes, where only partial results are known in the linearized regime <cit.>. Having recovered the Gregory-Laflamme-type instability, from here on, we work in the regime in which this linear instability does not occur. In this regime, we expect that a single (Schwarzschild) black hole is stable. In the following two sections, we investigate physical Ricci-flat perturbations. First, in <ref>, we perturb a single black hole by a gravitational (Teukolsky) wave. Then, in <ref>, we investigate a full binary merger. §.§ Physical perturbations: Teukolsky waves In the previous section, we have recovered the well-known linear instability of Schwarzschild black holes in QG. In particular, we have demonstrated how the instability – if present and with sufficiently fast growth rate – is excited by the numerical noise floor. In the present section, we now separate Ricci-flat physical perturbations from the noise floor. We emphasize that while we consider small perturbations, we nevertheless solve the nonlinear evolution. Constructing initial data which corresponds to physical excitations of modes that break Ricci flatness (as, e.g., the mode that excites the linear instability in the previous section) is thus nontrivial since it requires to solve the modified nonlinear constraints. In contrast to the previous section, we, therefore, focus on Ricci-flat perturbations only. For the latter, we can construct initial data just like in GR, once more, making use of the fact that every Ricci-flat solution to GR is also a solution to QG. There are various ways to construct gravitational-wave initial data, see <cit.> for Teukolsky waves which correspond to purely quadrupolar gravitational-wave excitations and <cit.> for the nonlinear construction of Brill waves which correspond to a tower of multipole modes. We specify to Teukolsky waves and adopt Cartesian coordinates in the following. By construction, Teukolsky waves satisfy the nonlinear momentum constraint. We follow the standard procedure <cit.> to ensure that initial data also satisfies the nonlinear Hamiltonian constraint, i.e., we employ the spatial part of the metric as a conformally related metric in the Hamiltonian constraint and then solve this equation for the conformal factor, i.e., for ϕ in our case (cf. <ref>). More details of the Teukolsky wave initial data can be found in <cit.>. We initialize the black hole at the origin of the computational domain and without initial velocity. The Teukolsky wave perturbation is initialized at 50 GM distance to the black hole, from where it propagates radially in all directions. 
We evolve the resulting simulation up to t=250 GM such that the evolution time encompasses how the Teukolsky wave interacts with the black hole. In order to confirm physical stability of the Ricci-flat subsector, we show the spatially averaged Ricci scalar ⟨ℛ⟩_ζ in <ref>. Clearly, the Ricci scalar remains vanishing up to numerical noise fluctuations. In particular, the latter noise floor is well separated from the amplitude of physical Teukolsky-wave perturbations A_tw which are up to 10^7 times larger, cf. the legend in <ref>. We conclude that, even with significant Ricci-flat perturbations, QG exhibits a stable subsector which mimics vacuum GR. To probe this conclusion further, we now proceed to the fully nonlinear regime of a binary merger. §.§ Stability during nonlinear binary evolution From the astrophysical perspective, one of the most interesting questions is to study the evolution of binary systems and the resulting gravitational wave emission. A continuously growing catalog <cit.> of gravitational-wave events is being detected by the LIGO/Virgo collaboration. At the same, when binary systems come close to merger, they probe the fully nonlinear regime of the theory and may thus reveal otherwise hidden deviations from GR. One of the possible deviations are the quadratic-curvature corrections investigated in this work, see also <cit.> for the evolution of binary systems with the inclusion of other (related) deviations from GR. Eventually, one would like to compare the theoretical predictions for the extracted gravitational-wave form in GR and in QG (or beyond-GR more generally). However, the previous section suggests that the vacuum sector of QG is fully equivalent to the vacuum sector of GR. If this holds true in the fully nonlinear regime, QG can mimic any binary black-hole (BBH) system and, in particular, the respective gravitational-wave forms obtained in GR. Indeed, this is what we find (see below). Hence, the relevant constraints on QG will likely come from non-vacuum systems and we plan to address this in future work. As a specific binary example, we use Bowen-York initial data <cit.>, approximating a binary system which has been matched to the GW150914 LIGO/Virgo event <cit.>. The respective binary parameters are taken from the library <cit.>. Since the physical initial data is Ricci-flat, we initialize all the additional QG variables with vanishing values. We then track the lapse function to extract the motion of the respective black holes. The trajectory comparison in <ref> confirms our expectation that the two evolutions are fully equivalent. Once more, we find evidence that QG exhibits a physically stable Ricci-flat subsector which is fully equivalent to GR. As mentioned above, the obvious next physical question concerns an extension to non-vacuum (and hence non-Ricci-flat) binary systems. In contrast to the present initial data, the fiducial Ricci variables ℛ, ℛ̂, 𝒜, 𝒜_ij, ℬ, and ℬ_ij (see <ref>), corresponding to the massive spin-0 and the massive spin-2 degrees of freedom, will then, presumably, be excited. We thus expect non-vacuum binary systems, e.g., neutron stars, to show appreciable differences to GR and, therefore, expect the respective waveforms to constrain the quadratic-curvature deviations from GR. All of this comes with the question whether new instabilities arise in the non-vacuum sector of QG. We will address the non-vacuum sector in a separate publication. 
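The puncture tracking mentioned above (extracting the black-hole motion from the lapse) can be sketched as follows; this is a simplified stand-in for what a production tracker does, assuming only that each puncture produces a deep local minimum of α on the equatorial slice. A real tracker would refine this with sub-grid interpolation and matching of minima between time steps.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def track_punctures(alpha_slice, x, y, n_bh=2, window=5):
    """Estimate puncture positions from the deepest local minima of the lapse on a 2D slice."""
    local_min = (alpha_slice == minimum_filter(alpha_slice, size=window))
    idx = np.argwhere(local_min)
    idx = idx[np.argsort(alpha_slice[local_min])]       # deepest minima first
    return [(x[i], y[j]) for i, j in idx[:n_bh]]

# synthetic equatorial slice with two lapse "wells" standing in for the punctures
x = y = np.linspace(-10.0, 10.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
alpha = (1.0 - 0.8 * np.exp(-((X - 3.0) ** 2 + Y ** 2))
             - 0.8 * np.exp(-((X + 3.0) ** 2 + Y ** 2)))
print(track_punctures(alpha, x, y))                     # approx (3, 0) and (-3, 0)
```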
§ DISCUSSION We derive a (3+1) evolution system for the nonlinear gravitational dynamics of quadratic-curvature corrections to General Relativity (GR), i.e., for Quadratic Gravity (QG). After verifying numerical stability, we use the nonlinear evolution to establish the nonlinear stability of a Ricci-flat subsector of QG which can mimic GR. §.§ Key results The key to well-posed nonlinear evolution is Noakes' insight <cit.> that the Ricci scalar and traceless Ricci tensor can be treated as fiducial variables representing the additional degrees of freedom. We find that it is possible to solve part of the constraint system algebraically such that we reduce the number of redundant evolution variables. As for GR, in the metric sector, we evolve twelve 1^st-order variables, i.e., the spatial metric γ_ij and the extrinsic curvature K_ij, which represent the two degrees of freedom associated with the massless spin-2 graviton. In the trace sector, the Ricci scalar ℛ (and its 1^st-order variable ℛ̂) corresponds directly to an additional massive spin-0 degree of freedom. In the traceless sector, the spatial part of the traceless Ricci tensor – which we decompose into a 3-trace and a 3-traceless part 𝒜 and 𝒜_ij, respectively – and the respective 1^st-order variables ℬ and ℬ_ij altogether propagate another twelve pieces of initial data. Two of these are redundant, but we do not find an obvious way to remove this redundancy analytically. Overall, these variables correspond to the 5 degrees of freedom of the massive spin-2 mode. The respective evolution system, summarized in <ref>, can be understood as the QG equivalent of the ADM equations for GR, cf. <cit.>. In fact, the evolution system contains the standard ADM equations in which the higher-derivative variables appear as fiducial matter sources. Minimally coupled physical matter sources enter the evolution system via the higher-derivative sector. We then treat the metric sector as in the BSSN formalism <cit.> and verify that the evolution of the resulting system of PDEs is numerically stable. After verifying numerical stability (which we also continue to check throughout all subsequent numerical evolutions, cf. <ref>), we investigate the physical stability of the Ricci-flat (GR vacuum) subsector of the theory, and find: * Our nonlinear results recover a well-known linear instability of Schwarzschild black holes <cit.>. At the linear level, this instability is fully equivalent to the Gregory-Laflamme instability <cit.>. It occurs only if both the spin-2 mass m_2 and the black-hole mass M are sufficiently small (in comparison to the Planck mass), i.e., if 2 GM m_2 < 0.87. * Aside from this linear instability, we find that both physical metric perturbations (e.g., Teukolsky waves as presented in <ref>) and the fully nonlinear Ricci-flat evolution (e.g., a binary merger as presented in <ref>) are physically stable. The latter result is quite nontrivial and suggests that – at least in parameter ranges for which the Gregory-Laflamme-type instability is either not present or negligibly slow – QG exhibits a physically stable Ricci-flat subsector. In particular, this suggests that QG can mimic all of the vacuum physics of GR. §.§ Outlook The presence of a linear instability raises the question of its nonlinear endpoint and the relation to cosmic censorship. (See <cit.> for numerical investigation of the nonlinear fate of the Gregory-Laflamme instability for higher-dimensional black strings.)
More generally, the global stability (i.e., the absence of runaway solutions) and the local stability (i.e., the identification of Lyapunov stable vacua) of Quadratic Gravity are yet to be determined, see also <cit.>. We note that stable motion and ghost-like degrees of freedom may not be mutually exclusive <cit.>. With the nonlinear evolution system at hand, we are well-equipped to numerically investigate these questions in future work. The apparent nonlinear stability of the Ricci-flat sector raises the question how the theory behaves if minimally coupled matter is added to the system. Are there also stable regimes of the non-vacuum theory? If so, is there a stable sector of the theory which deviates appreciably from General Relativity? As our evolution system already includes matter terms, we plan to also address this question in future work. The key difficulty will be to construct consistent (as in obeying all of the modified constraint equations) initial data for the non-Ricci-flat sector of Quadratic Gravity. Overall, the numerical stability of the presented evolution system gives access to the fully nonlinear sector of Quadratic Gravity. Moreover, the presented treatment of quadratic-curvature corrections may also inform how to achieve fully stable nonlinear evolution when curvature corrections of yet higher order are present. In particular, any gravitational theory constructed only from Riemann curvature scalars (i.e., scalars formed solely from contractions of the Riemann curvature tensor, in particular, not involving additional covariant derivatives) still maintains fourth-order equations of motion <cit.>. This suggests that similar techniques to the ones presented here may also apply to a much wider class of gravitational theories, for instance, to the cubic and/or quartic theory <cit.>. *Acknowledgements. We thank Pau Figueras and Frans Pretorius for many helpful discussions. The work leading to this publication was supported by the PRIME programme of the German Academic Exchange Service (DAAD) with funds from the German Federal Ministry of Education and Research (BMBF). AH acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) under Grant No 406116891 within the Research Training Group RTG 2522/1. HL is supported by the LANL ASC Program and LDRD grant 20230555ER. This work used resources provided by the LANL Darwin testbed. Darwin is a research testbed/heterogeneous cluster funded by the Computational Systems and Software Environments subprogram of ASC program. LANL is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S.DOE (Contract No. 89233218CNA000001). This work is authorized for unlimited release under LA-UR-23-23440 § GAUSS-CODAZZI-RICCI EQUATIONS The Gauss-, Codazzi-, and Ricci equations are of purely geometric nature. They determine the foliation and are therefore independent of the dynamics, i.e., valid both in GR and QG. They follow from the (3+1) decomposition of the Riemann tensor, i.e., from R_acbd = ^(3)R_acbd +2 K_a[bK_d]c +4 n_[a K^e_c] K_e[bn_d] +4(D_[aK_c][b)n_d] +4(D_[bK_d][a)n_c] +4 (γ^f_[an_c]γ^g_[bn_d]) n^e(∇_eK_fg) , + 4 a_[an_c]a_[bn_d] - 4 n_[b(D_d]a_[a)n_c] . 
Projecting the decomposition onto the respective temporal and spatial indices (and specifying to the (3+1) coordinate conventions in <ref>) results in the Gauss-, Codazzi-, and Ricci-equation, respectively, i.e., γ_a^e γ_b^f γ_c^g γ_d^h R_efgh = ^(3)R_acbd +2 K_a[cK_d]b , γ_a^e γ_b^f γ_c^g n^d R_efgd = -2 D_[aK_b]c , γ_b^e γ_d^f n^a n^c R_aecf = ℒ_nK_bd + K^e_b K_de + 1/αD_bD_dα . Other contractions with two normal vectors are either equivalent (by the symmetries of the Riemann tensor) to the above or vanish. All contractions with more than two normal vectors also vanish. § DECOMPOSITION OF ∇_A∇_Bℛ Here, we detail the split of ∇_a∇_b ℛ into spatial and temporal part. We start from ∇_a∇_b ℛ = g^c_a ∇_c (g^d_b ∇_d ℛ) = (γ^c_a - n_a n^c) ∇_c [(γ^d_b - n_b n^d) ∇_d ℛ] = + γ^c_a ∇_c (γ^d_b ∇_d ℛ)_(I) + n_a n^c ∇_c (n_b n^d ∇_d ℛ)_(II) - γ^c_a ∇_c(n_b n^d ∇_d ℛ)_(III) - n_a n^c ∇_c (γ^d_b ∇_d ℛ)_(IV) , and look at each term individually, i.e., (I) = γ^c_a ∇_c (γ^d_b ∇_d ℛ) ≡ D_a D_b ℛ , (II) = n_a n^c ∇_c (n_b n^d ∇_d ℛ) = -n_a n_b (n^c ∇_c ℛ̂) -n_a a_b ℛ̂ , (III) = γ^c_a ∇_c(n_b n^d ∇_d ℛ) = - n_b D_a ℛ̂ - γ^c_a (∇_cn_b) ℛ̂ = - n_b D_a ℛ̂ + K_abℛ̂ , where we have introduced the acceleration a_b≡ n^c∇_c n_b and inserted the definition of ℛ̂≡ -n^a∇_aℛ. Finally, term (IV) can be rewritten by commuting covariant derivatives, i.e., (IV) = n_an^c ∇_c (γ^d_b ∇_d ℛ) = n_a(n^c ∇_c γ^d_b)(∇_d ℛ) + n_aγ^d_b n^c ∇_c∇_d ℛ = n_a n^c (n^d∇_cn_b + n_b∇_cn^d)(∇_d ℛ) +n_aγ^d_b n^c ∇_d∇_c ℛ = - n_a a_b ℛ̂ - n_a n_b a_c D^cℛ + n_aγ^d_b ∇_d(n^c∇_c ℛ) - γ^d_b (n_a∇_d n^c)(∇_c ℛ) = - n_a a_b ℛ̂ - n_a n_b a_c D^cℛ - n_aD_bℛ̂ + γ^d_b (n^c∇_d n_a)(∇_c ℛ) - γ^d_b (∇_d γ_a^c)(∇_c ℛ) = - n_a a_b ℛ̂ - n_a n_b a_c D^cℛ - n_aD_bℛ̂ + K_abℛ̂ , where we have twice used that 0=∇_d g^c_a = ∇_d(γ^c_a - n_an^c) = ∇_dγ^c_a - n_a∇_dn^c -n^c∇_dn_a . Note that there are no remaining temporal derivatives in any of these terms. Collecting results, we find ∇_a∇_b ℛ = D_a D_b ℛ + 2 n_(aD_b)ℛ̂ - 2 K_abℛ̂ - n_a n_b (n^c ∇_c ℛ̂) + n_a n_b a_c D^cℛ , which is also given in the main text. § DECOMPOSITION OF ℛ_AB Here, we detail the split of ℛ_ab into spatial and temporal part. We start from ℛ_ab = - γ^d_c∇_d (n^c n^e ∇_e ℛ_ab) _(I) + n_c n^d ∇_d ( n^c n^e ∇_e ℛ_ab) _(II) + n_c n^d ∇_d (γ^ce∇_e ℛ_ab) _(III) - γ^d_c∇_d (γ^ce∇_e ℛ_ab) _(IV) , project the covariant derivatives onto spatial and temporal part, and look at each term individually. In the first two terms, we can introduce the first-order fiducial variable V_ab = - n^c ∇_c ℛ_ab to find (I) = γ^d_c∇_d (n^c n^e ∇_e ℛ_ab) = - γ^d_c∇_d (n^c V_ab) = K V_ab , (II) = n_c n^d ∇_d ( n^c n^e ∇_e ℛ_ab) = - n_c n^d ∇_d ( n^c V_ab) = n^d∇_d V_ab , For the third term, we find (III) = n_c n^d ∇_d (γ^ce∇_e ℛ_ab) = n_c n^d (∇_d γ^ce) (∇_e ℛ_ab) = n_c n^d (∇_d n^c n^e) (∇_e ℛ_ab) = a^e∇_eℛ_ab = a^eγ_e^c∇_cℛ_ab≡ a^c D_cℛ_ab . Here, as well as in the fourth term, (IV) = γ^d_c∇_d (γ^ce∇_e ℛ_ab) ≡ D_cD^cℛ_ab , the spatial covariant derivatives should be understood as a shorthand notation and not yet as a purely spatial quantity. This is important since ℛ_ab is not yet projected and thus contains temporal components. With this subtlety in mind, we collect results and find ℛ_ab = n^c ∇_c V_ab + (D_c + a_c)D^cℛ_ab - K V_ab , which is also given in the main text. § PROJECTIONS OF (N^C∇_Cℛ_AB) AND (N^C∇_C V_AB) Here, we project the left-hand side of the covariant traceless evolutions equations, i.e., (n^c∇_cℛ_ab) and (n^c∇_c V_ab), onto spatial and temporal parts. 
In the following, we go through the ℛ_ab-case, but the V_ab-case proceeds analogously. For the spatial projection, we derive γ_i^aγ_j^b (n^c∇_cℛ_ab) = (n^c∇_cγ_i^aγ_j^bℛ_ab) - (n^c∇_cγ_i^aγ_j^b)ℛ_ab = (n^c∇_c𝒜_ij) + 1/3(n^c∇_cγ_ij𝒜) - 2(n^c∇_cγ_(i^a)γ_j)^bℛ_ab = (n^c∇_c𝒜_ij) + 1/3γ_ij(n^c∇_c𝒜) + 1/3𝒜(n^c∇_cγ_ij) - 2(n^c∇_c n^a n_(i)γ_j)^bℛ_ab = (n^c∇_c𝒜_ij) + 1/3γ_ij(n^c∇_c𝒜) - 2/3𝒜(D_(in_j) + K_ij) - 2(a^a n_(i + n^a a_(i)γ_j)^bℛ_ab = (n^c∇_c𝒜_ij) + 1/3γ_ij(n^c∇_c𝒜) - 2/3𝒜(D_(in_j) + K_ij) - 2 a^c(𝒜_c(in_j) + 1/3γ_c(in_j)𝒜) - 2a_(i𝒞_j) , where we have used the decomposition of ℛ_ab (cf. <ref>) in the second and last equality; the evolution equation for the spatial metric (cf. <ref>) in the fourth equality; and throughout, the decomposition of the metric itself (cf. <ref>). We can independently derive the spatial trace as γ^ab(n^c∇_cℛ_ab) = (n^c∇_cγ^abℛ_ab) - (n^c∇_cγ^ab)ℛ_ab = (n^c∇_c𝒜) - (n^c∇_c n^a n^b)ℛ_ab = (n^c∇_c𝒜) - 2 n^(a a^b)ℛ_ab = (n^c∇_c𝒜) - 2 a^c𝒞_c , which serves as a crosscheck and agrees with the trace of <ref>. Analogously, we derive the mixed projection, i.e., n^aγ^b_d(n^c∇_cℛ_ab) = (n^c∇_c n^aγ^b_dℛ_ab) - (n^c∇_c n^a)γ^b_dℛ_ab - n^a(n^c∇_cγ^b_d)ℛ_ab = (n^c∇_c 𝒞_d) - a^a(𝒜_ad - 1/3γ_ad𝒜) - n^a(n^c∇_c n^b n_d)ℛ_ab = (n^c∇_c 𝒞_d) - a^a(𝒜_ad - 1/3γ_ad𝒜) - n^a(a^b n_d + n^b a_d)ℛ_ab = (n^c∇_c 𝒞_d) - a^a(𝒜_ad + 2/3γ_ad𝒜) - n_d a^c 𝒞_c , and the temporal projection (which – by construction – agrees with the spatial trace, cf. <ref>, such that the 4D trace vanishes), i.e., n^an^b(n^c∇_cℛ_ab) = (n^c∇_c n^a n^bℛ_ab) - (n^c∇_c n^a n^b)ℛ_ab = (n^c∇_c𝒜) - 2 a^c𝒞_c . § BSSN EQUATIONS For completeness, we provide the implemented BSSN equations that we use to evolve the metric sector of the theory. Our conventions agree with <cit.>. With a split of the conformal metric and the extrinsic curvature into trace and traceless part, i.e., γ̃_ij = e^-4ϕγ_ij , with ϕ = ln(γ)/12 , Ã_ij = e^-4ϕ(K_ij - 1/3γ_ijK) , the York-variant of the ADM equations (cf. <ref>) can be recast into BSSN form, i.e., ∂_t ϕ = - 1/6α K + β^i ∂_i ϕ + 1/6∂_i β^i ∂_t K = - γ^ij D_j D_i α + α(Ã_ijÃ^ij + 1/3 K^2) + 1/2^2 (ρ + S) + β^i ∂_i K , ∂_t γ̃_ij = - 2 αÃ_ij + β^k ∂_k γ̃_ij + γ̃_ik∂_j β^k + γ̃_kj∂_i β^k - 2/3γ̃_ij∂_k β^k , ∂_t Ã_ij = e^- 4 ϕ[ - ( D_i D_j α )^TF +α(^(3)R_ij^TF - 1/^2S_ij^TF) ] + β^k ∂_k Ã_ij + Ã_ik∂_j β^k + Ã_kj∂_i β^k - 2/3Ã_ij∂_k β^k + α (K Ã_ij - 2 Ã_ilÃ^l_ j) , ∂_t Γ̃^i = 2α( Γ̃^i_jkÃ^kj - 2/3γ̃^ij∂_j K - 1/^2γ̃^ijS_j + 6 Ã^ij∂_j ϕ) - 2 Ã^ij∂_j α + β^j ∂_j Γ̃^i - Γ̃^j ∂_j β^i + 2/3Γ̃^i ∂_j β^j + 1/3γ̃^li∂_l∂_jβ^j + γ̃^lj∂_j∂_lβ^i . <ref> (as well as <ref>) evolve the trace part (as well as the traceless part) of the metric and extrinsic curvature. They are obtained from the York-ADM equations by tracing and subtracting the trace, respectively. Superscripts ^TF denote trace-free parts. <ref> is introduced to remove 2^nd-order mixed spatial derivatives in R_ij^TF of <ref> by extending the system. The explicit expression for R_ij^TF in terms of the conformal connection functions Γ̃^i can be found, e.g., in <cit.>. Their definition Γ̃^i ≡γ̃^jkΓ̃^i_jk = - ∂_jγ̃^ij serves as an additional constraint. Initial data is physical only if it also obeys <ref>. Finally – and crucially with regards to numerical stability – the shift constraint has been used in <ref> to remove spatial derivatives of Ã_ij. § CONVERGENCE TESTS All simulations were performed under LANL supercomupter Darwin. 
Darwin is a very heterogeneous cluster with a wide variety of hardware available, including x86, Power PC and ARM CPU architectures, systems with terabytes of memory, and a variety of GPUs and other accelerators. In particular, we choose a partition whose nodes each have dual-socket 2.1 GHz 18-core Intel Broadwell E5-2695v4 processors with 45 MB of cache and 128 GB of RAM. We perform standard convergence tests. To be specific, the self-convergence ratio is given by 𝒞_self = log_2 ||𝐅_h_i - 𝐅_h_i+1||_q/||𝐅_h_i+1 - 𝐅_h_i+2||_q , where 𝐅 is the state vector for all evolution variables, and || · ||_q denotes a generic norm. Convergence tests have to be performed with respect to a specific norm that is suitable for the given system of evolution equations. In the following, we denote with || · ||_H_1 the H_1 norm. This norm is computed in a discrete approximation that replaces the respective continuum norm <cit.>. Similarly, the exact convergence ratio, with 𝐅_exact = 0, can be computed as 𝒞_exact = log_2 ||𝐅_h_i - 𝐅_exact||_q/||𝐅_h_i+1 - 𝐅_exact||_q = log_2 ||𝐅_h_i ||_q/||𝐅_h_i+1 ||_q . Given the employed fourth-order scheme, the expected convergence rate is four in both cases. A more detailed discussion of convergence tests is given in <cit.>. In <ref>, we show the self-convergence test for Schwarzschild spacetime (upper) and the exact convergence test. In both cases, we find the expected fourth-order convergence ratio, which matches the implemented fourth-order discretization scheme. For completeness, we show plots of the constraint (i.e., the l2-norm of the Hamiltonian constraint in <ref>) for all of our numerical simulations: <ref> refers to the linear instability in <ref>, see also <ref>; <ref> refers to the Teukolsky wave test in <ref>, see also <ref>; <ref> refers to the linear instability in <ref>, see also <ref>. Clearly, in all cases, the constraint violations remain small and even decay.
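For reference, the convergence ratios defined above are straightforward to compute once the fields from three resolutions are restricted to the common (coarsest) grid points. The sketch below uses a plain discrete L2 norm for brevity (the H_1 norm used in the text would add finite-difference gradient terms) and verifies the expected rate on synthetic data carrying a fourth-order error term; it is an illustration, not the analysis script.

```python
import numpy as np

def discrete_l2(f, h):
    # simple discrete L2 norm on a uniform grid with spacing h
    return np.sqrt(np.sum(f ** 2) * h ** 3)

def self_convergence(F_h, F_h2, F_h4, h):
    """C_self = log2( ||F_h - F_{h/2}|| / ||F_{h/2} - F_{h/4}|| ), restricted to the coarse grid."""
    num = discrete_l2(F_h - F_h2[::2, ::2, ::2], h)
    den = discrete_l2(F_h2[::2, ::2, ::2] - F_h4[::4, ::4, ::4], h)
    return np.log2(num / den)

def fake_solution(N):
    # synthetic "numerical" data: smooth exact field plus a C * h^4 error term
    x = np.linspace(0.0, 1.0, N)
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
    exact = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y) * np.sin(2 * np.pi * Z)
    h = x[1] - x[0]
    return exact + 0.5 * h ** 4 * np.cos(X + Y + Z), h

F1, h1 = fake_solution(17)
F2, _ = fake_solution(33)
F3, _ = fake_solution(65)
print(self_convergence(F1, F2, F3, h1))   # ~4, matching the fourth-order scheme
```

The exact-convergence ratio follows the same pattern, with the exact solution (here vanishing) subtracted instead of the next-finer resolution.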
http://arxiv.org/abs/2306.09143v1
20230615140327
Spontaneous dimerization, spin-nematic order, and deconfined quantum critical point in a spin-1 Kitaev chain with tunable single-ion anisotropy
[ "Qiang Luo", "Shijie Hu", "Jinbin Li", "Jize Zhao", "Hae-Young Kee", "Xiaoqun Wang" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
GBKsong [][email protected] College of Physics, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China Key Laboratory of Aerospace Information Materials and Physics (NUAA), MIIT, Nanjing, 211106, China Beijing Computational Science Research Center, Beijing 100084, China College of Physics, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China Key Laboratory of Aerospace Information Materials and Physics (NUAA), MIIT, Nanjing, 211106, China School of Physical Science and Technology & Key Laboratory for Magnetism and Magnetic Materials of the MoE, Lanzhou University, Lanzhou 730000, China Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, China [][email protected] Department of Physics, University of Toronto, Toronto, Ontario M5S 1A7, Canada Canadian Institute for Advanced Research, Toronto, Ontario, M5G 1Z8, Canada [][email protected] School of Physics, Zhejiang University, Hangzhou 310058, China The Kitaev-type spin chains have been demonstrated to be fertile playgrounds in which exotic phases and unconventional phase transitions are ready to appear. In this work, we use the density-matrix renormalization group method to study the quantum phase diagram of a spin-1 Kitaev chain with a tunable negative single-ion anisotropy (SIA). When the strength of the SIA is small, the ground state is revealed to be a spin-nematic phase which escapes conventional magnetic order but is characterized by a finite spin-nematic correlation because of the breaking spin-rotational symmetry. As the SIA increases, the spin-nematic phase is taken over by either a dimerized phase or an antiferromagnetic phase through an Ising-type phase transition, depending on the direction of the easy axis. For large enough SIA, the dimerized phase and the antiferromagnetic phase undergo a “Landau-forbidden" continuous phase transition, suggesting new platform of deconfined quantum critical point in spin-1 Kitaev chain. Spontaneous dimerization, spin-nematic order, and deconfined quantum critical point in a spin-1 Kitaev chain with tunable single-ion anisotropy Xiaoqun Wang July 31, 2023 =============================================================================================================================================== § INTRODUCTION The celebrated Kitaev model on the honeycomb lattice <cit.> and its multitudinous variants offer unprecedented opportunities for our understanding of exotic states of matter arising from bond-directional exchange couplings <cit.> and unconventional quantum phase transitions (QPTs) that are beyond the Landau-Ginzburg-Wilson (LGW) paradigm <cit.>. It is rigorously demonstrated that the ground state of the Kitaev honeycomb model is a quantum spin liquid (QSL) with fractionalized excitations consisting of itinerant majorana fermions and localized ℤ_2 vortices (visons) <cit.>. The quantum fluctuation can be greatly enhanced by including further nearest-neighbor interactions and off-diagonal exchanges, giving rise to emergent phases such as the vison crystal <cit.>, QSLs of different nature <cit.>, nematic paramagnet that breaks lattice rotational symmetry <cit.>, and spin-flop phase which can be interpreted as superfluid phase <cit.>. At the same time, smoking-gun signals of the topological QPTs are observed by the change of Chern number and the onset of the peak in the thermal Hall conductivity <cit.>. 
While substantial efforts have been devoted to studying extended Kitaev models in two dimensions, many intriguing phenomena regarding the collective behaviors of the excitations remain elusive because of the numerical challenges and limitations of different computational methods. One of the prominent examples is the antiferromagnetic (AFM) Kitaev model subject to a [111] magnetic field, which is shown to have an intermediate region between the low-field non-Abelian QSL and the high-field polarized phase <cit.>. The plausible perspective which asserts that the intermediate region is a gapless QSL with spinon Fermi surface has been challenged by a recent study, where a different scenario of gapped QSL with a Chern number of 4 is proposed <cit.>. Also, it is revealed by another work that the intermediate region is composed of two gapped phases with finite Chern number <cit.>. To reconcile these seemingly conflicting results, attempts have been made on the spin-ladder analogue in which a staggered chiral phase as well as a few possible incommensurate phases appears <cit.> and on the spin-chain limit where a chiral soliton phase is observed <cit.>. Therefore, the (quais-) one-dimensional Kitaev-type spin chains serve as fruitful grounds to offer insights into the enigmatic phases in higher dimensions. Over the years, the Kitaev-type spin chains have been the focus of intensive research efforts since they can harbor interesting phases and unconventional QPTs <cit.>. In the Kitaev-Γ chain where Γ interaction is an off-diagonal exchange coupling <cit.>, a magnetically ordered state that displays a spin-nematic correlation occurs in the neighbor of the dominant AFM Kitaev interaction <cit.>. Thus, these studies provide a promising way towards pursuing the spin-nematic order in the models with bond-directional exchanges. The spin-nematic state is characterized by a quadrupolar order in which the spin-rotational symmetry is broken whereas both translational and time-reversal symmetries are retained, constituting the magnetic analogue of liquid crystal <cit.>. Despite an active search for several decades, theoretical proposals of the spin-nematic order is rare and experimental detection has been hindered by the fact that the spin-nematic order parameter is not coupled to the external magnetic field directly <cit.>. On the other hand, a continuous QPT between two magnetically ordered states with different symmetry breaking is reported in the Kitaev spin chain with multiple-spin interaction <cit.>. Such an exotic transition is forbidden by the conventional LGW paradigm, providing another concrete example of the deconfined quantum critical point (DQCP) in one dimension <cit.>. In contrast to the spin-1/2 Kitaev-type chains that have gained much attention, the rich physics of their spin-1 counterparts remains hitherto largely unexplored. For example, although it is revealed that the spin-1 Kitaev chain can host unusual excitations and display an alluring double-peak structure in its specific heat <cit.>, nature of its ground state has not been understood thoroughly. To this end, in this paper we consider a spin-1 Kitaev chain with a negative single-ion anisotropy (SIA) whose easy axis varies from [001] direction to [110] direction, passing through the [111] direction. We propose that the Kitaev phase is a sort of spin-nematic phase that can further be classified into two kinds, depending on the structures of their low-lying excited states. 
In the presence of an overwhelmingly dominant SIA, we find a continuous QPT between the dimerized phase and the AFM phase which break different discrete symmetries, showing that a DQCP is likely realized in the spin-1 Kitaev-type chain. The remainder of the paper is constructed as follows. In Sec. <ref> we construct the theoretical model, introduce the numerical methods, and show the resultant quantum phase diagram. Section <ref> is devoted to presenting the nature of spin-nematic phase and relevant QPTs, which include QPTs from the dimerized (AFM) phase to the spin-nematic phase for the [001]-type ([111]-type) SIA, behavior of the four-spin correlation function in the spin-nematic phase, and emergence of the DQCP in the continuous dimer-AFM transition. Finally, a brief conclusion is stated in Sec. <ref>. § MODEL AND METHOD We consider the spin-1 Kitaev chain with a tunable SIA whose Hamiltonian reads ℋ = K ∑_i=1^L/2(S_2i-1^xS_2i^x + S_2i^yS_2i+1^y) + D ∑_i=1^L[sinϑ/√(2)(S_i^x+S_i^y)+cosϑ S_i^z]^2, where S_i^γ (γ = x, y, z) are the three components of the spin operator at the ith site, and L is the total length of the chain which is a multiple of 4. The first term is the Kitaev (K) interaction with alternating x- and y-type bonds. The second term represents the SIA, in which D < 0 is the strength and ϑ∈ [0, π/2] determines the direction of the easy axis. The SIA term is reduced to the simple form (S_i^z)^2 and (S_i^x+S_i^y)^2/2, respectively, when ϑ = 0 and π/2, while it exhibits the form (S_i^c)^2 with S_i^c = (S_i^x+S_i^y+S_i^z)/√(3) when ϑ = tan^-1(√(2)) ≈ 0.3041π. Although the full SU(2) spin-rotational symmetry is absent, the Hamiltonian in Eq. (<ref>) respects a time-reversal symmetry 𝒯 (S_i^γ↦ -S_i^γ) and a link-inversion symmetry I (S_i^γ↦ S_L+1-i^γ). In light of a proper basis rotation (S_i^x, S_i^y, S_i^z)^T = R̂_zR̂_y · (S̃_i^x, S̃_i^y, S̃_i^z)^T with R̂_z = [ [ 1/√(2) -1/√(2) 0; 1/√(2) 1/√(2) 0; 0 0 1; ]], R̂_y = [ [ cosϑ 0 sinϑ; 0 1 0; -sinϑ 0 cosϑ; ]], it is further revealed to have a ℤ_2^x̃×ℤ_2^z̃ dihedral symmetry D_2 where ℤ_2^x̃/z̃ stands for the spin inversion in x̃/z̃ direction. Due to the bond-alternating nature of the Kitaev interaction, the model possesses a two-site translational symmetry T_2 apparently. However, at least in the limit cases where ϑ = 0 and π/2, ℋ enjoys an one-site translational symmetry T_1. This can be seen by exerting the following unitary transformation on the even sites: (S_2i^x, S_2i^y, S_2j^z) ↦ (S_2i^y, S_2i^x, -S_2i^z) <cit.>. Consequently, the Kitaev term takes the form ∑_i S_i^xS_i+1^y while the SIA term remains unchanged, both of which are translationally invariant. In fact, the SIA is naturally expected in all high-spin materials under a slight distortion from their ideal structures <cit.>, and it has been identified in various Kitaev materials like CrI_3, CrGeTe_3 and CrSiTe_3 <cit.>. Meanwhile, the role played by the [001]-type and [111]-type SIAs in the spin-1 and spin-3/2 Kitaev honeycomb models has been studied extensively <cit.>. In the large-S limit, it is revealed that the SIA can stabilize an interesting triple-meron crystal consisting of three merons, leading to a finite topological number and a quantized topological Hall conductance <cit.>. These studies imply that Eq. (<ref>) should also harbour a rich physics. In what follows we set K = 1 as the energy unit unless stated otherwise. The quantum phase diagram is mapped out by the density-matrix renormalization group (DMRG) method <cit.>. 
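Before turning to the DMRG setup, the model of Eq. (<ref>) can be made concrete with a small exact-diagonalization cross-check. The sketch below builds the spin-1 operators and the Hamiltonian on a tiny periodic chain and returns its lowest eigenvalues; it complements, but does not replace, the DMRG used in this work, and all names are ours.

```python
import numpy as np

# spin-1 operators in the S^z basis (m = 1, 0, -1)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1).astype(complex)
Sx = 0.5 * (Sp + Sp.conj().T)
Sy = -0.5j * (Sp - Sp.conj().T)
I3 = np.eye(3, dtype=complex)

def site_op(op, site, L):
    # embed a single-site operator at `site` (0-based) into the L-site Hilbert space
    out = np.array([[1.0 + 0j]])
    for j in range(L):
        out = np.kron(out, op if j == site else I3)
    return out

def hamiltonian(L, K, D, theta, pbc=True):
    H = np.zeros((3 ** L, 3 ** L), dtype=complex)
    for i in range(L // 2):
        a, b = 2 * i, 2 * i + 1                      # x-bond (2i-1, 2i) in 1-based labels
        H += K * site_op(Sx, a, L) @ site_op(Sx, b, L)
        c, d = 2 * i + 1, (2 * i + 2) % L            # y-bond; wraps around under PBC
        if pbc or d != 0:
            H += K * site_op(Sy, c, L) @ site_op(Sy, d, L)
    # single-ion anisotropy with tunable easy axis
    n_dot_S = np.sin(theta) / np.sqrt(2.0) * (Sx + Sy) + np.cos(theta) * Sz
    for j in range(L):
        H += D * site_op(n_dot_S @ n_dot_S, j, L)
    return H

H = hamiltonian(L=6, K=1.0, D=-0.3, theta=np.arctan(np.sqrt(2.0)))   # [111]-type SIA
print(np.linalg.eigvalsh(H)[:4])    # ground state and low-lying excitations of the small cluster
```

The low-lying spectrum of such tiny clusters is of course strongly finite-size affected; it merely serves as a sanity check of the Hamiltonian construction.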
In the DMRG calculation we adopt both open (OBC) and periodic (PBC) boundary conditions alternatively, depending on the prominent issue that matters. To improve the numerical accuracy, 2000 block states are kept in order to maintain a small truncation error of ∼10^-7 or less. The sweep is executed twelve times basically, with the potential to increase by several times in the vicinity of the quantum critical point. When necessary, the transfer-matrix renormalization group (TMRG) method is also employed to study the finite-temperature evolution of physical quantities <cit.>. During the calculation, we set the Trotter-Suzuki step τ = 0.01 and the block states m = 1024. Figure <ref> illustrates the quantum phase diagram in the region of D ∈ [-1.5, 0.0] and ϑ∈ [0, π/2] in the spin-1 Kitaev chain with tunable SIA. Firstly, by calculating the four-spin correlation function pertaining to the spin-nematic order, we find that the small-D region, including the Kitaev limit whose ground state is previously termed Kitaev phase <cit.>, exhibits a nonzero spin-nematic correlation over the vanishing magnetic moment. This area is thus arguably a spin-nematic phase that has long been pursued in the past decades <cit.>. The spin-nematic phase has a unique ground state, above which a finite excitation gap is acquired. According to the degeneracy of its first excited state, however, it can be further divided into two parts where a crossover occurs between them. Secondly, the dimerized phase and the AFM phase, which break translational symmetry and dihedral and time-reversal symmetries, respectively, appear as the strength of the SIA increases. When the strength of the SIA is moderate, the spin-nematic order is intervened between the two, in accordance with the fact that the spin-nematic order preserves the translational symmetry and time-reversal symmetry. Last but not the least, a continuous QPT between the dimerized phase and the AFM phase, which is advocated by a central charge of 1, is identified if the SIA is overwhelmingly dominant. Hence, a DQCP is likely realized in the spin-1 Kitaev-type chain. § RESULTS AND DISCUSSION §.§ Dimerized phase and AFM phase The dimerized phase and the AFM phase are two representative symmetry-breaking phases that have been widely recognized in the field of quantum magnetism. For concreteness, we consider the Kitaev chain in the [001]-type ([111]-type) SIA to study the dimerized phase (AFM phase) and its transition to the spin-nematic phase. The dimerized phase breaks the translational symmetry spontaneously, leading to a gapped ground state with a two-fold degeneracy. In the spin-1 Heisenberg chain, the dimerized phase is demonstrated to be realized by adding competing biquadratic interaction <cit.>, three-spin interaction <cit.>, or spatial alternation <cit.>. Nevertheless, the SIA itself cannot induce the dimerized phase <cit.>. In the Kitaev chain with a [001]-type SIA, however, the intrinsic bond-directional interaction opens the possibility of realizing the dimerized phase. Since there is only one site in each unit cell due to the translational symmetry T_1, a natural way to check for the dimerized phase is by measuring the dimer order parameter defined as O = lim_L→∞ O_L with O_L = |⟨ S_L/2-1^x S_L/2^x⟩ - ⟨ S_L/2^y S_L/2+1^y⟩|. Thus, the dimerized phase occurs as long as the bond strength of |⟨ S_L/2-1^x S_L/2^x⟩| and |⟨ S_L/2^y S_L/2+1^y⟩| differ. 
Figure <ref> shows the finite-temperature TMRG calculation of the bond strength |⟨ S_i^γ S_j^γ⟩| (γ = x, y) with D = -0.4 and -0.8. As the temperature T evolves from 10 to 0.0033, the curves of the bond strength between neighboring x bond (red dot-dashed line) and y bond (blue dotted line) overlap persistently when D = -0.4. By contrast, there is a sharp differentiation of the bond strength as long as the temperature is lower than ∼0.01 when D = -0.8, indicating a spontaneous dimerization thereof. In the ultra-low temperature region (T < 0.01), the bond strength is insensitive to the temperature and the fact that strength of the weak bond strength remains finite down to the zero temperature reveals a partially dimerized phase. To study the nature of the QPT, we use the DMRG method to calculate the dimer order parameter O_L for different length L. According to the finite-size scaling ansatz <cit.>, the dimer order parameter O_L satisfies the formula O_L(D) ≃ L^-β/ν f_O(|D-D_c|L^1/ν), where β and ν are critical exponents of order parameter and correlation length, and f_O(·) is a nonuniversal function that relies on O_L. To extract the critical exponents, we adjust parameters μ_1,2 until we see the intersection of O_L L^μ_1 as a function of D and the collapse of O_L L^μ_1 as a function of |D-D_c|L^μ_2 for all length L. The critical exponents are then given by β = μ_1/μ_2 and ν = 1/μ_2. Figure <ref> shows the finite-size scaling result of the dimer order parameter O_L with L = 128, 192, 256, 320, and 384. By using of the least-square fitting method, we obtain the quantum critical point D_c = -0.6551(2), and the critical exponents β = 0.123(4) and ν = 0.98(3). These values are consistent with the critical exponents of the Ising transition which says that β = 1/8 and ν = 1, suggesting that the transition between the dimerized phase and the spin-nematic phase belongs to the Ising universality class. Before proceeding further, we wish to note that the dimer order parameter in Eq. (<ref>) is still suitable even though the easy-axis direction of the SIA is away from the [001] direction. After applying the local transformation (S_2i^x, S_2i^y, S_2j^z) ↦ (S_2i^y, S_2i^x, -S_2i^z), the Kitaev interaction is translationally invariant while the SIA term takes the form [sinϑ/√(2)(S_i^x+S_i^y)-(-1)^icosϑ S_i^z]^2. In the dimerized phase, S_i^z is only weakly coupled to S_i^x and S_i^y when compared to the dominating (S_i^z)^2. In addition, although the intensity of (S_i^x)^2 and (S_i^y)^2 are different, all the components of S_i^αS_i^β (α, β = x, y, z) are uniformly distributed, suggesting an effective one-site translational symmetry. Next, we turn to study the AFM phase which is known to break the dihedral symmetry and time-reversal symmetry and exhibits a gapped doubly-degenerate ground state. The magnetic moments along the three spin directions are all finite except for the case where ϑ = π/2. Due to symmetric structures of the Kitaev interaction and SIA, the x and y components of magnetic moments are equal but are larger than that of the z component. We apply an staggered pinning field of value 𝒪(1) at two end sites to slightly break the degenerate manifold. The nondegenerate ground state thus displays a well-behaved magnetic pattern, and the magnetic order parameter can be calculated as M = lim_L→∞ M_L with M_L = √((⟨ S_L/2^x⟩)^2 + (⟨ S_L/2^y⟩)^2 + (⟨ S_L/2^z⟩)^2). 
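The same collapse procedure is applied to M_L below; as an illustration of how the exponents are extracted in practice, the following sketch rescales curves of an order parameter for several chain lengths and measures the quality of the collapse (the synthetic data and the scaling function f are placeholders, not the DMRG data).

```python
import numpy as np

def collapse_cost(data, Dc, mu1, mu2):
    """Spread between rescaled curves O_L * L^mu1 versus |D - Dc| * L^mu2.

    `data` maps chain length L to (D values, O_L values); a small cost means a good
    collapse, with beta = mu1/mu2 and nu = 1/mu2 at the optimum.
    """
    xs, ys = [], []
    for L, (D, O) in data.items():
        xs.append(np.abs(D - Dc) * L ** mu2)
        ys.append(O * L ** mu1)
    lo = max(x.min() for x in xs)
    hi = min(x.max() for x in xs)
    grid = np.linspace(lo, hi, 50)
    curves = [np.interp(grid, np.sort(x), y[np.argsort(x)]) for x, y in zip(xs, ys)]
    return np.mean(np.var(np.array(curves), axis=0))

def f(u):
    return 0.5 * np.exp(-u)      # placeholder scaling function

beta, nu, Dc = 0.125, 1.0, -0.6551
Ds = np.linspace(Dc - 0.05, Dc + 0.05, 41)
data = {L: (Ds, L ** (-beta / nu) * f(np.abs(Ds - Dc) * L ** (1.0 / nu)))
        for L in (128, 192, 256)}
print(collapse_cost(data, Dc, beta / nu, 1.0 / nu))   # ~0 for the true exponents
print(collapse_cost(data, Dc, 0.3, 1.5))              # clearly worse for wrong exponents
```

In practice one would minimize such a cost over (D_c, μ_1, μ_2), e.g., with scipy.optimize.minimize, followed by the least-square refinement described above.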
Figure <ref> shows the finite-size scaling result of the magnetic order parameter M_L (L = 128, 192, and 256) in the Kitaev chain with a [111]-type SIA. Following a similar procedure mentioned above, we get the quantum critical point D_c = -0.6035(2), and the critical exponents β = 0.127(3) and ν = 0.99(2), demonstrating that the transition between the AFM phase and the spin-nematic phase also falls in the Ising universality class. To further verify the continuous QPT, we calculate the lowest excitation gaps Δ_1,2 = E_1,2-E_0 in the vicinity of the quantum critical point. Here, E_0,1,2 are the three lowest energy levels in the energy spectrum, with E_0 being the ground-state energy. In the calculation we use the PBC to remove the boundary effect, and the ground state of the spin-nematic phase is unique while it is doubly degenerate in the AFM phase. Behaviors of the excitation gaps Δ_1 (open symbols) and Δ_2 (filled symbols) as a function of D are shown in Fig. <ref>. Deep in the AFM phase, Δ_1 is vanishingly small and Δ_2 is robust against the chain length. As the SIA approaches the quantum critical point, the finite-size effect is significant since Δ_2 decreases apparently with the increase of the system size. The inset of Fig. <ref> shows the evolution of Δ_2 as a function of 1/L for a series of chain length L ranging from 24 to 144. The linear extrapolation gives an estimate of 0.002(5) for Δ_2, corroborating a continuous QPT with a closure of the lowest excitation gap. We wish to comment on the influence of the sign of the Kitaev interaction on the QPT. For the [001]-type SIA with ϑ = 0, the transformation of (S_i^x, S_i^y, S_i^z) ↦ (-S_i^x, -S_i^y, S_i^z) on all even sites implies that ℋ(K, D) = ℋ(-K, D), showing that the sign of the Kitaev interaction does not alter the position of transition point. By contrast, for the [111]-type SIA with ϑ = tan^-1(√(2)), ℋ(K, D) and ℋ(-K, D) are no longer equivalent. While the QPT is still of the Ising universality class when the Kitaev interaction is ferromagnetic, the transition point is -0.5531(2), which is larger than that of the AFM case. §.§ Spin-nematic phase The spin-nematic order is an intriguing phase which lacks the conventional magnetic order but breaks the spin-rotational symmetry, giving rise to a nonzero quadrupolar order and possessing unusual excitations <cit.>. Therefore, emergence of the spin-nematic order is often related to the geometrical frustration and competing interactions which enhance quantum fluctuations <cit.>. Hitherto, several different scenarios have been proposed to theoretically realize the spin-nematic phase. The spin-1/2 ferromagnetic chain with frustrated next-nearest-neighbor interaction is perhaps the most realistic model since it is believed to characterize a couple of quasi-one-dimensional magnets like LiCuVO_4 <cit.>. According to the proposal by Zhitomirsky and Tsunetsugu <cit.>, just below the saturation field, the gapped magnon excitations and the attractive interaction between them enforce the energy of the two-magnon bound state is lower than that of the single-magnon state, thereby favoring the spin-nematic phase <cit.>. Theoretical analysis and numerical calculation suggest that the spin-nematic phase can be stabilized in spin-1 chains with the biquadratic interaction <cit.>. In addition, the spin-nematic phase is also demonstrated to manifest itself in spin-1 chains whose Hamiltonians do not have U(1) symmetry <cit.>. 
We start by checking for the possible existence of vector spin chirality κ̂_i = (S_i ×S_i+1)_z = i(S_i^+S_i+1^- - S_i^-S_i+1^+)/2, which is the z component of the vector product of two adjacent spins along the chain <cit.>. The chiral order preserves the time-reversal symmetry but breaks the inversion symmetry. The chiral-chiral correlation function is defined as K(i, j) = ⟨κ̂_iκ̂_j⟩, in which i and j are site indices and we assume that r ≡|j-i|→∞. To be concrete, we set (i, j) = (l_0, l_0 + r) with l_0 = L/2 and calculate the correlator K(l_0, l_0+r) at two representative points, see Fig. <ref>(a). It is observed that K(l_0, l_0+r) decays rapidly with the distance r and tends to zero, indicating that the chiral order is not favored in the ground state. On the other hand, the spin-nematic order can be confirmed by the spin-nematic order parameter 𝒪_SN, which is extracted from the four-spin correlation function <cit.> Q_δ(i, j) = ⟨ S_i^+ S_i+δ^+ S_j^- S_j+δ^-⟩≃𝒪_SN^2 e^-iϕ. Here, δ is fixed as 1 throughout the paper, and ϕ is a phase factor that varies as the interaction strength changes. The real (blue color) and imaginary (cyan color) parts of Q_1(r) = Q_1(l_0, l_0+r) at a representative point with D = -0.2 and ϑ = tan^-1(√(2)) are shown in Fig. <ref>(b). It is observed that Q_1(r) has a strong even-odd effect depending on the parity of r. When r is even, Q_1(r) is real, as Im Q_1(r) is vanishingly small. By contrast, both Re Q_1(r) and Im Q_1(r) saturate to finite values for odd r. In either case, the fact that the spin-nematic order parameter 𝒪_SN is nonzero manifests the existence of the spin-nematic order. Of note is that the spin-rotational symmetry pertaining to the spin-nematic order is explicitly broken in the Hamiltonian. We proceed to focus on the Kitaev chain with a [111]-type SIA to study the behavior of the spin-nematic order parameter. The real (blue color) and imaginary (cyan color) parts of Q_1(r ≫ 1) for a chain of length L = 128 are shown in Fig. <ref>(a). Irrespective of the strength of |D|, Im Q_1(r) vanishes when r is even, and is finite, except for the limiting case D = 0 and an accidental point at D ≈ -0.49, when r is odd. In the former case the phase factor ϕ is 0 while in the latter case it is nontrivial. The left axis of Fig. <ref>(b) illustrates the amplitude of Q_1(r) when r is even (pink square) and odd (brown diamond), respectively. The fact that all the data points overlap indicates that 𝒪_SN^2 is uniform and can be safely extracted from either case. The right axis of Fig. <ref>(b), on the other hand, shows the behavior of the phase factor ϕ as D changes. It decreases from π in the pure Kitaev limit, where D is zero, to 0 when |D| is large enough that the system is deep in the AFM phase. A nontrivial observation is that the phase factor ϕ undergoes a rapid change near the quantum critical region, indicating that it may serve as a tool to probe the QPT. To reveal the relation between the phase factor ϕ and quantum criticality, we show the derivative of ϕ with respect to the tuning parameter D in Fig. <ref>(c). The quantity ∂ϕ/∂ D displays a singular peak in the vicinity of the quantum critical point D_c, with the height of the peak growing and the position of the peak approaching D_c as the chain length L increases. Thus, ∂ϕ/∂ D is predicted to diverge as L→∞ and should in principle display a scaling behavior.
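A minimal sketch of how 𝒪_SN and the phase factor ϕ can be read off from the complex correlator, and how the derivative ∂ϕ/∂ D locates its peak, is given below. The curve Q_1(D) used here is a smooth toy profile (amplitude and phase chosen by hand) that only mimics the qualitative behavior described above; the actual DMRG data should be substituted.

```python
import numpy as np

# ---- toy stand-in for Q_1(r >> 1, r odd) on a grid of D; substitute the DMRG data ----
D = np.linspace(-1.0, -0.2, 161)
phi_toy = 0.5 * np.pi * (1.0 + np.tanh((D + 0.60) / 0.05))   # pi at small |D|, 0 at large |D|
Q1 = 0.04 * np.exp(-1j * phi_toy)                            # Q_1 ~ O_SN^2 * exp(-i*phi)

# ---- extraction of the nematic amplitude and the phase factor ----
O_SN = np.sqrt(np.abs(Q1))                    # since |Q_1| ~ O_SN^2
phi = np.mod(-np.angle(Q1), 2.0 * np.pi)      # phase factor in [0, 2*pi)
dphi_dD = np.gradient(phi, D)                 # numerical derivative d(phi)/dD
D_peak = D[np.argmax(np.abs(dphi_dD))]
print(f"peak of |dphi/dD| at D = {D_peak:.3f}")   # tracks the quantum critical point
```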
We note in passing that derivative of the geometric Berry phase associated with the many-body ground state has already been demonstrated to exhibit universality in the neighbor of the quantum critical point <cit.>. Whereas the spin-nematic phase is characterized by a unique ground state under PBC, its excited states are quite involved and display distinct patterns. We find that all the excited states are doubly degenerate except for the first excited ground state. The first excited ground state is unique in the wide region, as compared to the twofold case observed in a specific area where |D| and ϑ are small. Therefore, we distinguish the spin-nematic phase as type-I and type-II, respectively, based on its degeneracy of the first excited state (for illustration, see Fig. <ref>). However, since the lowest excitation gap of the spin-nematic phase does not close throughout its whole region, there is not a QPT but a likely crossover between the two. To illustrate it, we have calculated the phase factor ϕ at fixed SIA, saying D = -0.3. The derivative of ϕ with respect to ϑ shows a broad hump and suffers from an insignificant finite-size effect, characteristic of crossover phenomenon. To further discriminate the two different types of spin-nematic phase, we resort to the bond-parity operator Ŵ_i defined as <cit.> Ŵ_2i-1 = Σ_2i-1^yΣ_2i^y, Ŵ_2i = Σ_2i^xΣ_2i+1^x, where Σ_i^α = e^π S_i^α is the on-site operator. For the pure Kitaev chain, Ŵ_i commutes with the Hamiltonian such that its eigenvalues should only be ±1 for the ground state. By switching on the SIA, the relation [Ŵ_i, ℋ] = 0 does not hold as long as ϑ≠ 0, indicating that ⟨Ŵ_i⟩ will deviate from 1. The spatial patterns of ⟨ W_i^[l]⟩ in a closed chain of L = 60 at different energy levels l = 0, 1, 3, 5 for the type-I and type-II spin-nematic phases are shown in Fig. <ref>, with D = -0.3 and ϑ/π = 0.30 and 0.05 for the left and right panels, respectively. It can be seen from Fig. <ref>(a) and Fig. <ref>(e) that the ground-state patterns of ⟨ W_i^[0]⟩ for both types are uniformly distributed with a periodicity p = 1 along the chain. For the excited-state patterns, they display a similarity within the twofold degenerate states and thus only three selected energy levels are shown. For the spin-nematic phase of the type-I, the first excited state is again unique and ⟨ W_i^[1]⟩ is completely flat. ⟨ W_i^[3]⟩ and ⟨ W_i^[5]⟩ are smoothly changed within the chain, with periodicity p = 30 (see Fig. <ref>(c)) and p = 15 (see Fig. <ref>(d)), respectively. By contrast, while ⟨ W_i^[l]⟩ (l = 1, 3, 5) exhibits periodicity of p = 10 (see Fig. <ref>(f)), p = 3 (see Fig. <ref>(g)), or p = 15 (see Fig. <ref>(h)), its values are quite fluctuating and elusive. However, pertaining to the behavior of the first excited state, the flatness versus oscillation of ⟨ W_i^[1]⟩ is the hallmark of the difference between the type-I and type-II spin-nematic phases. It is in this sense that we can identify the crossover boundary of the two by the standard deviation of ⟨ W_i^[1]⟩, i.e., σ_W. In our calculation on three closed chains of length L = 24, 48, and 72, the quantity σ_W undergoes a sharp jump at ϑ/π≈ 0.13, as depicted in Fig. <ref>. We note that the periodicity of ⟨ W_i^[l]⟩ in the excited states should be different as we change the chain length, and such a periodicity can be discerned by the discrete Fourier transform of ⟨ W_i^[l]⟩. 
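The periodicity analysis and the σ_W criterion mentioned above are straightforward to set up numerically; the sketch below uses a synthetic cosine profile in place of the DMRG values of ⟨ W_i^[l]⟩, purely to illustrate the two diagnostics (standard deviation and dominant Fourier period).

```python
import numpy as np

# synthetic stand-in for <W_i^[l]> on a closed chain; replace with the DMRG values
L, p = 60, 15
W = 0.8 + 0.1 * np.cos(2.0 * np.pi * np.arange(L) / p)

sigma_W = np.std(W)                          # flat (type-I) vs. oscillating (type-II)
amp = np.abs(np.fft.rfft(W - W.mean()))      # discrete Fourier transform
k_star = np.argmax(amp[1:]) + 1              # dominant nonzero harmonic
print(f"sigma_W = {sigma_W:.4f}, dominant period p = {L / k_star:.1f}")
```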
Nevertheless, the most remarkable feature that the curves of ⟨ W_i^[l]⟩ (l > 0) are smooth and discrete, respectively, in the type-I and type-II spin-nematic phases remains preserved. Finally, we comment on the mechanism of the spin-nematic phase. Hitherto, the two-magnon bound state picture in frustrated spin-1/2 systems with the nearly saturated magnetic field and the description of the on-site quadrupolar order in spin-1 models with the biquadratic interaction are widespread to describe the spin-nematic phase. More interestingly, an attempt to unify these scenarios based on the language of spin-1 dimers has been proposed <cit.>. Physically, the presence of magnetic step of two in magnetization curve or the Anderson tower of states containing only the even total spin sectors <cit.> is known as the fingerprint of the spin-nematic phase. However, it seems to be infeasible to check the picture as the total spin is not a conserved quantity for the lack of U(1) symmetry. Nevertheless, one can calculate the one-magnon and two-magnon dynamical spectra, from which the magnon and magon-pair gaps can be extracted. This may give some clues on the nature of the spin-nematic phase and deserves future study. §.§ Deconfined quantum critical point Dating back to 2004, the DQCP is a fascinating proposal which asserts a continuous QPT between two spontaneous symmetry-breaking phases with completely unrelated broken symmetries <cit.>. Right at the DQCP, deconfined fractionalized particles appear, accompanying by an emergent symmetry to reconcile the two different order parameters nearby. This scenario is clearly beyond the conventional LGW paradigm as the latter predicts that this kind of QPT should be of first order. While the transition between the AFM phase and the valence-bond-solid phase in two dimension is regarded as the possible realization of the deconfined criticality, decisive evidences are still lacking as a weakly first-order QPT cannot be ruled out <cit.>. The one-dimensional analogy was put forward in 2019, providing another feasible way towards unraveling the enigmatic DQCP <cit.>. Massive numerical work has been devoted to studying the DQCP in one-dimensional spin-1/2 models during the past few years, including the ferromagnetic frustrated spin chain <cit.>, the spin ladder with ring-exchange interaction <cit.>, and the Kitaev spin chain with multiple-spin interaction <cit.>. We will demonstrate that the spin-1 Kitaev chain with tunable SIA is another promising platform that exhibits the DQCP. To begin with, we focus on the line of D = -1 and calculate the dimer order parameter O_L and magnetic order parameter M_L, see Fig. <ref>(a). It can be seen that both order parameters decrease smoothly as the driving parameter ϑ approaches their corresponding quantum critical points. In the intervening region where 0.0601 ≲ϑ/π≲ 0.1683, the two order parameters vanish and the spin-nematic phase of type-I survives, in accordance with the fact that the spin-nematic phase preserves the translational symmetry and time-reversal symmetry. Next, we appeal to the central charge to pin down the nature of QPTs. The central charge c is usually extracted from the entanglement entropy which is known to obey the conformal field theory <cit.>. Although the OBC is frequently adopted in the DMRG calculation, it can induce an intrinsic alternating term which decays away from the boundary with an approximately power-law behavior in the entanglement entropy, making the fitting formula more intricate <cit.>. 
Therefore, we turn to the PBC and the entanglement entropy is well described by the following expression <cit.> 𝒮_L(x) = c/3ln[L/πsin(π x/L)] + c', where x is the length of a subsystem and c' is a nonuniversal constant. Results of the fitted central charge for three different lengths L are shown in Fig. <ref>(b). It is found that at ϑ/π≈ 0.0601 and ϑ/π≈ 0.1683 the central charges are slightly decreases with the increase of the system but saturate to 1/2 eventually, indicating that both QPTs belong to the Ising universality class. As the intensity of the SIA increases, region of the spin-nematic phase shrinks slightly and does not disappear until |D| is large enough. After a careful inspection of the quantum criticality, we take D = -100 as an example to illustrate the direct QPT between the dimerized phase and the AFM phase. The behaviors of order parameters O_L and M_L in a narrow window of 0.01 ≤ϑ/π≤ 0.02 are shown in Fig. <ref>(c). They are smoothly changed as ϑ varies and the finite-size scaling [see Eq. (<ref>)] suggests that there is only a sole quantum critical point at ϑ/π≈ 0.0158. In Fig. <ref>(d), we also fit the central charge in the same parameter range as that of Fig. <ref>(c). Far away from the critical region, the central charge is vanishingly small and tends to be zero with the increase of the system size, indicative of the gapped ground states. In the critical region, the central charge is sizable and its maximal value is extremely close to 1. Such a finite central charge is also confirmed in several independent calculations like D = -200. Since the nonzero central charge is crucial to corroborate the continuous QPT, our result thus demonstrates that the dimer-AFM transition is continuous. Nevertheless, determining the nature of this QPT is numerically challenging, albeit a conceivable possibility is the Gaussian transition which has been proposed in other similar situations <cit.>. Notably, because the broken translational symmetry and dihedral symmetry are totally irrelevant, the continuous QPT is forbidden by the LGW paradigm and thus the quantum critical point is interpreted as a DQCP. § CONCLUSION We have studied the quantum phase diagram of a spin-1 Kitaev chain with tunable SIA by the DMRG method, which is identified to host a dimerized phase, an AFM phase, and two distinct spin-nematic phases. In line with the previous research effort which reveals that the ground state of the spin-1 Kitaev chain is a nonmagnetic Kitaev phase <cit.>, we further clarify that it is a spin-nematic order which preserves the translational symmetry and time-reversal symmetry but breaks spin-rotational symmetry, giving rise to a finite spin-nematic correlation. The four-spin correlation function pertaining to the spin-nematic order parameter can exhibit a nontrivial phase factor that varies as the SIA |D| changes, and the derivative of the phase factor is demonstrated to be a useful probe to capture QPTs. Depending on the degeneracy of the first excited state, the spin-nematic phase can be classified into two types and the model undergoes a crossover between the two. Notably, the nature of the spin-nematic phase is an intriguing topic which deserves future study. As the strength of SIA increases, the dimerized phase and the AFM phase with broken translational symmetry and dihedral and time-reversal symmetries set in when the SIA is aligned along the [001] direction and [111] direction, respectively. 
Of particular note is that the spontaneous dimerization is induced by the SIA only, highlighting the unique role played by the Kitaev interaction. When the SIA is modest, the spin-nematic phase is intervened between the two spontaneous symmetry-breaking phases, and both QPTs belong to the Ising universality class. By contrast, the spin-nematic phase is destroyed by strong SIA, leading to a continuous QPT between the dimerized phase and the AFM phase. Thus, our result demonstrates that the Kitaev-type spin chain can offer a promising playground to study the DQCP. In the future, it is desirable to study the emergent symmetry <cit.>, the dynamic signatures <cit.>, the fidelity and entanglement from the quantum information aspect <cit.>, and the nonequilibrium critical dynamics described by Kibble-Zurek mechanism <cit.> in the critical region so as to corroborate this exotic QPT. This work is supported by the National Program on Key Research Project (Grant No. MOST2022YFA1402700), the Natural Science Foundation of Jiangsu Province (Grant No. BK20220876), the National Natural Science Foundation of China (Grants No. 12247183, No. 12274187, No. 12247101, No. 12174020, No. 11974244, and No. U1930402), and the NSERC Discovery (Grant No. 2022-04601). Q.L. also acknowledges the Fundamental Research Funds for the Central Universities (Grant No. NS2022097) and the startup Fund of Nanjing University of Aeronautics and Astronautics (Grant No. YAH21129). H.-Y.K. also acknowledges funding from the Canadian Institute for Advanced Research and the Canada Research Chairs Program. The computations are partially supported by High Performance Computing Platform of Nanjing University of Aeronautics and Astronautics (NUAA) and Tianhe-2JK at the Beijing Computational Science Research Center (CSRC). Computations are also performed on the Niagara supercomputer at the SciNet HPC Consortium. SciNet is funded by the Canada Foundation for Innovation under the auspices of Compute Canada, the Government of Ontario, Ontario Research Fund, Research Excellence, and the University of Toronto. 99 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty Kitaev2006 A. Kitaev, Anyons in an exactly solved model and beyond, Ann. Phys. (NY) 321, 2 (2006). ZhangWHB2019 Shang-Shun Zhang, Z. Wang, G. B. Halász, and C. D. Batista, Vison Crystals in an Extended Kitaev Model on the Honeycomb Lattice, Phys. Rev. Lett. 123, 057201 (2019). WangNmdLiu2019 J. Wang, B. Normand, and Z.-X. Liu, One Proximate Kitaev Spin Liquid in the K-J-Γ Model on the Honeycomb Lattice, Phys. Rev. Lett. 123, 197201 (2019). RalkoMerino2020 A. Ralko and J. Merino, Novel Chiral Quantum Spin Liquids in Kitaev Magnets, Phys. Rev. Lett. 124, 217203 (2020). LuoNPJ2021 Q. Luo, J. Zhao, H.-Y. Kee, and X. Wang, Gapless quantum spin liquid in a honeycomb Γ magnet, npj Quantum Mater. 6, 57 (2021). LeeKCetal2020 H.-Y. Lee, R. Kaneko, L. E. Chern, T. Okubo, Y. Yamaji, N. Kawashima, and Y. B. Kim, Magnetic-field induced quantum phases in tensor network dtudy of Kitaev magnets, Nat. Commun. 11, 1639 (2020). GohlkeCKK2020 M. Gohlke, L. E. Chern, H.-Y. Kee, and Y. B. Kim, Emergence of nematic paramagnet via quantum order-by-disorder and pseudo-Goldstone modes in Kitaev magnets, Phys. Rev. Research 2, 043023 (2020). Luo2022PRB Q. Luo and H.-Y. 
Kee, Interplay of magnetic field and trigonal distortion in the honeycomb Γ model: Occurrence of a spin-flop phase, Phys. Rev. B 105, 174435 (2022). FengZX2007 X.-Y. Feng, G.-M. Zhang, and T. Xiang, Topological Characterization of Quantum Phase Transitions in a Spin-1/2 Model, Phys. Rev. Lett. 98, 087204 (2007). ShiYYN2009 X.-F. Shi, Yue Yu, J. Q. You, and Franco Nori, Topological quantum phase transition in the extended Kitaev spin model, Phys. Rev. B 79, 134431 (2009). GoJungMoon2019 A. Go, J. Jung, and E.-G. Moon, Vestiges of Topological Phase Transitions in Kitaev Quantum Spin Liquids, Phys. Rev. Lett. 122, 147203 (2019). LiKimKee2022 H. Li, Y. B. Kim, and H.-Y. Kee, Magnetic field induced topological transitions and thermal conductivity in a generalized Kitaev model, Phys. Rev. B 105, 245142 (2022). KnolleKCM2014 J. Knolle, D. L. Kovrizhin, J. T. Chalker, and R. Moessner, Dynamics of a Two-Dimensional Quantum Spin Liquid: Signatures of Emergent Majorana Fermions and Fluxes, Phys. Rev. Lett. 112, 207203 (2014). ZhuKSF2018 Z. Zhu, I. Kimchi, D. N. Sheng, and L. Fu, Robust non-Abelian spin liquid and a possible intermediate phase in the antiferromagnetic Kitaev model with magnetic field, Phys. Rev. B 97, 241110(R) (2018). GohlkeMP2018 M. Gohlke, R. Moessner, and F. Pollmann, Dynamical and topological properties of the Kitaev model in a [111] magnetic field, Phys. Rev. B 98, 014418 (2018). HickeyTrebst2019 C. Hickey and S. Trebst, Emergence of a field-driven U(1) spin liquid in the Kitaev honeycomb model, Nat. Commun. 10, 530 (2019). PatelaTrv2019 N. D. Patela and N. Trivedia, Magnetic field-induced intermediate quantum spin liquid with a spinon Fermi surface, Proc. Natl. Acad. Sci. USA 116, 12199 (2019). ZhangHalBat2022 S.-S. Zhang, G. B. Halász, and C. D. Batista, Theory of the Kitaev model in a [111] magnetic field, Nat. Commun. 13, 399 (2022). JiangLCQLWang2020 M.-H. Jiang, S. Liang, W. Chen, Y. Qi, J.-X. Li, and Q.-H. Wang, Tuning topological orders by a conical magnetic field in the Kitaev model, Phys. Rev. Lett. 125, 177203 (2020). SorensenCGK2021 E. S. Sørensen, A. Catuneanu, J. Gordon, and H.-Y. Kee, Heart of Entanglement: Chiral, Nematic, and Incommensurate Phases in the Kitaev-Gamma Ladder in a Field, Phys. Rev. X 11, 011013 (2021). SorensenGRWK2022 E. S. Sørensen, J. Gordon, J. Riddell, T. Wang, and H.-Y. Kee, Field Induced Chiral Soliton Phase in the Kitaev Spin Chain, Phys. Rev. Research 5, L012027 (2023). SenShankar2010 D. Sen, R. Shankar, D. Dhar, and K. Ramola, Spin-1 Kitaev model in one dimension, Phys. Rev. B 82, 195435 (2010). AgrBrkNis2018 C. E. Agrapidis, J. van den Brink, and S. Nishimoto, Ordered states in the Kitaev-Heisenberg model: From 1D chains to 2D honeycomb, Sci. Rep. 8, 1815 (2018). YangKG2020 W. Yang, A. Nocera, T. Tummuru, H.-Y. Kee, and I. Affleck, Phase Diagram of the Spin-1/2 Kitaev-Gamma Chain and Emergent SU(2) Symmetry, Phys. Rev. Lett. 124, 147205 (2020). YangJKG2020 W. Yang, A. Nocera, and I. Affleck, Comprehensive study of the phase diagram of the spin-1/2 Kitaev-Heisenberg-Gamma chain, Phys. Rev. Research 2, 033268 (2020). YangSN2021 W. Yang, A. Nocera, E. S. Sørensen, H.-Y. Kee, and I. Affleck, Classical spin order near the antiferromagnetic Kitaev point in the spin-1/2 Kitaev-Gamma chain, Phys. Rev. B 103, 054437 (2021). YouSunRen2020 W.-L. You, G. Sun, J. Ren, W. C. Yu, and A. M. Oles, Quantum phase transitions in the spin-1 Kitaev-Heisenberg chain, Phys. Rev. B 102, 144437 (2020). LuoPRB2021 Q. Luo, J. Zhao, X. Wang, and H.-Y. 
Kee, Unveiling the phase diagram of a bond-alternating spin-1/2 K-Γ chain, Phys. Rev. B 103, 144423 (2021). LuoPRR2021 Q. Luo, S. Hu, and H.-Y. Kee, Unusual excitations and double-peak specific heat in a bond-alternating spin-1 K-Γ chain, Phys. Rev. Research 3, 033048 (2021). Andreev1984 A. F. Andreev and I. A. Grishchuk, Spin nematics, Sov. Phys. JETP 60, 267 (1984). Chandra1991 P. Chandra and P. Coleman, Quantum spin nematics: Moment-free magnetism, Phys. Rev. Lett. 66, 100 (1991). Chubukov1991 A. V. Chubukov, Chiral, nematic, and dimer states in quantum spin chains, Phys. Rev. B 44, 4693(R) (1991). ManmanaPRB2011 S. R. Manmana, A. M. Lauchli, F. H. L. Essler, and F. Mila, Phase diagram and continuous pair-unbinding transition of the bilinear-biquadratic S = 1 Heisenberg chain in a magnetic field, Phys. Rev. B 83, 184433 (2011). OrlovaPRL2017 A. Orlova, E. L. Green, J. M. Law, D. I. Gorbunov, G. Chanda, S. Krämer, M. Horvatić, R. K. Kremer, J. Wosnitza, and G. L. J. A. Rikken, Nuclear Magnetic Resonance Signature of the Spin-Nematic Phase in LiCuVO_4 at High Magnetic Fields, Phys. Rev. Lett. 118, 247201 (2017). Macedo2022 R. A. Macêdo, F. B. Ramos, and R. G. Pereira, Continuous phase transition from a chiral spin state to collinear magnetic order in a zigzag chain with Kitaev interactions, Phys. Rev. B 105, 205144 (2022). Senthil2004Science T. Senthil, A. Vishwanath, L. Balents, S. Sachdev, and M. P. A. Fisher, Deconfined Quantum Critical Points, Science 303, 1490 (2004). Stavropoulos2019PRL P. P. Stavropoulos, D. Pereira, and H.-Y. Kee, Microscopic Mechanism for a Higher-Spin Kitaev Model, Phys. Rev. Lett. 123, 037203 (2019). Xu2018npjCM C. Xu, J. Feng, H. Xiang, and L. Bellaiche, Interplay between Kitaev interaction and single ion anisotropy in ferromagnetic CrI_3 and CrGeTe_3 monolayers, npj Comput. Mater. 4, 57 (2018). Xu2020PRL C. Xu, J. Feng, M. Kawamura, Y. Yamaji, Y. Nahas, S. Prokhorenko, Y. Qi, H. Xiang, and L. Bellaiche, Possible Kitaev quantum spin liquid state in 2D materials with S = 3/2, Phys. Rev. Lett. 124, 087205 (2020). Stav2021PRR P. P. Stavropoulos, X. Liu, and H.-Y. Kee, Magnetic anisotropy in spin-3/2 with heavy ligand in honeycomb mott insulators: Application to CrI_3, Phys. Rev. Research 3, 013216 (2021). Zhou2021PRB Z. Zhou, K. Chen, Q. Luo, H.-G. Luo, and J. Zhao, Strain-induced phase diagram of the S = 3/2 Kitaev material CrSiTe_3, Phys. Rev. B 104, 214425 (2021). Bradley2022PRB O. Bradley and R. R. P. Singh, Instabilities of spin-1 Kitaev spin liquid phase in presence of single-ion anisotropies, Phys. Rev. B 105, L060405 (2022). Jin2022NC H.-K. Jin, W. M. H. Natori, F. Pollmann, and J. Knolle, Unveiling the S=3/2 Kitaev honeycomb spin liquids, Nat. Commun. 13, 3813 (2022). Chen2023NJP K. Chen, Q. Luo, Z. Zhou, S. He, B. Xi, C. Jia, H.-G. Luo, and J. Zhao, Triple-meron crystal in high-spin Kitaev magnets, New J. Phys. 25, 023006 (2023). White1992 S. R. White, Density matrix formulation for quantum renormalization groups, Phys. Rev. Lett. 69, 2863 (1992). Peschel1999 I. Peschel, X. Q. Wang, M. Kaulke, and K. Hallberg, Density-Matrix Renormalization (Springer, Berlin, 1999). Schollwock2005 U. Schollwöck, The density-matrix renormalization group, Rev. Mod. Phys. 77, 259 (2005). BurXiangGeh1996 R. J. Bursill, T. Xiang, and G. A. Gehring, The density matrix renormalization group for a quantum spin chain at non-zero temperature, J. Phys.: Condens. Matter 8, L583 (1996). WangXiang1997 X. Wang and T. 
Xiang, Transfer-matrix density-matrix renormalization-group theory for thermodynamics of one-dimensional quantum systems, Phys. Rev. B 56, 5061 (1997). Lauchli2006 A. Läuchli, G. Schmid, and S. Trebst, Spin nematics correlations in bilinear-biquadratic S = 1 spin chains, Phys. Rev. B 74, 144426 (2006). Hu2014PRL S. Hu, A. M. Turner, K. Penc, and F. Pollmann, Berry-Phase-Induced Dimerization in One-Dimensional Quadrupolar Systems, Phys. Rev. Lett. 113, 027202 (2014). ChepigaAM2016 N. Chepiga, I. Affleck, and F. Mila, Dimerization transitions in spin-1 chains, Phys. Rev. B 93, 241108(R) (2016). Kitazawa1996 A. Kitazawa, K. Nomura, and K. Okamoto, Phase Diagram of S = 1 Bond-Alternating XXZ Chains, Phys. Rev. Lett. 76, 4038 (1996). Chen2003PRB W. Chen, K. Hida, and B. C. Sanctuary, Ground-state phase diagram of S = 1 XXZ chains with uniaxial single-ion-type anisotropy, Phys. Rev. B 67, 104401 (2003). Hu2011PRB S. Hu, B. Normand, X. Wang, and L. Yu, Accurate determination of the Gaussian transition in spin-1 chains with single-ion anisotropy, Phys. Rev. B 84, 220402(R) (2011). Fisher1972PRL M. E. Fisher and M. N. Barber, Scaling Theory for Finite-Size Effects in the Critical Region, Phys. Rev. Lett. 28, 1516 (1972). Tsunetsugu2006 H. Tsunetsugu and M. Arikawa, Spin Nematic Phase in S= 1 Triangular Antiferromagnets, J. Phys. Soc. Jpn. 75, 083701 (2006). Kohamaa2019PNAS Y. Kohamaa, H. Ishikawaa, A. Matsuoa, K. Kindoa, N. Shannonb, and Z. Hiroia, Possible observation of quantum spin-nematic phase in a frustrated magnet, Proc. Natl. Acad. Sci. USA 116, 10686 (2019). ZvyaginPRB2019 A. A. Zvyagin and G. A. Zvyagina, Spontaneous spin-nematic ordering in a spin-chain system, Phys. Rev. B 100, 014416 (2019). Mourigal2012PRL M. Mourigal, M. Enderle, B. Fåk, R. K. Kremer, J. M. Law, A. Schneidewind, A. Hiess, and A. Prokofiev, Evidence of a Bond-Nematic Phase in LiCuVO_4, Phys. Rev. Lett. 109, 027203 (2012). Buttgen2014PRB N. Büttgen, K. Nawa, T. Fujita, M. Hagiwara, P. Kuhns, A. Prokofiev, A. P. Reyes, L. E. Svistov, K. Yoshimura, and M. Takigawa, Search for a spin-nematic phase in the quasi-one-dimensional frustrated magnet LiCuVO_4, Phys. Rev. B 90, 134401 (2014). Zhitomirsky2010EPL M. E. Zhitomirsky and H. Tsunetsugu, Magnon pairing in quantum spin nematic, Europhys. Lett. 92, 37001 (2010). Hikihara2008PRB T. Hikihara, L. Kecke, T. Momoi, and A. Furusaki, Vector chiral and multipolar orders in the spin-1/2 frustrated ferromagnetic chain in magnetic field, Phys. Rev. B 78, 144404 (2008). Sudan2009PRB J. Sudan, A. Lüscher, and A. M. Läuchli, Emergent multipolar spin correlations in a fluctuating spiral: The frustrated ferromagnetic spin-1/2 Heisenberg chain in a magnetic field, Phys. Rev. B 80, 140402(R) (2009). Arlego2011PRB M. Arlego, F. Heidrich-Meisner, A. Honecker, G. Rossini, and T. Vekua, Resonances in a dilute gas of magnons and metamagnetism of isotropic frustrated ferromagnetic spin chains, Phys. Rev. B 84, 224409 (2011). Syromyatnikov2012 A. V. Syromyatnikov, Spin nematic phase in one-dimensional and quasi-one-dimensional frustrated magnets in a strong magnetic field, Phys. Rev. B 86, 014423 (2012). Parvej2017PRB A. Parvej and M. Kumar, Multipolar phase in frustrated spin-1/2 and spin-1 chains, Phys. Rev. B 96, 054413 (2017). Sato2013PRL M. Sato, T. Hikihara, and T. Momoi, Spin-Nematic and Spin-Density-Wave Orders in Spatially Anisotropic Frustrated Magnets in a Magnetic Field, Phys. Rev. Lett. 110, 077206 (2013). ZhuPRL2006 S.-L. 
Zhu, Scaling of Geometric Phases Close to the Quantum Phase Transition in the XY Spin Chain, Phys. Rev. Lett. 96, 077206 (2006). Tanaka2020PRB K. Tanaka and C. Hotta, Multiple quadrupolar or nematic phases driven by the Heisenberg interactions in a spin-1 dimer system forming a bilayer, Phys. Rev. B 101, 094422 (2020). Lauchli2005PRL A. Läuchli, J. C. Domenge, C. Lhuillier, P. Sindzingre, and M. Troyer, Two-Step Restoration of SU(2) Symmetry in a Frustrated Ring-Exchange Magnet, Phys. Rev. Lett. 95, 137206 (2005). Sandvik2007 A. W. Sandvik, Evidence for Deconfined Quantum Criticality in a Two-Dimensional Heisenberg Model with Four-Spin Interactions, Phys. Rev. Lett. 98, 227202 (2007). JiangMotrunich2019 S. Jiang and O. Motrunich, Ising ferromagnet to valence bond solid transition in a one-dimensional spin chain: Analogies to deconfined quantum critical points, Phys. Rev. B 99, 075103 (2019). Roberts2019PRB B. Roberts, S. Jiang, and O. I. Motrunich, Deconfined quantum critical point in one dimension, Phys. Rev. B 99, 165143 (2019). Huang2019PRB R.-Z. Huang, D.-C. Lu, Y.-Z. You, Z. Y. Meng, and T. Xiang, Emergent Symmetry and Conserved Current at a One Dimensional Incarnation of Deconfined Quantum Critical Point, Phys. Rev. B 100, 125137 (2019). Sun2019PRB G. Sun, B.-B. Wei, and S.-P. Kou, Fidelity as a probe for a deconfined quantum critical point, Phys. Rev. B 100, 064427 (2019). Luo2019PRB Q. Luo, J. Zhao, and X. Wang, Intrinsic jump character of first-order quantum phase transitions, Phys. Rev. B 100, 121111(R) (2019). Ogino2021PRB Takuhiro Ogino, Ryui Kaneko, Satoshi Morita, Shunsuke Furukawa, and Naoki Kawashima, Continuous phase transition between Néel and valence bond solid phases in a J-Q-like spin ladder system, Phys. Rev. B 103, 085117 (2021). CalCar2004 P. Calabrese and J. Cardy, Entanglement entropy and quantum field theory, J. Stat. Mech.: Theory Exp. (2004) P06002. Laflorencie2006PRL N. Laflorencie, E. S. Sørensen, M.-S. Chang, and I. Affleck, Boundary Effects in the Critical Scaling of Entanglement Entropy in 1D Systems, Phys. Rev. Lett. 96, 100603 (2006). Mudry2019PRB C. Mudry, A. Furusaki, T. Morimoto, and T. Hikihara, Quantum phase transitions beyond Landau-Ginzburg theory in one-dimensional space revisited, Phys. Rev. B 99, 205153 (2019). Xi2020CPL N. Xi and R. Yu, Dynamical signatures of the one-dimensional deconfined quantum critical point, Chin. Phys. B 31, 057501 (2022). Yang2021PRE S. Yang and J.-B. Xu, Quantum entanglement and criticality in a one-dimensional deconfined quantum critical point, Phys. Rev. E 104, 064121 (2021). Huang2020PRR R.-Z. Huang and S. Yin, Kibble-Zurek mechanism for a one-dimensional incarnation of a deconfined quantum critical point, Phys. Rev. Research 2, 023175 (2020).
http://arxiv.org/abs/2306.04081v1
20230607004406
Majorana zero modes in Y-shape interacting Kitaev wires
[ "Bradraj Pandey", "Nitin Kaushal", "Gonzalo Alvarez", "Elbio Dagotto" ]
cond-mat.supr-con
[ "cond-mat.supr-con" ]
Department of Physics and Astronomy, The University of Tennessee, Knoxville, Tennessee 37996, USA Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA Computational Sciences and Engineering Division, Oak Ridge, Tennessee 37831, USA Department of Physics and Astronomy, The University of Tennessee, Knoxville, Tennessee 37996, USA Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA Motivated by the recent experimental realization of minimal Kitaev chains using quantum dots, we investigate the Majorana zero modes (MZM) in Y-shape Kitaev wires. We solve the associated Kitaev models analytically at the sweet spot (t_h=Δ) and derive the exact form of MZM wave-functions in this geometry. The novelty of our result is the observation of non-local MZMs near the junction center, made of a linear combination of edge sites MZMs for each arm. Furthermore, we simulate the stability of local (on site) and non-local MZMs modes in the presence of Coulomb repulsion, using density matrix renormalization group theory. Our local density-of-states calculation shows that these non-local MZMs are as equally topologically protected as the local MZMs when in the presence of Coulomb repulsion. Majorana zero modes in Y-shape interacting Kitaev wires Elbio Dagotto July 31, 2023 ======================================================= Introduction Majorana zero modes (MZMs) are charge-neutral non-Abelian quasiparticles <cit.>. They have attracted much interest because of their potential application in fault-tolerant topological quantum computing <cit.>. The occurrence of zero-bias peaks in tunneling spectroscopy is one of the experimental signatures of MZMs <cit.>. A promising platform to realize MZMs are semiconductor nanowires proximitized to superconductors, where MZMs are expected to develop at both ends of the wire <cit.>. Recently, the realization of MZMs was also proposed in quantum-dot-superconductor linear arrays <cit.>. These quantum-dots systems <cit.> are expected to overcome the problem of random-disorder potential, as compared to the proximitized semiconductor nanowires where the effect of disorder is strong <cit.> and may create false signal of MZMs in tunneling spectra <cit.>. Interestingly, the experimental realization of minimal Kitaev chain has been demonstrated using two quantum dots coupled through a short superconducting-semiconductor hybrid (InSb nanowire) <cit.>. In this experiment <cit.>, two localized MZMs were observed in tunneling conductance measurements at the sweet spot t_h=Δ. In topological quantum computation, it is required to move and perform braiding operations of the MZMs <cit.>. A strict 1D geometry is not sufficient to perform such braiding operations, because in 1D the MZMs can fuse during their exchange process <cit.>. To realize non-Abelian statistics (or braiding), T and Y-shaped wires geometry have been proposed <cit.>. It has been shown that the MZMs in T-shape nanowires can be transformed under exchange, similarly to 2D p+ip superconducting systems displaying non-Abelian statistics <cit.>. Braiding-based gates with MZMs using quantum dot arrays was proposed in Ref. <cit.>. 
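As a point of reference for the sweet spot invoked above, a two-site Kitaev chain can be diagonalized in a few lines. The sketch below (with hopping t and pairing Δ as free parameters, and a Jordan-Wigner construction of the two fermion operators) shows the twofold ground-state degeneracy appearing exactly at t_h = Δ and being lifted away from it; this is only an illustrative check, not the Y-shaped geometry studied in this work.

```python
import numpy as np

I2, Z = np.eye(2), np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])       # on-site annihilation operator
c1, c2 = np.kron(sm, I2), np.kron(Z, sm)      # Jordan-Wigner fermions for two sites

def two_site_kitaev(t, delta):
    H = -t * (c1.T @ c2 + c2.T @ c1) + delta * (c1 @ c2 + c2.T @ c1.T)
    return np.linalg.eigvalsh(H)

print(two_site_kitaev(1.0, 1.0)[:2])   # degenerate ground doublet at the sweet spot
print(two_site_kitaev(1.0, 0.6)[:2])   # degeneracy lifted when t_h != Delta
```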
This paper focuses on finding MZM modes near the junction of interacting Y-shaped Kitaev wires, which is important for all proposed multi-terminal nanowires and quantum-dots setups related to braiding and non-Abelian statistics of MZMs. In the context of proximity-induced semiconductor nanowires, there are only a few studies related to ground states of T-shaped wires <cit.>. The sub-gap properties of a three-terminal Josephson junction (composed of effective spinless p-wave superconductors), joined into a T-shaped normal-metallic region, has been studied using the scattering matrix approach in the non-interacting limit <cit.>. They found that depending upon the superconducting phase of each arm, the Majorana zero mode extended into the metallic region of either all three legs or two legs of the T-shape wire. However, in these studies the precise form of the Majorana wave functions and the effect of Coulomb interactions were not addressed <cit.>. The repulsive Coulomb interaction is expected to suppress the pairing-induced bulk-gap and can affect the stability of Majorana modes <cit.>. Motivated by the above described recent progress in realization of minimal Kitaev chain using quantum dots <cit.>, we study the Majorana zero modes in Y-shaped interacting Kitaev wires. We address the question related to the exact form of Majorana wavefunctions near the junction and its dependency on the superconducting phases at each wire. Assuming each arm of Y-shape wire can take different values of superconducting (SC) phase [see Fig. <ref>a], we solve exactly the Kitaev model for Y-shapes wires working at the sweet spot t_h=Δ. Remarkably, in terms of Majorana operators, we are able to write four independent commuting Hamiltonians, consisting of the three arms (I, II, III) and one central region (IV), as shown in Fig. <ref>b. This allow us to diagonalize the full system independently for each region and solve exactly at the sweet spot. We find the expected three localized MZMs on the edge sites of the Y-shape wire. Surprisingly, depending on the SC phase values, we also find exotic non-local MZMs near the central region, which are made from linear combinations of the local MZMs residing on the edge site of each arm. Furthermore, we perform much needed unbiased simulations of the Majorana zero modes in Y -shaped interacting Kitaev wires, using density matrix renormalization group (DMRG) <cit.>. In the non-interacting limit, we find peaks in the site-dependent local density-of-states (LDOS) at the locations predicted by our analytical calculations. In order to compare the stability of local vs. non-local MZMs, we examine the electron and hole components of the LDOS separately <cit.>, against the increase in repulsive Coulomb interactions. Interestingly, the LDOS(ω,j) calculations indicate the non-local MZMs are as equally stable as the local-MZMs residing at the ends of the Y-shaped wire. We believe these MZMs should be observed in quantum-dots experiments, close to the sweet spots in the tunneling-conductance measurements <cit.> Results Description of analytical method We solve the Y-shape Kitaev model analytically at the sweet spot t_h=Δ and V=0, for three different sets of superconducting phases: (i) ϕ_1=π, ϕ_2=0, and ϕ_3=0, (ii) ϕ_1=0, ϕ_2=0, and ϕ_3=0, and (iii) ϕ_1=0, ϕ_2=0, and ϕ_3=π/2. First, we divide the system Hamiltonian into four different parts (for details see Model Hamiltonian) in terms of spinless fermionic operators. 
Then, we rewrite the system Hamiltonian in terms of Majorana operators, using the transformation c_j=1/√(2) e^-iϕ_k/2(γ^A_j+iγ^B_j) <cit.>. In terms of the Majorana operators, remarkably the system can be written as four independent commuting Hamiltonians: (1) the three independent 1d wires (I, II, III) (see Fig. <ref>b) and (2) the central region (IV), consisting only of five Majorana operators (two from central site and three from edge sites of each leg), as shown in Fig. <ref>b. This procedure allows us to solve the Y-shape Kitaev model exactly at the sweet spot t_h=Δ and V=0, for any values of the SC phases of each arm. For all three cases of SC phases discussed above, we find there is one Majorana zero mode at the outer edge of each arm (blue color in Fig. <ref>b), as expected intuitively. The novelty is that depending on the phase values of each arm, in addition to above mentioned outer edge MZMs, we find at the center region either (i) only one non-local MZM (χ) or (ii) two non-local MZMs (χ) accompanied with one local MZM (γ). These non-local MZMs (χ) are formed from the linear combinations of MZMs residing at the edge sites (near the central region) of each arm. These exotic non-local MZMs could be realized in quantum dot experiments, by changing the phases of each arm in a Y-shape geometry of quantum dots arrays. In terms of Majorana operators, the Hamiltonian of each leg is independent of the SC phases (ϕ_1, ϕ_2,ϕ_3). Using the transformations c^I_j= 1/√(2) e^-iϕ_1/2(γ^I_A,j+iγ^I_B,j), c^II_j= 1/√(2)e^-iϕ_2/2(γ^II_A,j+iγ^II_B,j) and c^III_j= 1/√(2)e^-iϕ_3/2(γ^III_A,j+iγ^III_B,j), the Hamiltonian for the three legs H^I, H^II and H^III can be written in terms of Majorana operators as: H^I =-2iΔ∑_j=1^l-1( γ^I_A,j+1γ^I_B,j), H^II =-2iΔ∑_j=l+2^2l( γ^II_A,j+1γ^II_B,j), H^III =-2iΔ∑_j=2l+2^3l(γ^III_A,j+1γ^III_B,j), where we have used t^x_h=t^y_h=|Δ|. In these equations, the Majorana operators γ^I_A,1, γ^II_B,2l+1, and γ^III_B,3l+1 are absent <cit.>, and commute with these Hamiltonians, which indicates the presence of three end MZMs at the edge sites of the Y-shape Kitaev wire (see Fig. <ref>b), similarly as in the original Kitaev chain exact solution. For the central region, the Hamitltonian H^IV depends upon the SC phases. We solve H^IV for three different cases, in order to understand the nature of the central MZMs. The case ϕ_1=π, ϕ_2=0, and ϕ_3=0 To explain the exact solution of Majorana wave functions in Y-shape geometries, we start with phase values ϕ_1=π, ϕ_2=0, and ϕ_3=0 on arms I, II, and III, respectively. For these phase values, the pairing term at each arm preserves the rotational symmetry of the system, as under 120^∘ rotation around the central sites l+1, the system Hamiltonian remains invariant due to the π phase in arm I (see SM for more detail <cit.>) <cit.>. For the central sites, using the relations c_l=1/√(2) e^-iϕ_1/2(γ^I_A,l+i γ^I_B,l), c_l+1=1/√(2)(γ^IV_A,l+1+i γ^IV_B,l+1), c_l+2=1/√(2) e^-iϕ_2/2(γ^II_A,l+2+i γ^II_B,l+2), and c_2l+2=1/√(2) e^-iϕ_3/2(γ^III_A,2l+2+i γ^III_B,2l+2), the sector H^IV (with ϕ_1=π, ϕ_2=0, and ϕ_3=0) can be transformed in terms of Majorana operators as: H^IV=-2iΔ( γ^I_B,l + γ^II_A,l+2 + γ^III_A,2l+2) γ^IV_B,l+1. Note that in Eq. 4 the Majorana operator γ^IV_A,l+1 at the central site l+1 is absent, signaling the presence of a localized MZM at site j=16. Next, we write Eq. 4 in terms of a 4×4 matrix in the basis of (γ^I_B,l,γ^II_A,l+2, γ^III_A,2l+2, γ^IV_B,l+1) and obtained four eigenvalues (-√(3), √(3), 0, 0). 
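The quoted spectrum can be verified directly by writing H^IV = (i/2)∑_ab A_ab γ_a γ_b with a real antisymmetric coupling matrix A and diagonalizing the Hermitian matrix iA. The sketch below uses the basis ordering given above and quotes eigenvalues in units of 2Δ; the summed weight of the two zero modes on each Majorana also reproduces the 2/3 spectral weight discussed later for the LDOS.

```python
import numpy as np

# basis: (gamma^I_B,l, gamma^II_A,l+2, gamma^III_A,2l+2, gamma^IV_B,l+1)
# H^IV = -2i*Delta*(g1 + g2 + g3) g4 = (i/2) sum_ab A_ab g_a g_b
Delta = 1.0
A = np.zeros((4, 4))
A[0, 3] = A[1, 3] = A[2, 3] = -2.0 * Delta
A -= A.T                                      # antisymmetric coupling matrix

eps, modes = np.linalg.eigh(1j * A)           # Hermitian single-particle problem
print(np.round(eps / (2.0 * Delta), 4))       # -> [-sqrt(3), 0, 0, sqrt(3)]

zero = modes[:, np.isclose(eps, 0.0)]         # the two zero modes (span of chi_3, chi_4)
print(np.round((np.abs(zero)**2).sum(axis=1), 4))   # weight 2/3 on each arm Majorana, 0 on g4
```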
The last two eigenvalues (e_3, e_4) take value zero, and the associated eigenvectors χ_3= -1/√(2)γ^I_B,l+ 1/√(2)γ^III_A,2l+2 and χ_4=-1/√(6)γ^I_B,l + √(2/3)γ^II_A,l+2 - 1/√(6)γ^III_A,2l+2, develop with properties χ^†_3=χ^†_3 and χ^†_4=χ^†_4. In addition, χ_3 and χ_4 also commute with H^IV; these properties confirm the presence of two non-local MZMs in the central region. These two non-local MZMs, χ_3 and χ_4, are made from linear combinations of MZMs on the edge sites of the arms (see Fig. <ref>a). The diagonalized Hamiltonian for the central site can be written as H^IV= 2√(3)Δ(χ̅_̅2̅χ_2-1/2), where χ_2=i/√(6)γ^I_B,l + i/√(6)γ^II_A,l+2 + i/√(6)γ^III_A,2l+2+ 1/√(2)γ^IV_B,l+1 is an ordinary fermion. The other remaining Hamiltonians for each arm can be diagonalized using fermionic operators d_k,j=1/√(2)(γ^k_B,j+i γ^k_A,j+1), where k=I, II, or III. The diagonalized Hamiltonian H^I, H^II, and H^III are: H^I=2Δ∑_j=1^l-1( d^ †_I,j d_I,j-1/2), H^II= 2Δ∑_j=l+2^2l( d^ †_II,j d_II,j-1/2), H^III= 2Δ∑_j=2l+2^3l( d^ †_III,j d_III,j-1/2). In conclusion, our analytical calculations predict a total of six MZMs, for the case of ϕ_1=π, ϕ_2=0 and ϕ_3=0. These six MZMs leads to eight fold-degeneracy in the ground state of the system <cit.>, which we also find is fully consistent with our numerical Lanczos calculations (see SM for more details <cit.>). Next, we analyze the stability of MZMs in the presence of a nearest-neighbor interaction H_I=Vn_jn_j+1. We calculate the LDOS, using DMRG for a Y-shaped geometry with system size L=46. As shown in Fig. <ref>a, the site dependent LDOS(ω=0,j) shows sharp peaks for the edge sites j=1, 31, and 46, indicating three localized MZMs at each edge of the arms. At the central site l+1, there is a sharp peak with same height as for the edge sites, showing the presence of a localized MZM γ_4 at site j=16, as already discussed. Interestingly, there are three other peaks in that LDOS(ω=0,j) on sites j=15, 17, and 32, with height 2/3 as compared to the edge sites. These three peaks signal the presence of two non-local MZMs, distributed over three central sites (j=15, 17, and 32). With increase in interaction to V=2, the Majoranas are no longer strictly localized on a single site j. The MZMs are still exponentially decaying over a few more sites and consequently the peak height of LDOS(ω=0,j) decreases (Fig. <ref>a). This shows that the Majorana zero modes are topologically protected against moderate values of Coulomb interaction. To compare the topological protection against V, for local and non-local MZMs, we calculate the electron and hole parts of LDOS(ω,j) separately for the edge and central sites (right panel of Fig. <ref>). In Fig. <ref>b and c, we show the electron and hole part of LDOS(ω,j) for the central site j=16. The peak height of electron and hole parts of LDOS(ω) at ω=0 decrease to the same values with increasing V, showing the preservation of its MZM nature (γ=γ^ †) <cit.>. Due to the rotational symmetry of the system, sites near the center j=15,17, and 32 are equivalent and they behave very similarly increasing V. In Fig. <ref>(d) and (e), we show the electron and hole parts of LDOS(ω,j) for site j=17. As discussed previously, the two non-local MZMs χ_3 and χ_4 are distributed on sites (j=15 and 32) and (j=15, 17, and 32), with total amplitude 2/3 on each site (j=15, 17, and 32), which leads to a spectral weight 2/3 (compared to the localized MZMs) in the LDOS(ω,j) for site j=17 (also for j=15 and 32 at V=0). 
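The robustness against V noted above can also be probed on a much smaller system by brute-force exact diagonalization. The sketch below builds an open spinless Kitaev chain at the sweet spot (a plain chain rather than the Y geometry, and only L = 8 sites) with the same interaction H_I = V n_j n_{j+1}, and tracks how the near-degeneracy of the two lowest levels evolves with V; the qualitative trend, not the quantitative DMRG numbers, is the point of this illustration.

```python
import numpy as np
from functools import reduce

def fermion_ops(L):
    """Dense Jordan-Wigner operators c_j for an L-site spinless chain."""
    I2, Z = np.eye(2), np.diag([1.0, -1.0])
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])
    return [reduce(np.kron, [Z] * j + [sm] + [I2] * (L - j - 1)) for j in range(L)]

def open_kitaev_chain(L, t, delta, V):
    c = fermion_ops(L)
    H = np.zeros((2**L, 2**L))
    for j in range(L - 1):
        H += -t * (c[j].T @ c[j + 1] + c[j + 1].T @ c[j])        # hopping
        H += delta * (c[j] @ c[j + 1] + c[j + 1].T @ c[j].T)     # pairing at the sweet spot
        H += V * (c[j].T @ c[j]) @ (c[j + 1].T @ c[j + 1])       # H_I = V n_j n_{j+1}
    return np.linalg.eigvalsh(H)

for V in (0.0, 0.5, 1.0, 2.0):
    E = open_kitaev_chain(L=8, t=1.0, delta=1.0, V=V)
    print(f"V = {V}: E1 - E0 = {E[1] - E[0]:.6f}")   # splitting of the two lowest levels
```

At V = 0 and t = Δ the splitting vanishes identically for the open chain, which serves as a sanity check of the construction.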
Figures <ref>(f) and (g) present electron and hole part of LDOS(ω,j) for the edge site j=31 with increasing V (note: sites j=1 and 46 are equivalent). The rate of decrease in peak height in electron and hole part of LDOS(ω,j), for non-local MZMs at site j=17 and local MZM at edge site j=31 are almost identical when increasing V (V≤ 2). The LDOS(ω,j) of the local MZM at the central site j=16 decreases with a slightly faster rate because it has a finite overlap with χ_3 and χ_4, with increasing V. The case ϕ_1=0, ϕ_2=0, and ϕ_3=0 Here, we consider the Y-shape geometry with the same phase ϕ=0 on each arm. Surprisingly, for ϕ_1=0, ϕ_2=0, and ϕ_3=0, the pairing term in the Hamiltonian breaks the rotational symmetry, as after 120^∘ anti-clockwise rotation around the central sites l+1, the pairing term in leg I changes its sign (becomes negative due fermionic anticommutations) [ see also SM for details <cit.>]. The arm II and arm III have reflection symmetry around the central site l+1. Using the same transformations as discussed previously, the H^I, H^II and H^III of each arm in terms of Majorana operators can be written in similar form as described by Eqs. 1, 2, and 3. Again, in these equations, the Majorana operators γ^I_A,1, γ^II_B,2l+1, and γ^III_B,3l+1 are absent, indicating the presence of three edge MZMs on sites j=1,31, and 46. The Hamiltonian for the central region H^IV can be transformed as: H^IV = -2iΔ[γ^IV_A,l+1γ^I_B,l +( γ^II_A,l+2 + γ^III_A,2l+2) γ^IV_B,l+1]. In the above equation, H^IV has a reflection symmetry [γ^II_A,l+2↔γ^III_A,2l+2]. Thus, defining the operators R_1 = 1/√(2)(γ^II_A,l+2 +γ^III_A,2l+2), R_2 = 1/√(2)(γ^II_A,l+2 -γ^III_A,2l+2) the Hamiltonian H^IV further simplifies as H^IV = -2iΔγ^IV_A,l+1γ^I_B,l - 2√(2)i Δ R_1 γ^IV_B,l+1. In Eq. 10 the operator R_2 is absent and has the properties R^†_2=R^ †_2, and [H^IV,R_2]=0, indicating R_2 is a Majorana zero mode. The Majorana zero mode R_2 = 1/√(2)(γ^II_A,l+2 -γ^III_A,2l+2) is equally distributed on sites 17 (II leg) and 32 (III leg) (see Fig. <ref>a), showing that R_2 is indeed a non-local MZM. Next, we write Eq. 10 in terms of a 4×4 matrix in the basis of [γ^IV_A,l+1,R_1,γ^IV_B,l+1, γ^I_B,l] and obtain four eigenvalues (-√(2), √(2), -1, 1). The central region diagonal Hamiltonian can be written in terms of ordinary fermions χ_2 and χ_4 as H^IV= 2√(2)Δ(χ̅_̅2̅χ_2-1/2) +2Δ(χ̅_̅4̅χ_4-1/2), with χ_2=1/√(2)(i R_1+ γ^IV_B,l+1) and χ_4=1/√(2)(i γ^IV_B,l+1 + γ^I_B,l) (see SM for more detail <cit.>). The diagonalized Hamiltonian H^I, H^II, and H^III takes similar form as Eqs. 6, 7, and 8, in terms of the ordinary d_k,j fermionic operators. In summary, our analytical calculation finds a total of four MZMs (three localized at the edge sites and one non-local MZM near the center region). These four MZMs results in four-fold degeneracy in the ground state of the system, which is also consistent with our full-diagonalization numerical results. Figure <ref> shows DMRG results for L=46 sites at t_h=Δ and for different values of the Coulomb interaction V. At V=0, the site dependent LDOS(ω=0,j) shows sharp localized peaks at the edge sites j=1, 31, and 46, indicating three localized MZMs on those edge sites, as expected. Near the center, the LDOS(ω=0,j) displays two peaks at sites j=17 and 32, with height 1/2 compared to the edge sites, suggesting the presence of a non-local MZM. With increase in interaction (V=2), these MZMs remain exponentially localized over a few sites (Fig. <ref>a). 
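The decoupling of R_2 used above can be made explicit with the same antisymmetric-matrix representation as before: rotating the pair (γ^II_A, γ^III_A) into (R_1, R_2) empties the R_2 row and column of the coupling matrix. A short sketch is given below, with entries in units of 2Δ and the basis ordering indicated in the comments.

```python
import numpy as np

# basis: (g^IV_A,l+1, g^I_B,l, g^II_A,l+2, g^III_A,2l+2, g^IV_B,l+1); entries in units of 2*Delta
A = np.zeros((5, 5))
A[0, 1] = -1.0                    # -2i*Delta * g^IV_A  g^I_B
A[2, 4] = -1.0                    # -2i*Delta * g^II_A  g^IV_B
A[3, 4] = -1.0                    # -2i*Delta * g^III_A g^IV_B
A -= A.T

print(np.round(np.linalg.eigvalsh(1j * A), 4))   # -> [-sqrt(2), -1, 0, 1, sqrt(2)]

# rotate (g^II_A, g^III_A) -> (R_1, R_2) = ((g^II_A + g^III_A)/sqrt(2), (g^II_A - g^III_A)/sqrt(2))
U = np.eye(5)
U[2:4, 2:4] = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
A_rot = U @ A @ U.T
print(np.round(A_rot[3], 4))      # the R_2 row vanishes: R_2 drops out of H^IV, i.e. a zero mode
```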
To compare the stability of local and non-local MZMs, we calculate the electron and hole parts of LDOS(ω,j) for different values V. The peaks values for LDOS^e(ω) (Fig. <ref>b and f) and LDOS^h(ω) (Fig. <ref>c and g) at ω=0, for edge sites j=1 and 31, decrease to the same values with increase in V. This shows that the characteristic features of the local MZM remain and the spectral weight of electron and hole part of LDOS(ω,j) are equal <cit.>, at moderate values of V≤2. Interestingly, the spectral weight of electron and hole of LDOS(ω) for site j=17, takes value half (compared to the local MZMs on edge sites) at V=0. This is because the non-local MZM R_2= 1/√(2)(γ^II_A,17 -γ^III_A,32) is equally distributed at sites j=17 and 32. Increasing the repulsion strength V, the peak values of LDOS^e(ω) (Fig. <ref>d) and LDOS^h(ω) (Fig. <ref>e) are reduced (but still take the same values). The rate of decrease in peak height for local and non-local MZMs are almost the same, which shows these Majorana modes are equally topologically protected against V. The case ϕ_1=0, ϕ_2=0, and ϕ_3=π/2 Finally, we consider the Y-shape Kitaev wires with phases ϕ_1=0, ϕ_2=0, and ϕ_3=π/2. This limit is also equivalent to two perpendicular Kitaev chains with phase difference of π/2 (T-shape wire) <cit.>. The Hamiltonian for three legs H^I, H^II, and H^III takes the same form as Eqs. 1, 2, and 3, in terms of Majorana operators. As expected, in these equations the Majorana operators γ^I_A,l, γ^II_B,2l+1, and γ^III_B,3l+1 are absent, indicating the presence of three end MZMs at the edge sites of the Y-shape Kitaev wire (see Fig. <ref> ). The central region, H^IV, in terms of Majorana operators becomes: H^IV= -√(2)iΔγ^III_A,2l+2( γ^IV_A,l+1+ γ^IV_B,l+1) -2iΔ(γ^IV_A,l+1γ^I_B,l + γ^II_A,l+2γ^IV_B,l+1). Equation 12 can be written as a 5× 5 matrix in the basis [ γ^II_A,l+2, γ^III_A,2l+2, γ^I_B,l, γ^IV_A,l+1, γ^IV_B,l+1]. After diagonalizing H^IV, we obtained five eigenvalues (-√(2), √(2),-1,1,0 ). The last eigenvalue e_5=0 and its eigenvector χ_5= -1/2γ^II_A,l+2 + 1/√(2)γ^III_A,2l+2 +1/2γ^I_B,l has the property χ^†_5=χ_5^ †, and [H^IV,χ_5^IV]=0, confirming the presence of a non-local MZM near the junction. The non-local MZM χ_5 is made from linear combinations of local MZMs residing at sites j=15, 17, and 32 (see Fig. <ref>). The central region H^IV can be written in diagonal form as: H^IV=2√(2)Δ(χ̅_̅2̅χ_2 -1/2) +2 Δ(χ̅_̅4̅χ_4 -1/2), where χ_2=i/2√(2)γ^II_A,l+2 + i/2γ^III_A,2l+2- i/2√(2)γ^I_B,l +1/2γ^IV_A,l+1 + 1/2γ^IV_B,l+1 and χ_4= i/2γ^II_A,l+2 + i/2γ^I_B,l -1/2γ^IV_A,l+1 + 1/2γ^IV_B,l+1. Note that the diagonalized system Hamiltonian, in case of the SC phase (ϕ_1=0, ϕ_2=0, and ϕ_3=π/2) and (ϕ_1=0, ϕ_2=0, and ϕ_3=0) take similar form [see Eqs. 11 and 13], which lead to the same energy spectrum for both cases, although the form of non-local Majoranas wavefunctions are quite different for these two cases. The remaining Hamiltonians H^I, H^II, and H^III, after diagonalizing in terms of the ordinary d_k,j fermionic operators, take similar forms as Eqs. 6, 7, and 8. In conclusion, our analytical calculations find a total of 4 MZMs. The three local MZMs are located at edge sites in their natural positions, while a non-local MZM χ_5 is situated near the central region. These four MZMs results in four-fold degeneracy in the ground state of the system. In Fig. <ref> , we present the DMRG calculations with ϕ_1=0, ϕ_2=0, and ϕ_3=π/2 for different values of V using a system size L=46. 
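The same antisymmetric-matrix check applies to Eq. 12. The sketch below (entries in units of 2Δ, basis ordered as in the text) reproduces the five quoted eigenvalues and the null vector χ_5, whose squared components anticipate the LDOS peak heights 1/4, 1/4, and 1/2 discussed next.

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
# basis: (g^II_A,l+2, g^III_A,2l+2, g^I_B,l, g^IV_A,l+1, g^IV_B,l+1); entries in units of 2*Delta
A = np.zeros((5, 5))
A[1, 3] = A[1, 4] = -s            # -sqrt(2)*i*Delta * g^III_A (g^IV_A + g^IV_B)
A[3, 2] = -1.0                    # -2i*Delta * g^IV_A g^I_B
A[0, 4] = -1.0                    # -2i*Delta * g^II_A g^IV_B
A -= A.T

print(np.round(np.linalg.eigvalsh(1j * A), 4))   # -> [-sqrt(2), -1, 0, 1, sqrt(2)]

chi5 = np.linalg.svd(A)[2][-1]    # null vector of A: the non-local zero mode chi_5 (up to sign)
print(np.round(chi5, 4))          # ~ (-1/2, 1/sqrt(2), 1/2, 0, 0)
print(np.round(chi5**2, 4))       # weights 1/4, 1/2, 1/4 on sites l+2, 2l+2, and l
```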
Similarly to the previous cases, the LDOS(ω=0,j) shows sharp peaks for the edge sites j=1, 31, and 46, indicating three localized edge MZMs. Interestingly, near the center LDOS(ω=0,j) shows three peaks with heights 1/4, 1/4, and 1/2 (compared to the edge sites) on sites j=15, 17, and 32, respectively. These peaks in LDOS(ω=0,j) indicates the presence of a non-local MZM near the central site. The peak height can be explained by the special form of the non-local MZM wavefunction χ_5= -1/2γ^II_A,l+2 + 1/2γ^I_B,l+ 1/√(2)γ^III_A,2l+2, showing that χ_5 is distributed on sites j=15, 17, and 32 with amplitudes 1/4, 1/4, and 1/2, respectively. Increasing the repulsion V, the peak height of LDOS(ω=0,j) decreases for different sites and these MZMs are exponentially localized over a few sites (Fig. <ref>). We find that the peak height of the electron and hole part of LDOS(ω=0,j) take the same values even for V=2, indicating the MZMs are quite stable against repulsive interaction for V ≤ 2. Discussion In this publication, we studied the Y-shaped interacting Kitaev chains using analytical and DMRG methods for different superconducting phases at each arm. At the sweet spot t_h=Δ and V=0, we show the system can be divided into four independent Hamiltonians (three arms and one central region) when using the Majorana degrees of freedom. We found exact analytical solutions at the sweet spot and predict the exact form of Majorana wavefunctions for different sets of SC phases of each arm. Remarkably, the central region can be written in terms of just five Majorana operators, and we thus unveil the non-local nature of these MZMs. Based on our analytical and DMRG results: (i) For ϕ_1=π, ϕ_2=0, and ϕ_3=0, we predict a total of six MZMs. There are three local MZMs on each edge sites, one local MZM at the central site, and also there are two non-local MZMs near the central region, which results in three peaks at sites l (arm I), l+1 (arm II), and 2l+2 (arm III), with heights 2/3 (as compared to the edge sites), in the site-dependent LDOS calculations. (ii) For ϕ_1=0, ϕ_2=0, and ϕ_=0, we find a total of four MZMs, three localized at the edge sites and one non-local MZM near the central site. The non-local MZM is equally distributed on site l+2 (arm II) and 2l+2 (arm III), leading to two peaks with height 1/2 (compared to the edge sites) as unveiled by the site-dependent LDOS. (iii) For ϕ_1=0, ϕ_2=0, and ϕ_3=π/2, we find a total of four MZMs. As expected three localized on the edge sites and one non-local MZM near the center. The non-local MZM is distributed on sites l (arm I), l+2 (arm II), and 2l+2 (arm III), which leads to three peaks with heights 1/4, 1/4, and 1/2 in the LDOS calculation. Furthermore, we compare the stability of local and non-local MZMs against the repulsive interaction V, by calculating the electron and hole part of LDOS(ω,j) separately. Our DMRG results shows the local and non-local MZMs are equally stable, as the peak values of LDOS(ω,j) reduce with similar rate, for moderate values of repulsive interaction. We believe our proposed exotic non-local MZMs could be realized in quantum-dot systems <cit.> at the sweet spot, using just seven quantum dots in Y-shape geometry. In this paper, we primarily focused on finding the physical location of the MZMs in a Y-shape geometry Kitaev chain. In the near future, it will be also interesting to study MZMs in the X-shaped Kitaev wire, using similar analytical and DMRG methods. 
A recent study shows the X-shape wire is also quite important for the braiding process in quantum wires <cit.>. Methods Model Hamiltonian The Hamiltonian for the Y-shaped Kitaev model at the sweet spot t_h=Δ, with superconducting phases ϕ_1, ϕ_2, and ϕ_3, at each arm, can be divided into four different parts. The Hamiltonian for each leg can be written as: H^I = ∑_j=1^l-1( -t^x_hc^ †_jc_j+1 + e^iϕ_1Δ c_j c_j+1 + H.c. ), H^II = ∑_j=l+2^2l( -t^x_hc^ †_jc_j+1 + e^iϕ_2Δ c_j c_j+1 + H.c. ), H^III = ∑_j=2l+2^3l( -t^y_hc^ †_jc_j+1 + e^iϕ_3Δ c_j c_j+1 + H.c. ). Moreover, the Hamiltonian for the central site l+1 joining each leg edge site can be written as: H^IV = ( -t^x_hc^ †_lc^†_l+1 +e^iϕ_1Δ c_l c_l+1 + H.c. ) ( -t^x_hc^ †_l+1c^†_l+2 + e^iϕ_2Δ c_l+1 c_l+2 + H.c. ) ( -t^y_hc^ †_l+1c^†_2l+2 + e^iϕ_3Δ c_l+1 c_2l+2 + H.c. ). DMRG method In order to solve numerically, the Y-shaped Kitaev Hamiltonian and measure observables, we have used the density matrix renormalization group (DMRG) method <cit.> with DMRG++ <cit.>. We performed our DMRG calculations within the two-site DMRG approach, for a system size L=46 sites and employing m=1500 states, with truncation error ≤ 10^-10. Local density-of-states We have calculated the local density-of-states LDOS(ω,j) as a function of frequency ω and site j, Krylov-space correction vector DMRG; for a technical review see <cit.>. The electron part of the LDOS(ω,j) is <cit.>: LDOS^e(ω,j)=1/π Im[ < ψ_0 | c_j^ †1/ω +H -(E_g-iη)c_j|ψ_0 > ], and the hole part of LDOS(ω,j) is <cit.>: LDOS^h(ω)=-1/π Im[ < ψ_0 | c_j1/ω -H +(E_g-iη)c_j^ †|ψ_0 > ], where c_i is the fermionic annihilation operator while c^†_j is the creation operator, E_g is the ground state energy. We use as broadening parameter the value η=0.1 as in previous studies <cit.>. The total local density-of-states is defined as LDOS(ω,j)= LDOS^e(ω,j) + LDOS^h(ω,j). For the Majorana zero mode, it is expected that the peak values of LDOS^e(ω,j) and LDOS^h(ω,j) be close to ω=0. Data availability The data that support the findings of this study are available from the corresponding author upon request. Code availability The computer codes used in this study are available at https://g1257.github.io/dmrgPlusPlus/https://g1257.github.io/dmrgPlusPlus/. Acknowledgments The work of B.P., N.K., and E.D. was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division. G. A.  was supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Science Center. Author contributions B.P. and E.D. designed the project. N.K. and B.P. carried out the analytical calculations for the Y-shaped Kitaev model. B.P performed the numerical DMRG calculations. G.A. developed the DMRG++ computer program. B.P., N.K., and E.D. wrote the manuscript. All co-authors provided useful comments and discussion on the paper. Competing interests The authors declare no competing interests. Additional information Correspondence should be addressed to Bradraj Pandey ([email protected]). 99 Kitaev1Kitaev AY. Unpaired Majorana fermions in quantum wires. https://doi.org/10.1070/1063-7869/44/10S/S29Phys.-Usp. 44, 131 (2001). Kitaev2Kitaev AY. Fault-tolerant quantum computation by anyons. https://doi.org/10.1016/S0003-4916(02)00018-0Ann Phys (NY) 303, 2 (2003). SarmaSarma, S., Freedman, M. Nayak, C. Majorana zero modes and topological quantum computation. https://doi.org/10.1038/npjqi.2015.1npj Quantum Inf 1, 15001 (2015). 
NayakNayak C, Simon SH, Stern A, Freedman M, Sarma SD. Non-abelian anyons and topological quantum computation. https://doi.org/10.1103/RevModPhys.80.1083Rev Mod Phys 80, 1083 (2008). Shnirman Scheurer, M. S., & Shnirman. Nonadiabatic processes in Majorana qubit systems. https://doi.org/10.1103/PhysRevB.88.064515 Phys. Rev. B 88, 064515 (2013). Law Law, K. T., Lee, P. A., and Ng, T. K. Majorana Fermion Induced Resonant Andreev Reflection. https://doi.org/10.1103/PhysRevLett.103.237001Phys. Rev. Lett. 103, 237001 (2009). SauLutchyn, R. M., Sau, J. D., Das Sarma S. Majorana fermions and a topological phase transition in semiconductor–superconductor heterostructures. https://doi.org/10.1038/ncomms1966Phys. Rev. Lett. 105, 077001 (2010). JaySau, J., Sarma, S. Realizing a robust practical Majorana chain in a quantum-dot-superconductor linear array. https://doi.org/10.1038/ncomms1966Nat Commun. 3, 964 (2012). Souto Tsintzis, A., Souto, R. S., Leijnse, M. Creating and detecting poor man's Majorana bound states in interacting quantum dots. https://doi.org/10.1103/PhysRevB.106.L201404Phys. Rev. B. 106, L201404 (2022). Mills Mills, A.R., Zajac, D.M., Gullans, M.J. et al. Shuttling a single charge across a one-dimensional array of silicon quantum dots. https://doi.org/10.1038/s41467-019-08970-z Nat Commun. 10, 1063 (2019). Leijnse Leijnse, M. and Flensberg, K. Introduction totopological superconductivity and Majorana fermions. https://doi.org/10.1088/0268-1242/27/12/124003Semicond. Sci. Technol. 27, 124003 (2012). Bara Górski, G., Barański, J., Weymann, I. et al. Interplay between correlations and Majorana mode in proximitized quantum dot. https://doi.org/10.1038/s41598-018-33529-1 Sci Rep. 8, 15717 (2018) Csonka Hofstetter, L., Csonka, S., Nygård, J. et al. Cooper pair splitter realized in a two-quantum-dot Y-junction. https://doi.org/10.1038/nature08432 Nature. 461, 960–963 (2009). Deng Deng, M. T., Vaitiekėnas, E., Hansen, E. B., Danon, J. et al. Majorana bound state in a coupled quantum-dot hybrid-nanowire system. https://doi.org/10.1126/science.aaf3961 Science 354, 1557–1562 (2016). Liu Liu, C.-X., Wang, G., Dvir, T. & Wimmer, M. Tunable superconducting coupling of quantum dots via Andreev bound states in semiconductor-superconductor nanowires. https://doi.org/10.1103/PhysRevLett.129.267701 Phys. Rev. Lett. 129 267701 (2022) LossRančić, J. M., Hoffman, S., Schrade, C., Klinovaja, J., & Loss, D. Entangling spins in double quantum dots and Majorana bound states. https://doi.org/10.1103/PhysRevB.99.165306 Phys. Rev. B. 99 165306 (2019) StanescuStanescu, T. D., Lutchyn, R. M. & Das Sarma, S. Majorana fermions in semiconductor nanowires. https://doi.org/10.1103/PhysRevB.84.144522Phys. Rev. B. 84, 144522 (2011). DvirDvir, T., Wang, G., van Loo, N. et al. Realization of a minimal Kitaev chain in coupled quantum dots. https://doi.org/10.1038/s41586-022-05585-1Nature . 614, 445–450 (2023). Alicea Alicea, J., Oreg, Y., Refael, G. et al. Non-Abelian statistics and topological quantum information processing in 1D wire networks. https://doi.org/10.1038/nphys1915Nature Phys. 7, 412–417 (2011). Aasen Aasen, D., Hell, M., Mishmash, R. V., Higginbotham, A., et al. Milestones toward Majorana-based quantum computing. https://doi.org/10.1103/PhysRevX.6.031016Phys. Rev. X. 6, 031016 (2016). Pandey Pandey,B., Mohanta, N., Dagotto, E. Out-of-equilibrium Majorana zero modes in interacting Kitaev chains. https://doi.org/10.1103/PhysRevB.107.L060304Phys. Rev. B. 107, L060304 (2023). 
Han Zhou, T., Dartiailh, M.C., Sardashti, K., Han, J. E. et al. Fusion of Majorana bound states with mini-gate control in two-dimensional systems. https://doi.org/10.1038/s41467-022-29463-6Nat Commun. 13, 1738 (2022). Heck van Heck, B., Akhmerov, A. R., Hassler, F., Burrello, M. Beenakker, C. W. J. Coulomb-assisted braiding of majorana fermions in a josephson junction array. https://doi.org/10.1103/PhysRevB.107.L060304New J. Phys. 14, 035019 (2012). Harper Harper, F., Pushp, A., Roy, R. Majorana braiding in realistic nanowire Y-junctions and tuning forks. https://doi.org/10.1103/PhysRevResearch.1.033207Phys. Rev. Research. 1, 033207 (2019). Boross Boross, P., Pályi, A. Braiding-based quantum control of a Majorana qubit built from quantum dots. https://doi.org/10.48550/arXiv.2305.08464arXiv:2305.08464. (2023). Zhou Y Zhou and M W Wu. Majorana fermions in T-shaped semiconductor nanostructures. https://doi.org/10.1088/0953-8984/26/6/065801J. Phys.: Condens. Matter . 26, 065801 (2014). Ardone Spånslätt, C., Ardonne, E. Extended Majorana zero modes in a topological superconducting-normal T-junction. https://doi.org/10.1088/1361-648X/aa585dJ. Phys.: Condens. Matter . 29, 105602 (2017). Oleg Stoudenmire, E. M., Alicea, J., Starykh, O. A., and Fisher, M. P. A. Interaction effects in topological superconducting wires supporting Majorana fermions. https://doi.org/10.1103/PhysRevB.84.014503Phys. Rev. B. 84, 014503 (2011). Martin Tsintzis, A., Souto, R. S., & Leijnse, M. Creating and detecting poor man's Majorana bound states in interacting quantum dots. https://doi.org/10.1103/PhysRevB.106.L201404Phys. Rev. B. 106, L201404 (2022). alvarez Alvarez, G. The density matrix renormalization group for strongly correlated electron systems: A generic implementation. https://doi.org/10.1016/j.cpc.2009.02.016 Comput. Phys. Commun. 180, 1572-1578 (2009). Nocera Nocera, A. & Alvarez, G. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors. https://link.aps.org/doi/10.1103/PhysRevE.94.053308 Phys. Rev. E 94, 053308 (2016). Rachel Thomale, R., Rachel, S., and Schmitteckert, P. Tunneling spectra simulation of interacting Majorana wires. https://doi.org/10.1103/PhysRevB.88.161103Phys. Rev. B. 88, 161103(R) (2013). Herbrych Herbrych, J., Środa, M., Alvarez, G., Dagotto, E. Interaction-induced topological phase transition and Majorana edge states in low-dimensional orbital-selective Mott insulators. https://doi.org/10.1038/s41467-021-23261-2Nat Commun. 12, 2955 (2021). SM See the Supplemental Material for the detailed analytical calculations. Dagotto1 Dagotto, E., Moreo, A., Barnes, T. Hubbard model with one hole: Ground-state properties. https://doi.org/10.1103/PhysRevB.40.6721Phys. Rev. B. 40, 6721 (1989). Dagotto2 Dagotto, E., Fradkin , E., Moreo, A. SU(2) gauge invariance and order parameters in strongly coupled electronic systems. https://doi.org/10.1103/PhysRevB.38.2926Phys. Rev. B. 38, 2926(R) (1988). Luzie Weithofer, L., Recher, P., & Schmidt, T. L. Electron transport in multiterminal networks of Majorana bound states. https://doi.org/10.1103/PhysRevB.90.205416Phys. Rev. B. 90, 205416 (2014). FornieriFornieri, A., Whiticar, A.M., Setiawan, F. et al. Evidence of topological superconductivity in planar Josephson junctions. https://doi.org/10.1038/s41586-019-1068-8 Nature 569,89–92 (2019) TongZhou, T., Dartiailh, M. C., Mayer, W., Han, J. E., Matos-Abiague, et al. 
Phase Control of Majorana Bound States in a Topological X Junction https://doi.org/10.1103/PhysRevLett.124.137001Phys. Rev. Lett. 124, 137001 (2020). white1992density White, S. R. Density matrix formulation for quantum renormalization groups. https://doi.org/10.1103/PhysRevLett.69.2863 Phys. Rev. Lett. 69, 2863 (1992). schollwock2005density Schollwöck, U. The density-matrix renormalization group. https://doi.org/10.1103/RevModPhys.77.259 Rev. Mod. Phys. 77, 259 (2005). bradPandey, B., Lin, L. F., Soni, R., Kaushal, N. et al. Prediction of exotic magnetic states in the alkali-metal quasi-one-dimensional iron selenide compound Na_2FeSe_2 https://doi.org/10.1103/PhysRevB.102.035149Phys. Rev. B. 102, 035149 (2020).
http://arxiv.org/abs/2306.02124v1
20230603143045
The mixing-spacetime symmetry in the Floquet-Bloch band theory
[ "Pei Wang" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas", "cond-mat.mes-hall", "cond-mat.str-el" ]
Department of Physics, Zhejiang Normal University, Jinhua 321004, China. [email protected] We discover a class of spacetime symmetries unique to time-periodic systems, which we term "mixing symmetry" due to its combination of space and time coordinates in the symmetry transformation. We systematically enumerate the symmetry groups, and classify the corresponding Floquet-Bloch band theories by utilizing the winding number of quasi-energy. Moreover, we provide a comprehensive scheme for the experimental realization of these symmetries. The particle propagator exhibits an intriguing pattern that remains invariant even under transformations mixing space and time coordinates. We anticipate that this distinct feature can be observed in current cold atom experiments. The mixing-spacetime symmetry in the Floquet-Bloch band theory Pei Wang July 31, 2023 ============================================================== Introduction.— The study of Floquet-Bloch bands has emerged as a central topic in the field of nonequilibrium driven many-body dynamics <cit.>. Recent advancements in precise control and probing techniques have allowed for the realization of Floquet-Bloch bands in diverse platforms, including photonic waveguides <cit.>, solid materials <cit.>, and cold atom systems <cit.>. By employing periodic driving, these systems offer a unique opportunity to explore models that are challenging to realize in static setups <cit.>. Moreover, periodic driving enables the emergence of new states of matter that lack a static analog, leading to captivating phenomena in condensed matter physics, such as symmetry breaking <cit.>, localization <cit.>, and topological effects <cit.>. Symmetry plays a fundamental role in the study of band theory, exerting profound effects on various aspects of band structures. Spatial symmetries such as rotation, mirror reflection, and space inversion have long been recognized for their ability to protect band crossings or generate degeneracies <cit.>. Time reversal, particle-hole, and chiral symmetry have been utilized in the renowned tenfold classification of insulators and superconductors <cit.>. This classification has been applied to provide a periodic table for the topological phases <cit.>, and more recently, it has been extended to the topological classification of Floquet-Bloch bands <cit.>. Moreover, researchers have recognized the critical interplay between space group symmetries and topology, culminating in the comprehensive topological classification of band structures for all 230 crystal symmetry groups <cit.>. Notably, in the realm of Floquet systems, the presence of intertwined spatial and temporal translations, including nonsymmorphic symmetries such as glide time-reversal or time glide reflection, can preserve spectral degeneracy and give rise to novel out-of-equilibrium phases <cit.>. But previous studies have overlooked a class of symmetries that is unique to time-periodic systems and absent in static ones. These symmetries are referred to as mixing symmetries in this paper. Let us consider the coordinates in 1+1-dimensional spacetime as (t,x). A linear coordinate transformation can be represented as (t',x')^T=A (t,x)^T. If the matrix A contains non-zero off-diagonal elements, it is called a mixing transformation because it combines the space and time coordinates. One well-known example of a mixing symmetry is the Lorentz symmetry, which holds significant importance in quantum field theory.
In the context of condensed matter physics, the Schrödinger equation treats space and time differently, making continuous mixing symmetry impossible. Nevertheless, this does not rule out the possibility of discrete mixing symmetries <cit.>. Recently, it has been discovered that the spacetime crystals that exhibit a discrete Lorentz symmetry can be realized in ultracold atomic gases confined to an optical lattice <cit.>. But the models are constructed on finite-sized lattices and do not exhibit continuous Floquet-Bloch bands in the thermodynamic limit. In this paper, we present the first evidence of the existence of continuous Floquet-Bloch band theory that incorporates mixing symmetry. We thoroughly identify and classify the mixing groups in 1+1 dimensions. The resulting Floquet-Bloch theories are categorized based on both symmetry groups and the winding number of quasi-energy in the Brillouin zone (see Tab. <ref> for a summary). Unlike previously studied symmetries, the operator of mixing symmetry does not commute or anti-commute with the Hamiltonian. Therefore, we rely on group representation theory for constructing models. The band theory with mixing symmetry exhibits a quasi-energy-momentum relation that remains invariant under mixing transformations. Consequently, the particle propagator in real spacetime exhibits invariance when the spacetime coordinates undergo the transformation A, which is a distinctive characteristic of mixing symmetry. We discuss the possible realization of mixing symmetry in cold atoms on an optical lattice. The precise control achieved at the single-site level in experiments allows for programmable Hamiltonians with locally adjustable potential energies on each lattice site, facilitated by microelectromechanical systems mirrors <cit.>. We show that the Floquet-Bloch band with mixing symmetry can be implemented using a quadratic quantum Fourier transform (QQFT) protocol <cit.> on a driven optical lattice that features only onsite potential and nearest-neighbor hopping. The mixing symmetry can be observed by locating a Bose-Einstein condensate on the lattice and monitoring the atom density. Method.— When studying a quantum model, the usual approach involves writing down the Hamiltonian in real spacetime and then extracting the underlying symmetry from it. However, our objective is to construct a model with specific symmetry. We begin by providing a complete list of the mixing symmetry groups. Next, we establish the unitary representation of each group within the quasi-momentum-energy space. In this representation, the symmetry manifests as a constraint on the dispersion relation (DR), which is the function E(k) describing the quasi-energy E as a function of quasi-momentum k. For continuous Floquet-Bloch bands, we discover a fundamental equation governing the topology of the DR, which serves as a basis for band classification. By finding an E(k) that satisfies both the symmetry and topology conditions, we obtain the Floquet Hamiltonian Ĥ_F. Finally, we demonstrate how to realize Ĥ_F using a time-periodic Hamiltonian Ĥ(t) with locality in real spacetime. Our approach is inspired by the principles of quantum field theory, which relies on the unitary representation of the Poincaré group <cit.>. Mixing groups.— We are considering a 1+1-dimensional spacetime where spatial rotation or mirror reflection is absent, allowing us to concentrate on the study of mixing symmetry. There are two noncollinear translational vectors. 
Without loss of generality, we assign one vector (𝐞_x) to the spatial direction (x axis), and the other vector (𝐞_t) to the temporal direction (t axis), as shown in Fig. <ref>. This choice can always be made by employing a coordinate transformation that rotates 𝐞_t and 𝐞_x into the t and x axes, respectively. It is important to note that we exclusively consider symmorphic groups in this paper. For nonsymmorphic groups, nonsymmorphic symmetries may become significant when 𝐞_t and 𝐞_x are not orthogonal to each other <cit.>. To simplify the representation, we choose the lattice constants as the units of time and length, resulting in e_t = (1,0) and e_x=(0,1). Suppose that the 2-by-2 matrix A represents a mixing transformation. In this study, we focus on cyclic transformations, which are the ones that satisfy A^M = 1 for some positive integer M (called the order). By imposing the cyclic condition, we significantly reduce the number of symmetry groups, enabling us to exhaustively examine their representations. An arbitrary symmetry transformation can be expressed as a combination of A and translation. We denote this combined transformation as P(j, m, n), which acts on the coordinates as follows: ([ t'; x' ])= P(j,m,n) ([ t; x ]) = A^j ([ t; x ]) + ([ m; n ]) , where j,m and n are integers. P(j,m,n) denotes j times of mixing transformation followed by a translation of m units in time and n units in space. A symmetry group is a set of Ps that meet the group axioms. The closure under multiplication requires that the spacetime lattice {(m, n)^T | m,n ∈ℤ.} must keep invariant under A. Together with the existence of inverse element, we infer that the order of A can only be 2,3,4 or 6 <cit.>. The symmetry group can be expressed as 𝒫 = { P(j,m,n) | j=0, 1, ⋯, M-1; m,n∈ℤ. } with M=2,3,4 or 6. For given M, the group 𝒫_M is uniquely determined by A. The possible As in 𝒫_2, 𝒫_3, 𝒫_4 or 𝒫_6 are given in the supplementary materials. A few examples can help us understand the mixing group. The form of A in 𝒫_2 or 𝒫_4 is shown in Tab. <ref>, where a,b,c are integers satisfying bc=-a^2 ± 1, respectively. If a=0 and b=c=1, then A is the exchange of t and x (dubbed A_e) and belongs to 𝒫_2. If a=0, b=1 and c=-1, then A represents a rotation in the t-x plane by 90^∘ (dubbed A_r) and belongs to 𝒫_4. Figure <ref>a schematically illustrates the operations of A_e and A_r. The transformation A conserves the area of the parallelogram formed by two noncollinear vectors, because det(A)= ± 1. But A does not necessarily conserve the Euclidean length of a vector (e.g., consider the case a=2,b=-3 and c=1). Therefore, A can be not only rotation, reflection or inversion in the t-x plane, but also nonorthogonal transformations. Notice that A is distinguished from the discrete Lorentz transformation <cit.>, as the latter does not have a finite order. Floquet-Bloch band theory.— Each quantum theory is a unitary representation of its corresponding symmetry group. In our case, we aim to construct the unitary representations of 𝒫_M, and we follow a similar approach as described in Ref. [Wang21]. To denote the unitary operator of P(j,m,n), we use Û(j,m,n), which follows the same multiplication rule as P. The translation operators Û(0,m,n) commute with each other and share common eigenstates. 
In the Floquet-Bloch band theory, the eigenstates of translations are typically represented as |k,α⟩, where k ∈ [ -π, π) denotes the quasi-momentum, α is the band index, and the corresponding quasi-energy is denoted as E_α(k) with E_α(k) ∈ [ -π, π). When the operator Û(0,m,n) acts on |k,α⟩, it results in e^i E_α(k) m - i k n|k,α⟩. The pair (k,E_α(k)) represents a point in the Floquet-Bloch-Brillouin zone (FBBZ), which is topologically equivalent to a torus (see Fig. <ref>b,c). The DR of each continuous Floquet-Bloch band, i.e., the set of (k,E_α(k)) points, forms a loop on the torus. Since any element in 𝒫_M can be factorized into P(j,m,n)=P(0,m,n)P(j,0,0), the representation of 𝒫_M can be determined by examining the action of the mixing transformation operator Û(j,0,0) on the basis states |k,α⟩. Note that the single-particle Hilbert space is spanned by |k,α⟩. To determine the representation, it is sufficient to investigate the action of Û(1,0,0) since Û(j,0,0) = Û(1,0,0)^j. For this purpose, we utilize the multiplication rule: P(0,m',n')P(1,0,0) = P(1,0,0) P(0,m,n) <cit.>, or equivalently, Û(0,m',n') Û(1,0,0) = Û(1,0,0) Û(0,m,n), where (m',n')^T= A (m,n)^T. Acting with both sides of Eq. (<ref>) on |k,α⟩, we find that Û(1,0,0) |k,α⟩ is also an eigenstate of translation operators, denoted by |k',α'⟩ =Û(1,0,0) |k,α⟩ without loss of generality. Equation (<ref>) then determines a relation between k and k' <cit.>, which reads ([ k'; E_α'(k') ]) = A̅([ k; E_α(k) ]) (mod 2π), where A̅= det(A) A. In particular, we find A̅ = -A and A̅ = A for the symmetry classes 𝒫_2 and 𝒫_4, respectively. The modulo operation in Eq. (<ref>) ensures that ( k', E_α'(k')) falls within the FBBZ. By definition, A̅ is invertible, and hence it is a one-to-one continuous map of the FBBZ onto itself. In other words, A̅ acts as a homeomorphism on the FBBZ. Equation (<ref>) reveals that each point (k,E_α(k)) within the DRs is mapped by A̅ to another point within the DRs. A̅ establishes a one-to-one correspondence between the set of points within the DRs and itself. In a spacetime crystal with N continuous bands, each with its corresponding DR as a loop on the FBBZ torus, A̅ acts as a homeomorphism. Consequently, the image of a loop (DR) under A̅ is guaranteed to be another loop (DR). Thus, A̅ maps each DR loop to another DR loop, effectively acting as a permutation of the N bands. Topology of dispersion relation.— Equation (<ref>) is the necessary and sufficient condition for a unitary representation of 𝒫_M. Constructing a representation involves finding the A̅-invariant DRs (solutions of Eq. (<ref>)). For general A̅, these DRs can be highly nontrivial. For instance, if A is the exchange A_e, then A̅_e exchanges the quasi-momentum and quasi-energy. Our familiar DRs, such as quadratic or trigonometric functions, are not A̅-invariant. The nontriviality of A̅-invariant DRs arises from the fact that their loops exhibit nontrivial topology. The topology of a loop on a torus is characterized by a pair of integers, which corresponds to the fundamental group of the torus. As the DR is a continuous function of k within the range of [-π,π), a DR loop must wind around the torus exactly once in the k-direction. The topology of a DR loop is denoted as (1, w), where w represents the number of times the DR winds around the torus in the positive E-direction while completing one revolution in the positive k-direction. It is important to highlight that w has long been recognized as the average particle displacement over one period.
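In practice, the winding number w of a sampled band can be read off from the accumulated drift of the quasi-energy across the Brillouin zone. The following short sketch is our own illustration (the folded linear band E(k)=2k is an assumed example), not part of the original analysis.

import numpy as np

def winding_number(E_samples):
    # Winding of a quasi-energy band around the E-direction of the FBBZ torus.
    # E_samples: quasi-energies on a uniform k-grid over [-pi, pi), folded into [-pi, pi).
    # Assumes the grid is fine enough that the true increment per step is below pi.
    E = np.append(E_samples, E_samples[0])                    # close the loop k -> k + 2*pi
    drift = np.sum(np.angle(np.exp(1j * np.diff(E))))         # each step folded to (-pi, pi]
    return int(np.round(drift / (2 * np.pi)))

k = np.linspace(-np.pi, np.pi, 400, endpoint=False)
E_flat = 0.5 * np.sin(k)                                      # ordinary band: w = 0
E_lin = np.angle(np.exp(2j * k))                              # folded linear band E(k) = 2k: w = 2
print(winding_number(E_flat), winding_number(E_lin))          # -> 0 2

For the linear bands E(k)=wk discussed below, this procedure simply returns w.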
In each cycle, w units of charge are pumped through the system <cit.>. If A̅ maps band α to α', their DRs' winding numbers (w_α and w_α') are connected to each other according to <cit.> ±([ 1; w_α' ]) = A̅([ 1; w_α ]). Equation (<ref>) is our key result, which constrains the topology of an A̅-invariant DR. It has no solution for A̅ in 𝒫_3 or 𝒫_6 <cit.>, indicating that these symmetry classes have no representation with continuous bands. By substituting the expression of A into Eq. (<ref>), we obtain the band classification for 𝒫_2 and 𝒫_4. In 𝒫_2, bands are classified as singlets or doublets. A singlet remains invariant under A̅, while a doublet consists of two bands mapped by A̅ into each other, sharing the same winding number. For 𝒫_4, bands are classified as doublets with odd-function DRs of k and quadruplets (sets of four bands). There are no singlet bands since w_α'=w_α contradicts Eq. (<ref>). Additionally, the two bands in a doublet have different winding numbers. Table <ref> summarizes the classification of Floquet-Bloch bands with mixing symmetry. Except for A with a=±1 (unconventional space inversion or time reversal), the DR's winding number must be nonzero <cit.>. Figure <ref> presents examples of DRs that satisfy Eq. (<ref>). Figure <ref>(a) shows a doublet pair of bands, mapped into each other by A̅_e in 𝒫_2. Both bands have a winding number of +1. Figure <ref>(b) shows a doublet pair of bands, mapped into each other by A̅_r in 𝒫_4. The two bands have winding numbers of +1 and -1, respectively. The simplest DRs with mixing symmetries (solutions of Eq. (<ref>)) are linear ones: E(k)=wk, where w= ± 1, ± 2,⋯ represents the winding number. From Eqs. (<ref>) and (<ref>), we can fully determine the mixing symmetries of any linear Floquet-Bloch band <cit.>. In the 𝒫_2 symmetry class, a linear band is a singlet, which remains invariant under the map A̅ with elements satisfying a=-b w± 1 and c=-bw^2 ± 2w. On the other hand, in the 𝒫_4 symmetry class, two bands E(k)=wk and E'(k)=w'k can form a doublet, where w'=w ± 1 or w'=w± 2 <cit.>. By utilizing the A̅-invariant E_α(k), we can readily establish the many-body quantum theory by introducing the creation operator ĉ^†_k,α and the annihilation operator ĉ_k,α and expressing the symmetry operators in terms of them <cit.>. Specifically, the time translation operator is Û(0,1,0)=e^i Ĥ_F, where Ĥ_F is the effective Floquet Hamiltonian: Ĥ_F = ∑_k,α E_α(k) ĉ^†_k,αĉ_k,α. Note that the symmetry condition on the DRs is independent of whether the particles are bosons or fermions. The operators ĉ_k,α, ĉ^†_k,α either commute or anticommute, depending on the species of particles. A few comments are necessary. First, we ignore the interaction between particles in the model (<ref>). Constructing an interacting theory is significantly more challenging and falls beyond the scope of the current paper, as the mixing symmetry imposes constraints not only on the DR but also on the particle interactions. Second, for an exhaustive enumeration of quantum theories, we should also consider the possibility of Û(1,0,0) being an anti-unitary operator, which is discussed in the supplementary materials. Realization with local Ĥ(t).— We aim to realize a given Floquet Hamiltonian Ĥ_F, or equivalently the energy band E(k), in a cold atom system on an optical lattice. To achieve experimental feasibility, it is crucial that the time-periodic Hamiltonian Ĥ(t) possesses locality in real spacetime.
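Before turning to that construction, the A̅-invariance of the linear bands introduced above can be checked directly on a discrete k-grid. The following sketch is our own illustration (the choices w=2, b=1, and L=64 are assumptions), not part of the original analysis.

import numpy as np

w, b, L = 2, 1, 64                            # assumed example values
a, c = -b * w + 1, -b * w**2 + 2 * w          # P_2 family with the upper sign
A = np.array([[a, b], [c, -a]])
Abar = int(round(np.linalg.det(A))) * A       # det(A) = -1 in the P_2 class, so Abar = -A

print((A @ A == np.eye(2, dtype=int)).all())  # True: A is cyclic of order two
# Points of the folded band E(k) = w k on an L-point k-grid, in integer units of 2*pi/L.
band = {(n, (w * n) % L) for n in range(L)}
image = {tuple(int(v) for v in (Abar @ np.array(p)) % L) for p in band}
print(image == band)                          # True: the linear band is mapped onto itself

Working in integer units of 2π/L keeps the torus arithmetic exact and avoids floating-point comparisons of the folded coordinates.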
However, this poses a challenge due to the nonzero winding numbers of the DRs, which is a characteristic feature of mixing symmetry. Let us consider a specific example: the linear DRs E(k)=wk. Upon Fourier transformation, the Floquet Hamiltonian Ĥ_F contains infinitely-long-range hopping terms in real space, making them currently inaccessible using existing technology. In fact, if Ĥ(t) exhibits locality (i.e., short-range hopping) and simultaneously maintains space translational symmetry at each time t, then the DRs of Ĥ_F must possess zero winding <cit.>. Therefore, in order to have an A̅-invariant DR, Ĥ(t) must break instantaneous translational symmetry. To design Ĥ(t) for a given DR, we employ the recently developed QQFT protocol <cit.>, which gives rise to highly flexible Hamiltonian engineering so that the DRs become completely programmable and the long-range tunnelings in Ĥ_F become accessible to optical lattice experiments. In one period, denoted as [0,1), the time-dependent Hamiltonian is expressed as Ĥ(t)= ∑_p=1^D I_p(t) Ĥ_p . Here, I_p(t) is the indicator function, which is defined as 1 for t∈ [(p-1)/D,p/D) and zero elsewhere. The parameter D represents the depth of the Hamiltonian sequence. Each Ĥ_p contains only onsite potentials and nearest-neighbor hoppings. It can generally be written as Ĥ_p = ∑_x(g^(p)_xψ̂^†_xψ̂_x+1 + u^(p)_xψ̂^†_xψ̂_x + h.c.), where ψ̂^†_x and ψ̂_x are the creation and annihilation operators at site x, respectively, and g and u denote the hopping strength and onsite potentials, respectively. The unitary evolution over one period is given by e^-iĤ_F = e^- i Ĥ_D/D⋯ e^- i Ĥ_2/D e^- i Ĥ_1/D. For a lattice model with L sites, the depth D scales as L log L for large L <cit.>, in the QQFT protocol. As the system size increases, the effort required for simulation grows super-linearly. In recent developments in cold atom technology, spatially resolved control of the atom-confining potential has been achieved, enabling the realization of a sequence of local Hamiltonians like Eq. (<ref>). It has been shown that systems with sizes up to several tens of sites are accessible in present experiments <cit.>. Figure <ref>(a) illustrates the sequence of Ĥ_p operations that generate E(k)=wk on a chain of length L=8. Within each period, a total of 39 operations are performed, including 32 swaps between neighboring sites, 6 local Fourier transformations, and one evolution of the onsite potential. For more detailed information, please refer to the supplemental material. Mixing symmetry in the wave function.— To observe the mixing symmetry, one can utilize the fact that the mixing symmetry manifests itself in the particle propagator in real spacetime. For a particle initially located at position x=0 and time t=0, its wave function at a later time (multiples of the period) satisfies: Ψ_α(t,x) = Ψ_α'( t',x'), where (t',x')^T = A(t,x)^T and t,x are arbitrary integers. α' is the map of band-α under A̅. For α'=α (a singlet band in the 𝒫_2 class), Eq. (<ref>) imposes a strong constraint on the wave function. For α' ≠α, Eq. (<ref>) provides a connection between the wave functions in different bands. For a concrete example, let us see a linear band with E(k)=wk, in which the particle moves at a constant speed, just like a classical particle. The wave function is calculated to be Ψ_α(t,x)=δ_x,w t. In previous discussions, we already show that such a band exhibits the 𝒫_2 symmetry when the elements of A are a = -b w± 1 and c=-bw^2 ± 2 w. 
It is easy to verify that δ_x,w t does remain invariant as (t,x)^T transforms under A. Using the QQFT protocol, we perfectly repeat the evolution of wave function on a lattice of length L=2^l. Figure <ref>(b) displays the DR as w=2, and Fig. <ref>(c) displays the corresponding wave function in the QQFT simulation. The probability distribution of particles, i.e. |Ψ(t,x)|^2, obviously meets the same symmetry as shown in Eq. (<ref>). In experiments, instead of a single particle, one can use the Bose-Einstein condensate (BEC) for observation, and then, | Ψ(t,x)|^2 represents the density of atoms. The density distribution forms a symmetric pattern which remains invariant under A, which will be a smoking gun signal of mixing symmetry. Discussion.— This paper presents an innovative discovery of Floquet-Bloch band theories that exhibit a unique mixing symmetry, which intertwines the space and time coordinates. We provide a comprehensive classification of Floquet-Bloch bands based on the cyclic mixing transformations of finite order. Notably, only the groups 𝒫_2 and 𝒫_4 possess continuous representations, where the mixing symmetry imposes constraints on the dispersion relation of each band. Furthermore, we reveal that the winding number of the dispersion relation on the Floquet-Bloch-Brillouin torus must adhere to a symmetry condition. To achieve a non-zero winding number, it is essential for the time-dependent Hamiltonian of the theory to break the instantaneous translation symmetry, a feat attainable through the implementation of QQFT on an optical lattice. Remarkably, the mixing symmetry manifests in the atom density, which becomes experimentally measurable, demonstrating its impact on the spacetime distribution. This discovery unveils a broader symmetry family that has been previously ignored, as the mixing symmetry transcends pure spatial or temporal characteristics and instead establishes correlations between space and time. Its exploration enhances our comprehension of symmetry in crystals. Looking ahead, intriguing open questions include the investigation of noncyclic mixing symmetry and the exploration of mixing-symmetry protected topological states of matter. Acknowledgement.— The work is supported by National Natural Science Foundation of China (Grants Nos. 11835011, 11774315), and the Junior Associates program of the Abdus Salam International Center for Theoretical Physics. We thank X. Wang for useful discussions. 35 natexlab#1#1 bibnamefont#1#1 bibfnamefont#1#1 citenamefont#1#1 url<#>1 urlprefixURL Oka09 T. Oka and H. Aoki, Phys. Rev. B 79, 081406(R) (2009). Kitagawa10 T. Kitagawa, E. Berg, M. Rudner, and E. Demler, Phys. Rev. B 82, 235114 (2010). Lindner11 N. H. Lindner, G. Refael, and V. Galitski, Nat. Phys. 7, 490 (2011). Cooper19 N. R. Cooper, J. Dalibard, and I. B. Spielman, Rev. Mod. Phys. 91, 015005 (2019). Rudner20 M. S. Rudner and N. H. Lindner, Nat. Rev. Phys. 2, 229 (2020). Rechtsman13 M. C. Rechtsman, J. M. Zeuner, Y. Plotnik, Y. Lumer, S. Nolte, M. Segev, and A. Szameit, Nature 496, 196 (2013). Wang13 Y. H. Wang, H. Steinberg, P. Jarillo-Herrero, and N. Gedik, Science 342, 453 (2013). Jotzu14 G. Jotzu, M. Messer, R. Desbuquois, M. Lebrat, T. Uehlinger, D. Greif, and T. Esslinger, Nature 515, 237 (2014). Sorensen05 A. S. Sørensen, E. Demler, and M. D. Lukin. Phys. Rev. Lett. 94, 086803 (2005). Eckardt05 A. Eckardt, C. Weiss, and M. Holthaus, Phys. Rev. Lett. 95, 260404 (2005). Goldman14 N. Goldman and J. Dalibard, Phys. Rev. X 4, 031027 (2014). Oka19 T. Oka and S. 
Kitamura, Annu. Rev. Condens. Matter Phys. 10, 387 (2019). Else16 D. V. Else, B. Bauer, and C. Nayak, Phys. Rev. Lett. 117, 090402 (2016). Khemani16 V. Khemani, A. Lazarides, R. Moessner, and S. L. Sondhi, Phys. Rev. Lett. 116, 250401 (2016). Yao17 N. Y. Yao, A. C. Potter, I.-D. Potirniche, and A. Vishwanath, Phys. Rev. Lett. 118, 030401 (2017). Ponte15 P. Ponte, Z. Papić, F. Huveneers, and D. A. Abanin, Phys. Rev. Lett. 114, 140401 (2015). Lazarides15 A. Lazarides, A. Das, and R. Moessner, Phys. Rev. Lett. 115, 030402 (2015). Bordia17 P. Bordia, H. Lüschen, U. Schneider, M. Knap, and I. Bloch, Nat. Phys. 13, 460 (2017). Rudner13 M. S. Rudner, N. H. Lindner, E. Berg, and M. Levin, Phys. Rev. X 3, 031005 (2013). Ashcroft76 N. W. Ashcroft and N. D. Mermin, Solid state physics (Tomson Learning Inc., London, UK, 1976). Altland97 A. Altland and M. R. Zirnbauer, Phys. Rev. B 55, 1142 (1997). Schnyder08 A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. W. Ludwig, Phys. Rev. B 78, 195125 (2008). Kitaev09 A. Kitaev, AIP Conf. Proc. 1134, 22 (2009). Nathan15 F. Nathan and M. S Rudner, New J. Phys. 17, 125014 (2015). Potter16 A. C. Potter, T. Morimoto, and A. Vishwanath, Phys. Rev. X 6, 041001 (2016). Else16b D. V. Else and C. Nayak, Phys. Rev. B 93, 201103(R) (2016). Roy17 R. Roy and F. Harper, Phys. Rev. B 96, 155118 (2017). Bradlyn17 B. Bradlyn, L. Elcoro, J. Cano, M. G. Vergniory, Z. Wang, C. Felser, M. I. Aroyo, and B. A. Bernevig, Nature 547, 298 (2017). Po17 H. C. Po, A. Vishwanath, and H. Watanabe, Nat. Commun. 8, 50 (2017). Kruthoff17 J. Kruthoff, J. de Boer, J. van Wezel, C. L. Kane, and R.-J. Slager, Phys. Rev. X 7, 041069 (2017). Morimoto17 T. Morimoto, H. C. Po, and A. Vishwanath, Phys. Rev. B 95, 195155 (2017). Xu18 S. Xu and C. Wu, Phys. Rev. Lett. 120, 096401 (2018). Peng19 Y. Peng and G. Refael, Phys. Rev. Lett. 123, 016806 (2019). Mochizuki20 K. Mochizuki, T. Bessho, M. Sato, and H. Obuse, Phys. Rev. B 102, 035418 (2020). Wang18 P. Wang, New J. Phys. 20, 023042 (2018). Wang20 X. Li, J. Chai, H. Zhu, and P. Wang, J. Phys.: Condens. Matter 32, 145402 (2020). Wang21 P. Wang, J. Phys. A: Math. Theor. 54, 115003 (2021). Wang22 P. Wang, Z. Huang, X. Qiu, and X. Li, Phys. Rev. B 106, 134313 (2022). 2016_Weiss_Science Y. Wang, A. Kumar, T.-Y. Wu, and D. S. Weiss, Science 352, 1562 (2016). browaeys2020many A. Browaeys and T. Lahaye, Nat. Phys. 16, 132 (2020). OurSI See Supplementary Materials. Weinberg S. Weinberg, The Quantum Theory of Fields (Cambridge University Press, Cambridge, England, 1995). Qiu20 X. Qiu, J. Zou, X. Qi, and X. Li, npj Quantum Inf. 6, 87 (2020). Supplementary Materials § MIXING GROUPS According to definition, the mixing symmetry group has two important subgroups. One is the cyclic group that contains the mixing transformations, i.e., 𝒜={1,A,A^2,⋯, A^M-1} with M being the order. The other consists of the translations, reading 𝒯= {(m,n) | m,n ∈ℤ.}. Usually, the group that has 𝒜 and 𝒯 as subgroups is not unique. In this paper, we only consider the symmorphic group, which is the direct product of 𝒜 and 𝒯. The group element is written as P(j,m,n), which represents the mixing transformation A^j followed by the translation of vector (m,n). It is easy to see that, 𝒫={ P(j,m,n)} is a group if and only if the spacetime lattice 𝒯 keeps invariant under A. Because A is invertible (A^-1=A^M-1), 𝒯 keeps invariant under A if and only if m' and n', defined by (m',n')^T = A(m,n)^T, are integers for arbitrary m,n∈ℤ. 
Furthermore, this condition can be simplified into A(1,0)^T and A(0,1)^T being integer pairs. We generally express the matrix A and its inverse as A=([ a_11 a_12; a_21 a_22 ]) and A^-1 = 1/det(A)([ a_22 -a_12; -a_21 a_11 ]), respectively. Then, the condition that A(1,0)^T and A(0,1)^T are integer pairs translates into a_11, a_12, a_21, a_22 being all integers. But A^j(1,0)^T and A^j(0,1)^T must also be integer pairs for j=2,3,⋯, M-1. The case of j=M-1, or equivalently j=-1, is especially important, from which we derive that a_11/det(A), a_12/det(A), a_21/det(A), a_22/det(A) are integers. For a_ij and a_ij/det(A) to be both integers, we require det(A) = ± 1. To see it, one can use proof by contradiction (the assumption det(A)=± 2, ± 3, ⋯ leads to a contradiction). To find all the cyclic As, we study the eigenvalues of A, i.e. a pair of complex numbers expressed as λ_± = (a_11+a_22)/2 ±√( ((a_11+a_22)/2 )^2- det(A) ). The cyclic condition (A^M=1) indicates | λ_±| ≡ 1, which is possible only if a_11+a_22 = 0, ± 1, ± 2. When a_11+a_22 = 0 and det(A)=-1, a straightforward calculation shows A^2=1. Such As can be written in a more compact form as A=([ a b; c -a ]), where a,b,c are arbitrary integers satisfying bc = -a^2+1. When a_11+a_22 = 0 and det(A)=+1, we find A^4=1, and A has the same expression as Eq. (<ref>) but with bc = -a^2-1. When a_11+a_22 = ± 1, only det(A)=1 is consistent with | λ_±| ≡ 1 but det(A)=-1 is not, and we find A^6=1 or A^3=1, respectively. When a_11+a_22 = ± 2, the calculation shows that there does not exist a finite M so that A^M=1, except for A=± 1, which is trivial and is therefore ignored. To summarize, the values of M are 2,3,4 or 6, and the corresponding symmetry groups are denoted by 𝒫_2,𝒫_3, 𝒫_4 or 𝒫_6, respectively. For a given M, 𝒫_M is a class of groups, with different groups having different A. In 𝒫_2, A is the matrix (<ref>) with bc=-a^2+1. In 𝒫_4, A is the matrix (<ref>) with bc=-a^2-1. In 𝒫_3, A is the matrix (<ref>) with the components being arbitrary integers that satisfy a_11+a_22 = - 1 and a_11a_22-a_12a_21=1. In 𝒫_6, A is the matrix (<ref>) with the components being arbitrary integers that satisfy a_11+a_22 = + 1 and a_11a_22-a_12a_21=1. § UNITARY AND ANTI-UNITARY REPRESENTATIONS We use |k,α⟩ to denote the single-particle eigenstate of the translation operators Û(0,m,n) with m,n∈ℤ. According to the Floquet-Bloch band theory, without loss of generality, the corresponding eigenvalue can be expressed as e^imE_α(k)-ikn, where k and E_α(k) are the quasi-momentum and quasi-energy, respectively, and α is the band index. Let us calculate Û(1,0,0)|k,α⟩. From the definition of P(j,m,n), it is easy to see that P(0,m',n')P(1,0,0) = P(1,0,0)P(0,m,n) with (m',n')^T= A(m,n)^T. Û(j,m,n) is the representation of P(j,m,n), and hence they satisfy the same multiplication rule. We obtain Û(0,m',n')Û(1,0,0) |k,α⟩= Û(1,0,0)Û(0,m,n) |k,α⟩ = e^imE_α(k)-iknÛ(1,0,0) |k,α⟩. Equation (<ref>) tells us that Û(1,0,0) |k,α⟩ is the eigenstate of Û(0,m',n') with the eigenvalue being e^imE_α(k)-ikn. But m' and n' can be arbitrary integers, because (m',n')^T= A(m,n)^T and A is invertible. Û(1,0,0) |k,α⟩ is then the common eigenstate of the translation operators, denoted by |k',α'⟩ without loss of generality. Using the notations k' and α', we calculate the left-hand side of Eq. (<ref>) and then obtain e^im'E_α '(k')-ik'n' = e^imE_α(k)-ikn. Using the fact that det(A)=± 1 and the expression of A^-1 in Eq. (<ref>), we quickly find ([ k'; E_α'(k') ]) = A̅([ k; E_α(k) ]) (mod 2π) with A̅= det(A) · A = ± A.
In the above derivation, we assume that Û(1,0,0), i.e. the representation of A, is a unitary operator. To make our discussion complete, we also need to consider the possibility of Û(1,0,0) being an anti-unitary operator. In this case, the multiplication rule keeps the same, but Eq. (<ref>) changes into Û(0,m',n')Û(1,0,0) |k,α⟩= e^-imE_α(k)+iknÛ(1,0,0) |k,α⟩. Due to the reason mentioned above, we still assume Û(1,0,0) |k,α⟩=|k',α'⟩. Then Eq. (<ref>) becomes e^im'E_α '(k')-ik'n' = e^-imE_α(k)+ikn. Equation (<ref>) keeps the same but with A̅= - det(A) · A. Comparing the anti-unitary representation with the unitary representation, we find that the dispersion relation satisfies the same equation with only the sign of A̅ changing. On the other hand, if we do the change A→ -A in the unitary representation, the sign of A̅ also changes, since det(A)= det(-A). Moreover, if A is a cyclic matrix, so is -A. Therefore, for each anti-unitary representation, there exists a unitary representation that has exactly the same A̅, and then the dispersion relation, i.e. the solution of Eq. (<ref>), is also the same. The consideration of anti-unitary representation leads to nothing new in the dispersion relation. § TOPOLOGY OF DISPERSION RELATION We assume that the Floquet-Bloch band is continuous, or in other words, E_α(k) is a continuous function of k everywhere in the Floquet-Bloch-Brillouin zone (FBBZ). The dispersion relation (DR) of each band is then a loop on the FBBZ torus. The transformation A̅ (mod 2π) defined by Eq. (<ref>) maps a point in the FBBZ to another point in the FBBZ. Furthermore, it is a one-to-one map. Otherwise, suppose (k_1,E_1)≠(k_2,E_2) are mapped into the same (k',E'), then we have A̅(k_1-k_2,E_1-E_2)^T = 2π( m,n )^T with m,n being some integers. But the matrix A̅=± A or its inverse always map an integer pair into another integer pair, and then (k_1-k_2,E_1-E_2) = 2π( m',n' ) with m',n' being integers. This is impossible except for k_1 = k_2 and E_1=E_2, because (k_1,E_1) and (k_2,E_2) are both in the FBBZ. The one-to-one map A̅ is by definition continuous, so is its inverse. A̅ is then a homeomorphism. As a consequence, an arbitrary loop on the FBBZ torus must be mapped by A̅ into another loop. In the main text, we show that the spacetime crystal has the mixing symmetry if and only if the single-particle DRs are A̅-invariant. And if the DRs are A̅-invariant, then the DR of a band α, i.e. a loop, must be mapped into the DR of another band α' (it is possible that α=α'). Note that, from the pure mathematical point of view, it is also possible that the image of a DR loop is a non-DR loop (e.g., a loop on which k keeps a constant but E travels around the torus once). But in that case, the DRs are not A̅-invariant, and then the corresponding spacetime crystal has no mixing symmetry, which is uninteresting to us. Next, we study the topologies of the DR loops of α and α'. Using the knowledge of the fundamental group of torus, we describe the topology of a loop by two integers, which are the numbers of times the loop winds around the torus in the positive k- and E-directions, respectively. A DR loop winds around the torus once and only once in the k-direction, otherwise, there would exist some k∈ [-π,π) at which E(k) has no definition or has multiple values, which contradicts with the fact that E(k) is a function of k defined in the domain [-π,π). 
Therefore, the topology of the α-band DR is given by the pair (1,w_α), in which w_α is the number of times the loop winds in the positive-E direction as it winds once in the positive-k direction. An easy way of calculating w_α is by depicting E_α(k) in the extended quasi-energy zone, in which the range of E is extended to (-∞,∞) instead of being limited in [-π,π). In the extended-zone scheme, we can force E_α(k) to be continuous in the absence of the modulo operation, E_α(k) then becomes a curve in the k-E plane with k∈ [-π,π) and E∈(-∞,∞). The continuity of E_α(k) (mod 2π) requires (E_α(π) - E_α(-π)) being an integer times of 2π, and this integer is exactly w_α: E_α(π) - E_α(-π) = 2π w_α. Now, we study the image of {(k,E_α(k))} under the matrix A̅, in the extended-zone scheme. Without the modulo operation, A̅ is an invertible one-to-one map in the k-E plane, moreover, it is a linear map. Therefore, when (k,E_α(k)) starts from the left end (-π,E_α(-π)), and goes towards the right end (π,E_α(π)), its image (k',E_α'(k')) draws a curve in the plane. The end points of the image curve are (k'_0,E_α'(k'_0))^T=A̅(-π,E_α(-π))^T and (k'_1,E_α'(k'_1))^T = A̅(π,E_α(π))^T, respectively. Then, the winding number of α' evaluates w_α' =(E_α'(k'_1) - E_α'(k'_0))/( k'_1- k'_0). An important property of the image curve is that | k'_1-k'_0| must be 2π. The proof is as follows. First, the range of k' must be integer times of 2π, because the image is a complete DR loop (of band α') after the modulo operation. Second, the range of k' cannot be 2π n with n>1. Otherwise, as (k,E_α(k)) travels around the α-DR loop once, (k',E_α'(k')) already travels around the α'-DR loop n times, which contradicts with the fact that A̅ (mod 2π) is a one-to-one map on the torus. Based on the above arguments, we derive ±([ 1; w_α' ]) = A̅([ 1; w_α ]), where "±" corresponds to k'_1-k'_0 = ± 2π, respectively. § MIXING SYMMETRIES OF LINEAR E(K) We determine the mixing symmetries of a linear DR, given by E(k)=wk with w=± 1,± 2,⋯, by making the following observation. If the topology condition ±(1,w')^T= A̅(1,w)^T is satisfied, we can multiply both sides by k to obtain (k',E')^T = A̅(k,E)^T, where k'=± k and E'=w' k'. Therefore, the topology condition is sufficient for one linear band to be mapped by A̅ into another linear band. Let us first consider the 𝒫_2 symmetry class. Since w=w' under the map A̅ (see Tab. I of the main text), a linear E(k) is always mapped into itself and remains a singlet band in the 𝒫_2 class. Using the equation ±(1,w)^T= A̅(1,w)^T and the expression of A̅, we immediately find a = -b w± 1 and c=-bw^2 ± 2 w. For a given w, there exist an infinite number of mixing matrices (with different b) in the 𝒫_2 class: A= ([ -bw ± 1 b; -bw^2 ± 2w bw∓ 1 ]). The linear band E(k)=wk always exhibits 𝒫_2 symmetries. Next, we consider the 𝒫_4 symmetry class. Since E(k)=wk is an odd function of k, the linear band must be one branch of a doublet (Tab. I of the main text). Suppose the DR of its paired band is E'(k') =w'k'. Using ±(1,w')^T= A̅(1,w)^T and the expression of A̅, we find a=-bw± 1, c=-bw^2± 2w -2/b, and w'=w∓ 2/b. c and w' must be integers, therefore, b can only take the values ± 1, ± 2. For a given w, there exist 8 mixing matrices in the 𝒫_4 symmetry class: A= ([ -w ± 1 1; -w^2 ± 2 w - 2 w∓ 1 ]), ([ w ± 1 -1; w^2 ± 2 w + 2 -w∓ 1 ]), ([ -2w ± 1 2; -2w^2 ± 2 w - 1 2w∓ 1 ]), ([ 2w ± 1 -2; 2w^2 ± 2 w +1 -2w∓ 1 ]). The corresponding w' is given by w'=w∓ 2, w± 2, w∓ 1, w± 1, respectively. 
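The explicit families above are easy to verify numerically. The sketch below is our own check (the ranges of w and b are arbitrary illustrative choices): it confirms that the 𝒫_2 matrices square to the identity, have det A=-1 and fix the topology vector (1,w)^T up to a sign, and that the 𝒫_4 matrices with b=±1,±2 satisfy A^4=1, det A=+1 and map (1,w)^T to ±(1,w')^T with the partner winding numbers stated above.

import numpy as np

def order(A, max_m=8):
    # Smallest positive M with A^M = 1 (None if A is not cyclic up to max_m).
    P = np.eye(2, dtype=int)
    for m in range(1, max_m + 1):
        P = P @ A
        if (P == np.eye(2, dtype=int)).all():
            return m
    return None

def partner_winding(A, w):
    # Solve +-(1, w')^T = Abar (1, w)^T for w', with Abar = det(A) * A.
    v = int(round(np.linalg.det(A))) * (A @ np.array([1, w]))
    for s in (1, -1):
        if s * v[0] == 1:
            return int(s * v[1])
    return None

for w in (-2, -1, 1, 2, 3):
    for s in (1, -1):                          # the +- sign in a = -b*w +- 1
        for b in range(-3, 4):                 # P_2 family: singlet bands, w' = w
            a, c = -b * w + s, -b * w**2 + 2 * s * w
            A = np.array([[a, b], [c, -a]])
            assert order(A) == 2 and round(np.linalg.det(A)) == -1
            assert partner_winding(A, w) == w
        for b in (1, -1, 2, -2):               # P_4 family: doublets, w' = w -+ 2/b
            a, c = -b * w + s, -b * w**2 + 2 * s * w - 2 // b
            A = np.array([[a, b], [c, -a]])
            assert order(A) == 4 and round(np.linalg.det(A)) == 1
            assert partner_winding(A, w) == w - s * (2 // b)

print("P_2 and P_4 matrix families verified")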
The above analysis exhausts all the mixing symmetries of a linear band. § CONSTRUCTION OF Ĥ(T) Our target is to simulate Ĥ_F = ∑_k E(k) ĉ^†_kĉ_k that has mixing symmetry by a periodic Hamiltonian Ĥ(t) with locality. First, we will prove that, if Ĥ(t) has both locality and space translation symmetry at each t, then the winding number of E(k) is zero. For simplicity, we consider a lattice model, in which a set of sites are spatially located at the coordinates j= 0, ± 1, ± 2, ⋯, respectively. In the condensed matter community, lattice models are widely employed in the study of particles moving in a periodic potential, because it is more difficult to directly deal with the differential operators in the continuous space. Without loss of generality, we define Ĥ(t) =∑_j∑_Δ j=-R^R f(Δ j,t) ψ̂^†_jψ̂_j+Δ j , where f(Δ j,t)=f^*(-Δ j,t) is the hopping strength, and ψ̂^† and ψ̂ are the onsite creation and annihilation operators, respectively. The assumption that Ĥ(t) has space translation symmetry at each moment is reflected in the fact that f(Δ j,t) is independent of j. The locality of Ĥ(t) manifests itself as the existence of a distance cutoff for hopping. The largest distance over which there are nonzero hopping terms is set to be R. After a Fourier transform, Eq. (<ref>) changes into Ĥ(t) =∑_k E(k,t) ψ̂^†_kψ̂_k, where E(k,t) = ∑_Δ j=-R^R f (Δ j,t)e^ikΔ j. E(k,t) is a sum of a finite number of terms, with each term being a trigonometric function of k. If we depict these trigonometric functions on the FBBZ torus, they all have zero winding number, and then their sum, i.e. E(k,t), must also have zero winding number. Because Ĥ(t) at different times commute with each other, the Floquet Hamiltonian Ĥ_F can be obtained by integrating Ĥ(t) over one period. Then we obtain E(k) = ∫^T_0 dt E(k,t) /T, where T=1 is the period. Since E(k,t) at each t has zero winding, E(k) must also have zero winding. The DRs of a spacetime crystal with mixing symmetry usually have nonzero winding. Due to the above arguments, if we require Ĥ(t) to be local and want to simulate an Ĥ_F with nonzero-winding DRs, we need to break the instantaneous translation symmetry in Ĥ(t). In previous theoretical or experimental studies, people often focus on atom-confining potentials that preserve the instantaneous translation symmetry, which explains why the mixing symmetry has not been observed by accident. The recently developed digital-micromirror-device and sub-wavelength techniques have realized programmable instantaneous-translation-symmetry-breaking potentials in cold atomic gases. This provides the foundation for experimentally realizing Ĥ(t). Because we already know the DRs of Ĥ_F, the quadratic quantum Fourier transform (QQFT) protocol is especially useful for designing Ĥ(t) <cit.>. Here we briefly review the idea of QQFT. The Floquet Hamiltonian is defined by the fact that e^-iĤ_F is the evolution operator of the quantum state over one time period. The QQFT protocol gives a sequence of local Hamiltonians, denoted by Ĥ_1,Ĥ_2,⋯,Ĥ_D, which are consecutively engineered so that the evolution operator can be factorized as e^-iĤ_F = e^-iĤ_D/D⋯ e^-iĤ_2/D e^-iĤ_1/D, where D is the depth of the Hamiltonian sequence and 1/D is the lifetime of each Hamiltonian. To obtain the Ĥ_ps, we utilize the fact that Ĥ_F=∑_k E(k) ĉ^†_kĉ_k is quadratic.
On a lattice of size L, we perform the Fourier transform ĉ^†_k=∑_je^ikj/√(L)ψ̂^†_j with ψ̂^†_j being the onsite creation operator, and then reexpress the Floquet Hamiltonian as Ĥ_F= Ψ̂^†ℋΨ̂, where Ψ̂ is the array of ψ̂_js and ℋ is a Hermitian matrix with the elements being ℋ_j,j' = ∑_k E(k)e^ik(j-j')/L . To proceed, we exploit a formula for quadratic-exponent operators, which can be easily derived from the Baker-Campbell-Hausdorff formula. For arbitrary Hermitian matrices ℋ_1, ℋ_2, ⋯, ℋ_d and a single Hermitian matrix ℋ that satisfy e^-i ℋ_d⋯ e^-i ℋ_2 e^-i ℋ_1 = e^-i ℋ, we always have e^-i Ψ̂^†ℋ_d Ψ̂⋯ e^-i Ψ̂^†ℋ_2 Ψ̂ e^-i Ψ̂^†ℋ_1 Ψ̂ = e^-i Ψ̂^†ℋΨ̂. Equation (<ref>) simply says that the factorization of an evolution operator with a quadratic Hamiltonian (such as e^-iĤ_F) is equivalent to the factorization of the corresponding unitary matrix e^-iℋ. To make Ĥ_p = Ψ̂^†ℋ_p Ψ̂ a local Hamiltonian, we need the L-by-L matrix ℋ_p to be local. In the QQFT protocol, each ℋ_p contains only the diagonal elements (onsite potentials) and the off-diagonal elements ℋ_j,j+1 (hopping between nearest-neighbor sites). Observing Eq. (<ref>), we immediately find the next factorization: e^{-iℋ} = e^{-i e^{iℱ}ℰ e^{-iℱ}} = e^{iℱ} e^{-iℰ} e^{-iℱ}, where ℰ is a diagonal matrix with the diagonal elements being E(k), and ℱ is defined by (e^i ℱ)_j,j'=1/√(L) e^i2π jj'/L. In Eq. (<ref>), ℰ is already diagonal and thus satisfies the locality condition. Furthermore, e^i ℱ is recognized to be the Fourier transformation, which can then be factorized into a sequence of local unitary matrices by using the algorithm of the quantum Fourier transform (see Ref. [Wang22] for the details). The factorization of e^i ℱ depends only upon the value of L. The analytical expressions of the ℋ_ps have been obtained when L is an integer power of 2, i.e., L=2^l. The sequence depth of e^i ℱ scales as L ln L. As an example, we give the sequence of Hamiltonians that generate the required dispersion relation on a one-dimensional lattice of length L=2^3=8. For simplicity, we label the lattice sites as j=0, 1, ⋯, 7. In this case, the unitary evolution over a single period can be factorized into e^-iℋ= R^(2) A^(2) R^(2)†R^(1) A^(1) R^(2)† A^(0)R^(2)† e^-i ℰ R^(2) A^(0)†R^(2) A^(1)†R^(1)† R^(2) A^(2)†R^(2)†. Here, R^(1) and R^(2) are permutation matrices, which are realized by using a sequence of swaps, say R^(1) = S^(1,2) S^(5,6) and R^(2) = S^(3,4) S^(4,5) S^(5,6) S^(2,3) S^(3,4) S^(1,2), respectively. S^(j,j+1) is the swap (the Pauli matrix σ_x) between two neighboring sites j and j+1. For the realization of S^(j,j+1), the corresponding Hamiltonian is h_j,j+1=h_j+1,j=-h_j,j=-h_j+1,j+1 = π/2 and h_i,i'=0 for i,i'≠ j,j+1 (it is easy to verify S^(j,j+1)=e^-ih). A^(q) with q=0,1,2 is the local Fourier matrix, which couples sites 2j and 2j+1 for j=0,1,2,3. Its nonzero matrix elements are A^(q)_2j,2j = 1/√(2), A^(q)_2j,2j+1= (1/√(2)) e^i2π(j % 2^q)/2^q+1, A^(q)_2j+1,2j = 1/√(2), A^(q)_2j+1,2j+1= -(1/√(2)) e^i2π(j % 2^q)/2^q+1, where % denotes the remainder. The corresponding Hamiltonian, i.e. i ln [A^(q)], has only the couplings between two nearest-neighbor sites. Finally, the Hamiltonian ℰ in Eq. (<ref>) is made of the on-site potentials. For a linear dispersion E(k)=w k, the elements of ℰ can be written as ℰ_i,j=δ_i,j (2π/L) j w. One can also use the modulo 2π operation to force ℰ_i,j to be in the interval [-π,π).
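The two elementary identities used in this construction — the conjugation e^{-iℋ}=e^{iℱ}e^{-iℰ}e^{-iℱ} and the swap gate S^(j,j+1)=e^{-ih} — can be verified in a few lines. The snippet below is a bare numerical sketch of ours (L=8 and the linear dispersion with w=2 are assumed example values); it is not the experimental pulse sequence itself.

import numpy as np
from scipy.linalg import expm

L, w = 8, 2                                            # assumed example values
ks = 2 * np.pi * np.arange(L) / L                      # quasi-momenta of the L-site ring
E = np.angle(np.exp(1j * w * ks))                      # E(k) = w k folded into (-pi, pi]
F = np.exp(1j * np.outer(np.arange(L), ks)) / np.sqrt(L)   # the Fourier matrix e^{i F_op}

H = (F * E) @ F.conj().T                               # H_{j,j'} = (1/L) sum_k E(k) e^{i k (j-j')}
print(np.allclose(expm(-1j * H), F @ np.diag(np.exp(-1j * E)) @ F.conj().T))   # True

# Swap gate: e^{-i h} with h = (pi/2) [[-1, 1], [1, -1]] equals the two-site swap.
h = (np.pi / 2) * np.array([[-1.0, 1.0], [1.0, -1.0]])
print(np.allclose(expm(-1j * h), [[0, 1], [1, 0]]))    # True

The remaining work of the protocol is to split the dense Fourier matrix into the local permutations R^(p) and two-site blocks A^(q) quoted above.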
In the construction of the Hamiltonian sequence, we notice that multiple swaps that are commutative with each other can be combined into one without breaking the locality of Hamiltonian. For example, S^(1,2) and S^(5,6) in R^(1) can be realized by using a single Hamiltonian that has the coupling between site-1 and site-2 and at the same time, also the coupling between site-5 and site-6. Such a consideration reduces the depth of the Hamiltonian sequence. In the case of L=8, we find the depth to be D=39. The sequence consists of 32 swaps, six A^(q) and one e^-i ℰ. § MIXING SYMMETRY OF THE SINGLE-PARTICLE PROPAGATOR In the main text, we derived from the multiplication rule that |k',α'⟩=Û(1,0,0)|k,α⟩, which illustrates the transformation of a single-particle state under Û(1,0,0). In the language of many-body physics, it is more convenient to define Û(1,0,0) based on its action on the creation or annihilation operators. This can be expressed as ĉ^†_k'α' = Û(1,0,0) ĉ^†_kαÛ^† (1,0,0). The field operators in real space are obtained through Fourier transformation of ĉ^†_kα, given by ψ̂^†_xα = ∑_k e^-ikx/√(L)ĉ^†_kα, where L is the system size. The time evolution of field operators is defined as ψ̂^†_x α(t) = e^iĤ_F tψ̂^†_x α e^-iĤ_F t for integer t (integer multiples of the period). Utilizing Eq. (<ref>), we can derive the following expression: Û(1,0,0) ψ̂^†_x α(t) Û^† (1,0,0) = ψ̂^†_x' α'(t'), where (t',x')^T = A(t,x)^T, and t,x,t',x' are all integers. The transformation Û(1,0,0) induces changes in both the spatial and temporal coordinates of the field operators, which are determined by the matrix A. The propagator of particles in band-α is defined as G_α(t_1 x_1,t_2x_2)=-i θ(t_1-t_2) ⟨[ψ̂_x_1α(t_1), ψ̂^†_x_2α(t_2)]_±⟩, where the plus (minus) sign corresponds to fermions (bosons), and θ represents the Heaviside function. The coordinates t_1,x_1,t_2,x_2 are all integers. The angle brackets ⟨⟩ denote the expectation value with respect to the vacuum state. Due to discrete translational symmetry, G_α depends only on the difference Δ t=t_1-t_2 and Δ x=x_1-x_2 for integer coordinates. Using Eq. (<ref>), we immediately find: G_α(Δ t,Δ x) = G_α'(Δ t',Δ x'), where (Δ t',Δ x')^T=A (Δ t,Δ x)^T. This equation explains how the mixing symmetry manifests in the particle propagator. For α'=α (a singlet band in the 𝒫_2 class), the propagator must remain invariant after a linear operation A on the spacetime coordinates, imposing a strong constraint on the propagator. For α'≠α, the propagator of band-α after the coordinate transformation becomes the propagator of band-α'. Thus, Eq. (<ref>) establishes a connection between propagators of different bands. In experiments, what can be measured is the wave function, or more precisely, the absolute magnitude of the wave function. The wave function is directly linked to the propagator. If we initially locate a particle at position x=0 at time t=0, its wave function at a later time satisfies, according to Eq. (<ref>) and (<ref>), Ψ_α(t,x) = Ψ_α'( t',x'), where ( t',x')^T = A ( t,x)^T and t,x are arbitrary integers. An alternative way to prove this result is by using Ψ_α (t,x) = ∑_k e^ikx-itE_α(k)/L and Eq. (<ref>).
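Finally, the closed-form propagator quoted above lends itself to a direct numerical illustration. In the sketch below (ours; the ring size L=64, the winding w=2 and the 𝒫_2 matrix with b=1 are assumptions made for illustration), Ψ(t,x)=(1/L)∑_k e^{ikx-itE(k)} is evaluated for the linear band E(k)=wk, and |Ψ(t,x)|=|Ψ(t',x')| is confirmed for (t',x')^T=A(t,x)^T.

import numpy as np

L, w, b = 64, 2, 1                                # assumed example values
a, c = -b * w + 1, -b * w**2 + 2 * w              # P_2 matrix leaving E(k) = w k invariant
A = np.array([[a, b], [c, -a]])
ks = 2 * np.pi * np.arange(L) / L

def psi(t, x):
    # Psi(t, x) = (1/L) sum_k exp(i k x - i t E(k)) for the linear band E(k) = w k
    return np.sum(np.exp(1j * ks * x - 1j * t * w * ks)) / L

print(abs(psi(3, 6)), abs(psi(3, 7)))             # 1.0 on the light cone x = w t, ~0 off it

ok = all(
    np.isclose(abs(psi(t, x)), abs(psi(*(A @ np.array([t, x])))))
    for t in range(6) for x in range(L)
)
print(ok)                                         # True: |Psi| is invariant under A

The delta-function ridge x=wt lies along an eigenvector of A, so the localized pattern is reproduced at the transformed coordinates, which is the real-space signature of the mixing symmetry discussed in the main text.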